title | content | commands | url
---|---|---|---|
Chapter 54. Configuring Oracle WebLogic Server for KIE Server | Chapter 54. Configuring Oracle WebLogic Server for KIE Server Before you deploy KIE Server with Oracle WebLogic Server, you must configure system properties, security settings, JMS requirements, and other properties on Oracle WebLogic Server. These configurations promote an optimal integration with KIE Server. Prerequisites Oracle WebLogic Server is installed and running. You are logged in to the WebLogic Administration Console. 54.1. Configuring the KIE Server group and users You must assign users to a kie-server group in the WebLogic Administration Console to enable the container-managed authentication mechanisms in Oracle WebLogic Server. Procedure In the WebLogic Administration Console, click Security Realms . Choose your desired security realm or click New to create a new security realm. Navigate to Users and Groups Groups New and create the kie-server group. Navigate to Users New and create a new user. Enter a user, such as server-user , and a password for this new user and click OK . Important Make sure that the selected user name does not conflict with any known title of a role or a group. For example, if there is a role called kie-server , then do not create a user with the user name kie-server . Click the newly created user, then return to the Groups tab. Use the selection tool to move the kie-server group from the Available field to the Chosen field, and click Save . 54.2. Configuring Java Message Service (JMS) The Java Message Service (JMS) is a Java API that KIE Server uses to exchange messages with other application servers such as Oracle WebLogic Server and IBM WebSphere Application Server. You must configure your application server to send and receive JMS messages through KIE Server to ensure collaboration between the two servers. 54.2.1. Create a JMS server Create a JMS server to use JMS with KIE Server and Oracle WebLogic Server. Procedure In the WebLogic Administration Console, navigate to Services Messaging JMS Servers . Click New to create a new JMS server. Enter a name for your JMS server and click . Select the target server chosen for the KIE Server deployment. Click Finish . 54.2.2. Create a JMS module You must create a JMS module to store your JMS resources, such as connection factories and queues. Prerequisites You have created a JMS server. Procedure In the WebLogic Administration Console, navigate to Services Messaging JMS Modules . Click New to create a module. Enter a module name and click . Select the target server chosen for the KIE Server deployment and click Finish . Click the newly created module name and then click Subdeployments . Click New to create a subdeployment for your module. Give your subdeployment a name and click . Select the check box to choose the previously created JMS server. Click Finish to complete the subdeployment configuration. 54.2.3. Create JMS connection factories To enable messaging with KIE Server, you must create certain JMS connection factories for sending and receiving messages. Prerequisites You have created a JMS server. You have created a JMS module. Procedure In the WebLogic Administration Console, navigate to Services Messaging JMS Modules to see a list of JMS modules. Select your previously created module and click New to create a new JMS resource. Select Connection Factory and click . 
For each of the required connection factories listed in the following table, enter the name of the connection factory (for example, KIE.SERVER.REQUEST ) and the JNDI name (for example, jms/cf/KIE.SERVER.REQUEST ) and click . The connection factory automatically selects the servers assigned to the JMS Module as the default. Table 54.1. Required JMS connection factories for KIE Server Name Default value Used for KIE.SERVER.REQUEST jms/cf/KIE.SERVER.REQUEST Sending all requests to KIE Server KIE.SERVER.RESPONSE jms/cf/KIE.SERVER.RESPONSE Receiving all responses produced by KIE Server Click Finish to add the connection factory, and repeat for each required factory. 54.2.4. Create JMS queues JMS queues are the destination end points for point-to-point messaging. You must create certain JMS queues to enable JMS messaging with KIE Server. Prerequisites You have created a JMS server. You have created a JMS module. Procedure In the WebLogic Administration Console, navigate to Services Messaging JMS Modules to see the list of JMS modules. Select your previously created module, then click New to create a new JMS resource. Select Queue and click . For each of the required queues listed in the following table, enter the name of the queue (for example, KIE.SERVER.REQUEST ) and the JNDI name (for example, jms/KIE.SERVER.REQUEST ) and then click . Table 54.2. Required JMS queues for KIE Server Name Default value Used for KIE.SERVER.REQUEST jms/KIE.SERVER.REQUEST Sending all requests to KIE Server KIE.SERVER.RESPONSE jms/KIE.SERVER.RESPONSE Receiving all responses produced by KIE Server Choose the JMS module subdeployment that connects to the JMS server. Click Finish to add the queue, and repeat for each required queue. 54.3. Setting system properties in Oracle WebLogic Server Set the system properties listed in this section on your Oracle WebLogic Server before you deploy KIE Server. Procedure Set the following system property to increase the Java Virtual Machine (JVM) memory size: If you do not increase the JVM memory size, Oracle WebLogic Server freezes or causes deployment errors when deploying KIE Server. Specify the following system properties for KIE Server on the Oracle WebLogic Server instance: Table 54.3. System properties for KIE Server Name Value Description kie.server.jms.queues.response jms/KIE.SERVER.RESPONSE The JNDI name of the JMS queue for responses used by KIE Server. org.kie.server.domain OracleDefaultLoginConfiguration JAAS LoginContext domain used to authenticate users when using JMS. org.jbpm.server.ext.disabled true Disables Business Central features, which are not supported in RHDM. If not set, KIE Server will work, but will show error messages during start up. org.jbpm.ui.server.ext.disabled true Disables Business Central features, which are not supported in RHDM. If not set, KIE Server will work, but will show error messages during start up. org.jbpm.case.server.ext.disabled true Disables Business Central features, which are not supported in RHDM. If not set, KIE Server will work, but will show error messages during start up. Set the same property values in the JAVA_OPTIONS environment variable: 54.4. Stopping and restarting Oracle WebLogic Server After you have configured all required system properties in Oracle WebLogic Server, stop and restart the Oracle server to ensure that the configurations are applied. Procedure In the WebLogic Administration Console, navigate to Change Center Lock & Edit . Under Domain Structure , click Environment Servers Control .
Select the server that you want to stop and click Shutdown . Select When Work Completes to gracefully shut down the server or select Force Shutdown Now to stop the server immediately without completing ongoing tasks. On the Server Life Cycle Assistant pane, click Yes to complete the shutdown. After the shutdown is complete, navigate to the domain directory in the command terminal, WLS_HOME /user_projects/<DOMAIN_NAME> . For example: Enter one of the following commands to restart Oracle WebLogic Server to apply the new configurations: On UNIX-based operating systems: On Windows operating systems: Open the Administration Console in a web browser (for example, http://localhost:7001/console/ ) and log in with your credentials. | [
"USER_MEM_ARGS=-Xms512m -Xmx1024m",
"JAVA_OPTIONS=\"-Dkie.server.jms.queues.response=jms/KIE.SERVER.RESPONSE -Dorg.kie.server.domain=OracleDefaultLoginConfiguration -Dorg.jbpm.server.ext.disabled=true -Dorg.jbpm.ui.server.ext.disabled=true -Dorg.jbpm.case.server.ext.disabled=true\"",
"WLS\\user_projects\\mydomain",
"startWebLogic.sh",
"startWebLogic.cmd"
] | https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/installing_and_configuring_red_hat_decision_manager/wls-configure-proc |
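As a quick reference for the steps in Sections 54.3 and 54.4 above, the memory setting and system properties from the command listing can be exported in the shell that launches the domain. This is a minimal sketch, assuming a UNIX-based system and the example domain directory used in this chapter; many installations set the same values in the domain's setDomainEnv.sh instead, so check how your WebLogic Server version's startup scripts consume USER_MEM_ARGS and JAVA_OPTIONS.

```bash
# Sketch: export the KIE Server settings from the tables above, then start the domain.
# The domain path below follows the example in this chapter; adjust it for your installation.
export USER_MEM_ARGS="-Xms512m -Xmx1024m"

export JAVA_OPTIONS="-Dkie.server.jms.queues.response=jms/KIE.SERVER.RESPONSE \
 -Dorg.kie.server.domain=OracleDefaultLoginConfiguration \
 -Dorg.jbpm.server.ext.disabled=true \
 -Dorg.jbpm.ui.server.ext.disabled=true \
 -Dorg.jbpm.case.server.ext.disabled=true"

cd WLS_HOME/user_projects/mydomain   # use your own <DOMAIN_NAME>
./startWebLogic.sh                   # on Windows: startWebLogic.cmd
```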
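After the server restarts, a simple way to confirm that KIE Server started with these settings is to call its REST information endpoint with the user created in Section 54.1. This is a hedged example: the port, the server-user password, and the kie-server context root are assumptions that depend on how and where the KIE Server WAR is deployed in your domain.

```bash
# Sketch: query the KIE Server information endpoint (adjust host, port, and credentials).
curl -u server-user:PASSWORD \
     -H "Accept: application/json" \
     http://localhost:7001/kie-server/services/rest/server
# A successful response returns the server id, version, and enabled capabilities.
```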
8.2. Cluster Tasks | 8.2. Cluster Tasks Note Some cluster options do not apply to Gluster clusters. For more information about using Red Hat Gluster Storage with Red Hat Virtualization, see Configuring Red Hat Virtualization with Red Hat Gluster Storage . 8.2.1. Creating a New Cluster A data center can contain multiple clusters, and a cluster can contain multiple hosts. All hosts in a cluster must be of the same CPU type (Intel or AMD). It is recommended that you create your hosts before you create your cluster to ensure CPU type optimization. However, you can configure the hosts at a later time using the Guide Me button. Creating a New Cluster Click Compute Clusters . Click New . Select the Data Center the cluster will belong to from the drop-down list. Enter the Name and Description of the cluster. Select a network from the Management Network drop-down list to assign the management network role. Select the CPU Architecture and CPU Type from the drop-down lists. It is important to match the CPU processor family with the minimum CPU processor type of the hosts you intend to attach to the cluster, otherwise the host will be non-operational. Note For both Intel and AMD CPU types, the listed CPU models are in logical order from the oldest to the newest. If your cluster includes hosts with different CPU models, select the oldest CPU model. For more information on each CPU model, see https://access.redhat.com/solutions/634853 . Select the Compatibility Version of the cluster from the drop-down list. Select the Switch Type from the drop-down list. Select the Firewall Type for hosts in the cluster, either iptables or firewalld . Note iptables is deprecated. Select either the Enable Virt Service or Enable Gluster Service check box to define whether the cluster will be populated with virtual machine hosts or with Gluster-enabled nodes. Optionally select the Enable to set VM maintenance reason check box to enable an optional reason field when a virtual machine is shut down from the Manager, allowing the administrator to provide an explanation for the maintenance. Optionally select the Enable to set Host maintenance reason check box to enable an optional reason field when a host is placed into maintenance mode from the Manager, allowing the administrator to provide an explanation for the maintenance. Optionally select the /dev/hwrng source (external hardware device) check box to specify the random number generator device that all hosts in the cluster will use. The /dev/urandom source (Linux-provided device) is enabled by default. Click the Optimization tab to select the memory page sharing threshold for the cluster, and optionally enable CPU thread handling and memory ballooning on the hosts in the cluster. Click the Migration Policy tab to define the virtual machine migration policy for the cluster. Click the Scheduling Policy tab to optionally configure a scheduling policy, configure scheduler optimization settings, enable trusted service for hosts in the cluster, enable HA Reservation, and add a custom serial number policy. Click the Console tab to optionally override the global SPICE proxy, if any, and specify the address of a SPICE proxy for hosts in the cluster. Click the Fencing policy tab to enable or disable fencing in the cluster, and select fencing options. Click the MAC Address Pool tab to specify a MAC address pool other than the default pool for the cluster. For more options on creating, editing, or removing MAC address pools, see Section 1.5, "MAC Address Pools" . 
Click OK to create the cluster and open the Cluster - Guide Me window. The Guide Me window lists the entities that need to be configured for the cluster. Configure these entities or postpone configuration by clicking the Configure Later button. Configuration can be resumed by selecting the cluster and clicking More Actions ( ), then clicking Guide Me . 8.2.2. General Cluster Settings Explained The table below describes the settings for the General tab in the New Cluster and Edit Cluster windows. Invalid entries are outlined in orange when you click OK , prohibiting the changes being accepted. In addition, field prompts indicate the expected values or range of values. Table 8.1. General Cluster Settings Field Description/Action Data Center The data center that will contain the cluster. The data center must be created before adding a cluster. Name The name of the cluster. This text field has a 40-character limit and must be a unique name with any combination of uppercase and lowercase letters, numbers, hyphens, and underscores. Description / Comment The description of the cluster or additional notes. These fields are recommended but not mandatory. Management Network The logical network that will be assigned the management network role. The default is ovirtmgmt . This network will also be used for migrating virtual machines if the migration network is not properly attached to the source or the destination hosts. On existing clusters, the management network can only be changed using the Manage Networks button in the Logical Networks tab in the details view. CPU Architecture The CPU architecture of the cluster. Different CPU types are available depending on which CPU architecture is selected. undefined : All CPU types are available. x86_64 : All Intel and AMD CPU types are available. ppc64 : Only IBM POWER 8 is available. CPU Type The CPU type of the cluster. See CPU Requirements in the Planning and Prerequisites Guide for a list of supported CPU types. All hosts in a cluster must run either Intel, AMD, or IBM POWER 8 CPU type; this cannot be changed after creation without significant disruption. The CPU type should be set to the oldest CPU model in the cluster. Only features present in all models can be used. For both Intel and AMD CPU types, the listed CPU models are in logical order from the oldest to the newest. Compatibility Version The version of Red Hat Virtualization. You will not be able to select a version earlier than the version specified for the data center. Switch Type The type of switch used by the cluster. Linux Bridge is the standard Red Hat Virtualization switch. OVS provides support for Open vSwitch networking features. Firewall Type Specifies the firewall type for hosts in the cluster, either iptables or firewalld . NOTE iptables is deprecated. If you change an existing cluster's firewall type, you must reinstall all hosts in the cluster to apply the change. Default Network Provider Specifies the default external network provider that the cluster will use. If you select Open Virtual Network (OVN), the hosts added to the cluster are automatically configured to communicate with the OVN provider. If you change the default network provider, you must reinstall all hosts in the cluster to apply the change. Maximum Log Memory Threshold Specifies the logging threshold for maximum memory consumption as a percentage or as an absolute value in MB. 
A message is logged if a host's memory usage exceeds the percentage value or if a host's available memory falls below the absolute value in MB. The default is 95% . Enable Virt Service If this radio button is selected, hosts in this cluster will be used to run virtual machines. Enable Gluster Service If this radio button is selected, hosts in this cluster will be used as Red Hat Gluster Storage Server nodes, and not for running virtual machines. Import existing gluster configuration This check box is only available if the Enable Gluster Service radio button is selected. This option allows you to import an existing Gluster-enabled cluster and all its attached hosts to Red Hat Virtualization Manager. The following options are required for each host in the cluster that is being imported: Address : Enter the IP or fully qualified domain name of the Gluster host server. Fingerprint : Red Hat Virtualization Manager fetches the host's fingerprint, to ensure you are connecting with the correct host. Root Password : Enter the root password required for communicating with the host. Enable to set VM maintenance reason If this check box is selected, an optional reason field will appear when a virtual machine in the cluster is shut down from the Manager. This allows you to provide an explanation for the maintenance, which will appear in the logs and when the virtual machine is powered on again. Enable to set Host maintenance reason If this check box is selected, an optional reason field will appear when a host in the cluster is moved into maintenance mode from the Manager. This allows you to provide an explanation for the maintenance, which will appear in the logs and when the host is activated again. Additional Random Number Generator source If the check box is selected, all hosts in the cluster have the additional random number generator device available. This enables passthrough of entropy from the random number generator device to virtual machines. 8.2.3. Optimization Settings Explained Memory Considerations Memory page sharing allows virtual machines to use up to 200% of their allocated memory by utilizing unused memory in other virtual machines. This process is based on the assumption that the virtual machines in your Red Hat Virtualization environment will not all be running at full capacity at the same time, allowing unused memory to be temporarily allocated to a particular virtual machine. CPU Considerations For non-CPU-intensive workloads , you can run virtual machines with a total number of processor cores greater than the number of cores in the host. Doing so enables the following: You can run a greater number of virtual machines, which reduces hardware requirements. You can configure virtual machines with CPU topologies that are otherwise not possible, such as when the number of virtual cores is between the number of host cores and the number of host threads. For best performance, and especially for CPU-intensive workloads , you should use the same topology in the virtual machine as in the host, so the host and the virtual machine expect the same cache usage. When the host has hyperthreading enabled, QEMU treats the host's hyperthreads as cores, so the virtual machine is not aware that it is running on a single core with multiple threads. 
This behavior might impact the performance of a virtual machine, because a virtual core that actually corresponds to a hyperthread in the host core might share a single cache with another hyperthread in the same host core, while the virtual machine treats it as a separate core. The table below describes the settings for the Optimization tab in the New Cluster and Edit Cluster windows. Table 8.2. Optimization Settings Field Description/Action Memory Optimization None - Disable memory overcommit : Disables memory page sharing. For Server Load - Allow scheduling of 150% of physical memory : Sets the memory page sharing threshold to 150% of the system memory on each host. For Desktop Load - Allow scheduling of 200% of physical memory : Sets the memory page sharing threshold to 200% of the system memory on each host. CPU Threads Selecting the Count Threads As Cores check box enables hosts to run virtual machines with a total number of processor cores greater than the number of cores in the host. When this check box is selected, the exposed host threads are treated as cores that virtual machines can use. For example, a 24-core system with 2 threads per core (48 threads total) can run virtual machines with up to 48 cores each, and the algorithms to calculate host CPU load would compare load against twice as many potential utilized cores. Memory Balloon Selecting the Enable Memory Balloon Optimization check box enables memory overcommitment on virtual machines running on the hosts in this cluster. When this check box is selected, the Memory Overcommit Manager (MoM) starts ballooning where and when possible, with a limitation of the guaranteed memory size of every virtual machine. To have a balloon running, the virtual machine needs to have a balloon device with relevant drivers. Each virtual machine includes a balloon device unless specifically removed. Each host in this cluster receives a balloon policy update when its status changes to Up . If necessary, you can manually update the balloon policy on a host without having to change the status. See Section 8.2.9, "Updating the MoM Policy on Hosts in a Cluster" . It is important to understand that in some scenarios ballooning may collide with KSM. In such cases MoM will try to adjust the balloon size to minimize collisions. Additionally, in some scenarios ballooning may cause sub-optimal performance for a virtual machine. Administrators are advised to use ballooning optimization with caution. KSM control Selecting the Enable KSM check box enables MoM to run Kernel Same-page Merging (KSM) when necessary and when it can yield a memory saving benefit that outweighs its CPU cost. 8.2.4. Migration Policy Settings Explained A migration policy defines the conditions for live migrating virtual machines in the event of host failure. These conditions include the downtime of the virtual machine during migration, network bandwidth, and how the virtual machines are prioritized. Table 8.3. Migration Policies Explained Policy Description Legacy Legacy behavior of 3.6 version. Overrides in vdsm.conf are still applied. The guest agent hook mechanism is disabled. Minimal downtime A policy that lets virtual machines migrate in typical situations. Virtual machines should not experience any significant downtime. The migration will be aborted if the virtual machine migration does not converge after a long time (dependent on QEMU iterations, with a maximum of 500 milliseconds). The guest agent hook mechanism is enabled. 
Post-copy migration Virtual machines should not experience any significant downtime similar to the minimal downtime policy. The post-copy policy first tries pre-copy to verify whether convergence may occur. The migration switches to post-copy if the virtual machine migration does not converge after a long time. The disadvantage of this policy is that in the post-copy phase, the virtual machine may slow down significantly as the missing parts of memory are transferred between the hosts. If anything goes wrong during the post-copy phase, such as a network failure of the migration network between the hosts, then the migration process leads to an inconsistent and paused VM and the result is a lost VM. Therefore, it is not possible to abort a migration during the post-copy phase. Warning If the network connection breaks prior to the completion of the post-copy process, the Manager pauses and then kills the running VM. Do not use post-copy migration if the VM availability is critical or if the migration network is unstable. Suspend workload if needed A policy that lets virtual machines migrate in most situations, including virtual machines running heavy workloads. Because of this, virtual machines may experience a more significant downtime than with some of the other settings. The migration may still be aborted for extreme workloads. The guest agent hook mechanism is enabled. The bandwidth settings define the maximum bandwidth of both outgoing and incoming migrations per host. Table 8.4. Bandwidth Explained Policy Description Auto Bandwidth is copied from the Rate Limit [Mbps] setting in the data center Host Network QoS . If the rate limit has not been defined, it is computed as a minimum of link speeds of sending and receiving network interfaces. If rate limit has not been set, and link speeds are not available, it is determined by local VDSM setting on sending host. Hypervisor default Bandwidth is controlled by local VDSM setting on sending Host. Custom Defined by user (in Mbps). This value is divided by the number of concurrent migrations (default is 2, to account for ingoing and outgoing migration). Therefore, the user-defined bandwidth must be large enough to accommodate all concurrent migrations. For example, if the Custom bandwidth is defined as 600 Mbps, a virtual machine migration's maximum bandwidth is actually 300 Mbps. The resilience policy defines how the virtual machines are prioritized in the migration. Table 8.5. Resilience Policy Settings Field Description/Action Migrate Virtual Machines Migrates all virtual machines in order of their defined priority. Migrate only Highly Available Virtual Machines Migrates only highly available virtual machines to prevent overloading other hosts. Do Not Migrate Virtual Machines Prevents virtual machines from being migrated. The Additional Properties are only applicable to the Legacy migration policy. Table 8.6. Additional Properties Explained Property Description Auto Converge migrations Allows you to set whether auto-convergence is used during live migration of virtual machines. Large virtual machines with high workloads can dirty memory more quickly than the transfer rate achieved during live migration, and prevent the migration from converging. Auto-convergence capabilities in QEMU allow you to force convergence of virtual machine migrations. QEMU automatically detects a lack of convergence and triggers a throttle-down of the vCPUs on the virtual machine. Auto-convergence is disabled globally by default. 
Select Inherit from global setting to use the auto-convergence setting that is set at the global level. This option is selected by default. Select Auto Converge to override the global setting and allow auto-convergence for the virtual machine. Select Don't Auto Converge to override the global setting and prevent auto-convergence for the virtual machine. Enable migration compression Allows you to set whether migration compression is used during live migration of the virtual machine. This feature uses Xor Binary Zero Run-Length-Encoding to reduce virtual machine downtime and total live migration time for virtual machines running memory write-intensive workloads or for any application with a sparse memory update pattern. Migration compression is disabled globally by default. Select Inherit from global setting to use the compression setting that is set at the global level. This option is selected by default. Select Compress to override the global setting and allow compression for the virtual machine. Select Don't compress to override the global setting and prevent compression for the virtual machine. 8.2.5. Scheduling Policy Settings Explained Scheduling policies allow you to specify the usage and distribution of virtual machines between available hosts. Define the scheduling policy to enable automatic load balancing across the hosts in a cluster. Regardless of the scheduling policy, a virtual machine will not start on a host with an overloaded CPU. By default, a host's CPU is considered overloaded if it has a load of more than 80% for 5 minutes, but these values can be changed using scheduling policies. See Section 1.3, "Scheduling Policies" for more information about scheduling policies. Table 8.7. Scheduling Policy Tab Properties Field Description/Action Select Policy Select a policy from the drop-down list. none : Set the policy value to none to have no load-balancing or power-sharing between hosts for already-running virtual machines. This is the default mode. When a virtual machine is started, the memory and CPU processing load is spread evenly across all hosts in the cluster. Additional virtual machines attached to a host will not start if that host has reached the defined CpuOverCommitDurationMinutes , HighUtilization , or MaxFreeMemoryForOverUtilized . evenly_distributed : Distributes the memory and CPU processing load evenly across all hosts in the cluster. Additional virtual machines attached to a host will not start if that host has reached the defined CpuOverCommitDurationMinutes , HighUtilization , or MaxFreeMemoryForOverUtilized . cluster_maintenance : Limits activity in a cluster during maintenance tasks. No new virtual machines may be started, except highly available virtual machines. If host failure occurs, highly available virtual machines will restart properly and any virtual machine can migrate. power_saving : Distributes the memory and CPU processing load across a subset of available hosts to reduce power consumption on underutilized hosts. Hosts with a CPU load below the low utilization value for longer than the defined time interval will migrate all virtual machines to other hosts so that they can be powered down. Additional virtual machines attached to a host will not start if that host has reached the defined high utilization value. vm_evenly_distributed : Distributes virtual machines evenly between hosts based on a count of the virtual machines.
The cluster is considered unbalanced if any host is running more virtual machines than the HighVmCount and there is at least one host with a virtual machine count that falls outside of the MigrationThreshold . Properties The following properties appear depending on the selected policy, and can be edited if necessary: HighVmCount : Sets the minimum number of virtual machines that must be running per host to enable load balancing. The default value is 10 running virtual machines on one host. Load balancing is only enabled when there is at least one host in the cluster that has at least HighVmCount running virtual machines. MigrationThreshold : Defines a buffer before virtual machines are migrated from the host. It is the maximum inclusive difference in virtual machine count between the most highly-utilized host and the least-utilized host. The cluster is balanced when every host in the cluster has a virtual machine count that falls inside the migration threshold. The default value is 5 . SpmVmGrace : Defines the number of slots for virtual machines to be reserved on SPM hosts. The SPM host will have a lower load than other hosts, so this variable defines how many fewer virtual machines the SPM host can run in comparison to other hosts. The default value is 5 . CpuOverCommitDurationMinutes : Sets the time (in minutes) that a host can run a CPU load outside of the defined utilization values before the scheduling policy takes action. The defined time interval protects against temporary spikes in CPU load activating scheduling policies and instigating unnecessary virtual machine migration. Maximum two characters. The default value is 2 . HighUtilization : Expressed as a percentage. If the host runs with CPU usage at or above the high utilization value for the defined time interval, the Red Hat Virtualization Manager migrates virtual machines to other hosts in the cluster until the host's CPU load is below the maximum service threshold. The default value is 80 . LowUtilization : Expressed as a percentage. If the host runs with CPU usage below the low utilization value for the defined time interval, the Red Hat Virtualization Manager will migrate virtual machines to other hosts in the cluster. The Manager will power down the original host machine, and restart it again when load balancing requires or there are not enough free hosts in the cluster. The default value is 20 . ScaleDown : Reduces the impact of the HA Reservation weight function, by dividing a host's score by the specified amount. This is an optional property that can be added to any policy, including none . HostsInReserve : Specifies a number of hosts to keep running even though there are no running virtual machines on them. This is an optional property that can be added to the power_saving policy. EnableAutomaticHostPowerManagement : Enables automatic power management for all hosts in the cluster. This is an optional property that can be added to the power_saving policy. The default value is true . MaxFreeMemoryForOverUtilized : Sets the minimum free memory required in MB for the minimum service level. If the host's available memory runs at, or below this value, the Red Hat Virtualization Manager migrates virtual machines to other hosts in the cluster while the host's available memory is below the minimum service threshold. Setting both MaxFreeMemoryForOverUtilized and MinFreeMemoryForUnderUtilized to 0 MB disables memory based balancing. 
If MaxFreeMemoryForOverUtilized is set, MinFreeMemoryForUnderUtilized must also be set to avoid unexpected behavior. This is an optional property that can be added to the power_saving and evenly_distributed policies. MinFreeMemoryForUnderUtilized : Sets the minimum free memory required in MB before the host is considered underutilized. If the host's available memory runs above this value, the Red Hat Virtualization Manager migrates virtual machines to other hosts in the cluster and will automatically power down the host machine, and restart it again when load balancing requires or there are not enough free hosts in the cluster. Setting both MaxFreeMemoryForOverUtilized and MinFreeMemoryForUnderUtilized to 0MB disables memory based balancing. If MinFreeMemoryForUnderUtilized is set, MaxFreeMemoryForOverUtilized must also be set to avoid unexpected behavior. This is an optional property that can be added to the power_saving and evenly_distributed policies. HeSparesCount : Sets the number of additional self-hosted engine nodes that must reserve enough free memory to start the Manager virtual machine if it migrates or shuts down. Other virtual machines are prevented from starting on a self-hosted engine node if doing so would not leave enough free memory for the Manager virtual machine. This is an optional property that can be added to the power_saving , vm_evenly_distributed , and evenly_distributed policies. The default value is 0 . Scheduler Optimization Optimize scheduling for host weighing/ordering. Optimize for Utilization : Includes weight modules in scheduling to allow best selection. Optimize for Speed : Skips host weighting in cases where there are more than ten pending requests. Enable Trusted Service Enable integration with an OpenAttestation server. Before this can be enabled, use the engine-config tool to enter the OpenAttestation server's details. For more information, see Section 12.9, "Trusted Compute Pools" . Enable HA Reservation Enable the Manager to monitor cluster capacity for highly available virtual machines. The Manager ensures that appropriate capacity exists within a cluster for virtual machines designated as highly available to migrate in the event that their existing host fails unexpectedly. Provide custom serial number policy This check box allows you to specify a serial number policy for the virtual machines in the cluster. Select one of the following options: Host ID : Sets the host's UUID as the virtual machine's serial number. Vm ID : Sets the virtual machine's UUID as its serial number. Custom serial number : Allows you to specify a custom serial number. When a host's free memory drops below 20%, ballooning commands like mom.Controllers.Balloon - INFO Ballooning guest:half1 from 1096400 to 1991580 are logged to /var/log/vdsm/mom.log . /var/log/vdsm/mom.log is the Memory Overcommit Manager log file. 8.2.6. Cluster Console Settings Explained The table below describes the settings for the Console tab in the New Cluster and Edit Cluster windows. Table 8.8. Console Settings Field Description/Action Define SPICE Proxy for Cluster Select this check box to enable overriding the SPICE proxy defined in global configuration. This feature is useful in a case where the user (who is, for example, connecting via the VM Portal) is outside of the network where the hypervisors reside. Overridden SPICE proxy address The proxy by which the SPICE client connects to virtual machines. The address must be in the following format: 8.2.7. 
Fencing Policy Settings Explained The table below describes the settings for the Fencing Policy tab in the New Cluster and Edit Cluster windows. Table 8.9. Fencing Policy Settings Field Description/Action Enable fencing Enables fencing on the cluster. Fencing is enabled by default, but can be disabled if required; for example, if temporary network issues are occurring or expected, administrators can disable fencing until diagnostics or maintenance activities are completed. Note that if fencing is disabled, highly available virtual machines running on non-responsive hosts will not be restarted elsewhere. Skip fencing if host has live lease on storage If this check box is selected, any hosts in the cluster that are Non Responsive and still connected to storage will not be fenced. Skip fencing on cluster connectivity issues If this check box is selected, fencing will be temporarily disabled if the percentage of hosts in the cluster that are experiencing connectivity issues is greater than or equal to the defined Threshold . The Threshold value is selected from the drop-down list; available values are 25 , 50 , 75 , and 100 . Skip fencing if gluster bricks are up This option is only available when Red Hat Gluster Storage functionality is enabled. If this check box is selected, fencing is skipped if bricks are running and can be reached from other peers. See Chapter 2. Configure High Availability using Fencing Policies and Appendix A. Fencing Policies for Red Hat Gluster Storage in Maintaining Red Hat Hyperconverged Infrastructure for more information. Skip fencing if gluster quorum not met This option is only available when Red Hat Gluster Storage functionality is enabled. If this check box is selected, fencing is skipped if bricks are running and shutting down the host will cause loss of quorum. See Chapter 2. Configure High Availability using Fencing Policies and Appendix A. Fencing Policies for Red Hat Gluster Storage in Maintaining Red Hat Hyperconverged Infrastructure for more information. 8.2.8. Setting Load and Power Management Policies for Hosts in a Cluster The evenly_distributed and power_saving scheduling policies allow you to specify acceptable memory and CPU usage values, and the point at which virtual machines must be migrated to or from a host. The vm_evenly_distributed scheduling policy distributes virtual machines evenly between hosts based on a count of the virtual machines. Define the scheduling policy to enable automatic load balancing across the hosts in a cluster. For a detailed explanation of each scheduling policy, see Section 8.2.5, "Scheduling Policy Settings Explained" . Setting Load and Power Management Policies for Hosts Click Compute Clusters and select a cluster. Click Edit . Click the Scheduling Policy tab. Select one of the following policies: none vm_evenly_distributed Set the minimum number of virtual machines that must be running on at least one host to enable load balancing in the HighVmCount field. Define the maximum acceptable difference between the number of virtual machines on the most highly-utilized host and the number of virtual machines on the least-utilized host in the MigrationThreshold field. Define the number of slots for virtual machines to be reserved on SPM hosts in the SpmVmGrace field. Optionally, in the HeSparesCount field, enter the number of additional self-hosted engine nodes on which to reserve enough free memory to start the Manager virtual machine if it migrates or shuts down. 
See Section 15.3, "Configuring Memory Slots Reserved for the Self-Hosted Engine on Additional Hosts" for more information. evenly_distributed Set the time (in minutes) that a host can run a CPU load outside of the defined utilization values before the scheduling policy takes action in the CpuOverCommitDurationMinutes field. Enter the CPU utilization percentage at which virtual machines start migrating to other hosts in the HighUtilization field. Enter the minimum required free memory in MB above which virtual machines start migrating to other hosts in the MinFreeMemoryForUnderUtilized field. Enter the maximum required free memory in MB below which virtual machines start migrating to other hosts in the MaxFreeMemoryForOverUtilized field. Optionally, in the HeSparesCount field, enter the number of additional self-hosted engine nodes on which to reserve enough free memory to start the Manager virtual machine if it migrates or shuts down. See Section 15.3, "Configuring Memory Slots Reserved for the Self-Hosted Engine on Additional Hosts" for more information. power_saving Set the time (in minutes) that a host can run a CPU load outside of the defined utilization values before the scheduling policy takes action in the CpuOverCommitDurationMinutes field. Enter the CPU utilization percentage below which the host will be considered under-utilized in the LowUtilization field. Enter the CPU utilization percentage at which virtual machines start migrating to other hosts in the HighUtilization field. Enter the minimum required free memory in MB above which virtual machines start migrating to other hosts in the MinFreeMemoryForUnderUtilized field. Enter the maximum required free memory in MB below which virtual machines start migrating to other hosts in the MaxFreeMemoryForOverUtilized field. Optionally, in the HeSparesCount field, enter the number of additional self-hosted engine nodes on which to reserve enough free memory to start the Manager virtual machine if it migrates or shuts down. See Section 15.3, "Configuring Memory Slots Reserved for the Self-Hosted Engine on Additional Hosts" for more information. Choose one of the following as the Scheduler Optimization for the cluster: Select Optimize for Utilization to include weight modules in scheduling to allow best selection. Select Optimize for Speed to skip host weighting in cases where there are more than ten pending requests. If you are using an OpenAttestation server to verify your hosts, and have set up the server's details using the engine-config tool, select the Enable Trusted Service check box. Optionally select the Enable HA Reservation check box to enable the Manager to monitor cluster capacity for highly available virtual machines. Optionally select the Provide custom serial number policy check box to specify a serial number policy for the virtual machines in the cluster, and then select one of the following options: Select Host ID to set the host's UUID as the virtual machine's serial number. Select Vm ID to set the virtual machine's UUID as its serial number. Select Custom serial number , and then specify a custom serial number in the text field. Click OK . 8.2.9. Updating the MoM Policy on Hosts in a Cluster The Memory Overcommit Manager handles memory balloon and KSM functions on a host. Changes to these functions at the cluster level are only passed to hosts the next time a host moves to a status of Up after being rebooted or in maintenance mode.
However, if necessary you can apply important changes to a host immediately by synchronizing the MoM policy while the host is Up . The following procedure must be performed on each host individually. Synchronizing MoM Policy on a Host Click Compute Clusters . Click the cluster's name to open the details view. Click the Hosts tab and select the host that requires an updated MoM policy. Click Sync MoM Policy . The MoM policy on the host is updated without having to move the host to maintenance mode and back Up . 8.2.10. Creating a CPU Profile CPU profiles define the maximum amount of processing capability a virtual machine in a cluster can access on the host on which it runs, expressed as a percent of the total processing capability available to that host. CPU profiles are created based on CPU profiles defined under data centers, and are not automatically applied to all virtual machines in a cluster; they must be manually assigned to individual virtual machines for the profile to take effect. This procedure assumes you have already defined one or more CPU quality of service entries under the data center to which the cluster belongs. Creating a CPU Profile Click Compute Clusters . Click the cluster's name to open the details view. Click the CPU Profiles tab. Click New . Enter a Name and a Description for the CPU profile. Select the quality of service to apply to the CPU profile from the QoS list. Click OK . 8.2.11. Removing a CPU Profile Remove an existing CPU profile from your Red Hat Virtualization environment. Removing a CPU Profile Click Compute Clusters . Click the cluster's name to open the details view. Click the CPU Profiles tab and select the CPU profile to remove. Click Remove . Click OK . If the CPU profile was assigned to any virtual machines, those virtual machines are automatically assigned the default CPU profile. 8.2.12. Importing an Existing Red Hat Gluster Storage Cluster You can import a Red Hat Gluster Storage cluster and all hosts belonging to the cluster into Red Hat Virtualization Manager. When you provide details such as the IP address or host name and password of any host in the cluster, the gluster peer status command is executed on that host through SSH, then displays a list of hosts that are a part of the cluster. You must manually verify the fingerprint of each host and provide passwords for them. You will not be able to import the cluster if one of the hosts in the cluster is down or unreachable. As the newly imported hosts do not have VDSM installed, the bootstrap script installs all the necessary VDSM packages on the hosts after they have been imported, and reboots them. Importing an Existing Red Hat Gluster Storage Cluster to the Red Hat Virtualization Manager Click Compute Clusters . Click New . Select the Data Center the cluster will belong to. Enter the Name and Description of the cluster. Select the Enable Gluster Service check box and the Import existing gluster configuration check box. The Import existing gluster configuration field is only displayed if the Enable Gluster Service is selected. In the Hostname field, enter the host name or IP address of any server in the cluster. The host SSH Fingerprint displays to ensure you are connecting with the correct host. If a host is unreachable or if there is a network error, an error Error in fetching fingerprint displays in the Fingerprint field. Enter the Password for the server, and click OK . The Add Hosts window opens, and a list of hosts that are a part of the cluster displays. 
For each host, enter the Name and the Root Password . If you wish to use the same password for all hosts, select the Use a Common Password check box to enter the password in the provided text field. Click Apply to set the entered password for all hosts. Verify that the fingerprints are valid and submit your changes by clicking OK . The bootstrap script installs all the necessary VDSM packages on the hosts after they have been imported, and reboots them. You have now successfully imported an existing Red Hat Gluster Storage cluster into Red Hat Virtualization Manager. 8.2.13. Explanation of Settings in the Add Hosts Window The Add Hosts window allows you to specify the details of the hosts imported as part of a Gluster-enabled cluster. This window appears after you have selected the Enable Gluster Service check box in the New Cluster window and provided the necessary host details. Table 8.10. Add Gluster Hosts Settings Field Description Use a common password Tick this check box to use the same password for all hosts belonging to the cluster. Enter the password in the Password field, then click the Apply button to set the password on all hosts. Name Enter the name of the host. Hostname/IP This field is automatically populated with the fully qualified domain name or IP of the host you provided in the New Cluster window. Root Password Enter a password in this field to use a different root password for each host. This field overrides the common password provided for all hosts in the cluster. Fingerprint The host fingerprint is displayed to ensure you are connecting with the correct host. This field is automatically populated with the fingerprint of the host you provided in the New Cluster window. 8.2.14. Removing a Cluster Move all hosts out of a cluster before removing it. Note You cannot remove the Default cluster, as it holds the Blank template. You can, however, rename the Default cluster and add it to a new data center. Removing a Cluster Click Compute Clusters and select a cluster. Ensure there are no hosts in the cluster. Click Remove . Click OK . 8.2.15. Memory Optimization To increase the number of virtual machines on a host, you can use memory overcommitment , in which the memory you assign to virtual machines exceeds RAM and relies on swap space. However, there are potential problems with memory overcommitment: Swapping performance - Swap space is slower and consumes more CPU resources than RAM, impacting virtual machine performance. Excessive swapping can lead to CPU thrashing. Out-of-memory (OOM) killer - If the host runs out of swap space, new processes cannot start, and the kernel's OOM killer daemon begins shutting down active processes such as virtual machine guests. To help overcome these shortcomings, you can do the following: Limit memory overcommitment using the Memory Optimization setting and the Memory Overcommit Manager (MoM) . Make the swap space large enough to accommodate the maximum potential demand for virtual memory and have a safety margin remaining. Reduce virtual memory size by enabling memory ballooning and Kernel Same-page Merging (KSM) . 8.2.15.1. Memory Optimization and Memory Overcommitment You can limit the amount of memory overcommitment by selecting one of the Memory Optimization settings: None (0%), 150% , or 200% . Each setting represents a percentage of RAM. For example, with a host that has 64 GB RAM, selecting 150% means you can overcommit memory by an additional 32 GB, for a total of 96 GB in virtual memory.
If the host uses 4 GB of that total, the remaining 92 GB are available. You can assign most of that to the virtual machines ( Memory Size on the System tab), but consider leaving some of it unassigned as a safety margin. Sudden spikes in demand for virtual memory can impact performance before the MoM, memory ballooning, and KSM have time to re-optimize virtual memory. To reduce that impact, select a limit that is appropriate for the kinds of applications and workloads you are running: For workloads that produce more incremental growth in demand for memory, select a higher percentage, such as 200% or 150% . For more critical applications or workloads that produce more sudden increases in demand for memory, select a lower percentage, such as 150% or None (0%). Selecting None helps prevent memory overcommitment but allows the MoM, memory balloon devices, and KSM to continue optimizing virtual memory. Important Always test your Memory Optimization settings by stress testing under a wide range of conditions before deploying the configuration to production. To configure the Memory Optimization setting, click the Optimization tab in the New Cluster or Edit Cluster windows. See Section 8.2.3, "Optimization Settings Explained" . Additional comments: The Host Statistics views display useful historical information for sizing the overcommitment ratio. The actual memory available cannot be determined in real time because the amount of memory optimization achieved by KSM and memory ballooning changes continuously. When virtual machines reach the virtual memory limit, new apps cannot start. When you plan the number of virtual machines to run on a host, use the maximum virtual memory (physical memory size and the Memory Optimization setting) as a starting point. Do not factor in the smaller virtual memory achieved by memory optimizations such as memory ballooning and KSM. 8.2.15.2. Swap Space and Memory Overcommitment Red Hat provides these recommendations for configuring swap space . When applying these recommendations, follow the guidance to size the swap space as "last effort memory" for a worst-case scenario. Use the physical memory size and Memory Optimization setting as a basis for estimating the total virtual memory size. Exclude any reduction of the virtual memory size from optimization by the MoM, memory ballooning, and KSM. Important To help prevent an OOM condition, make the swap space large enough to handle a worst-case scenario and still have a safety margin available. Always stress-test your configuration under a wide range of conditions before deploying it to production. 8.2.15.3. The Memory Overcommit Manager (MoM) The Memory Overcommit Manager (MoM) does two things: It limits memory overcommitment by applying the Memory Optimization setting to the hosts in a cluster, as described in the preceding section. It optimizes memory by managing the memory ballooning and KSM , as described in the following sections. You do not need to enable or disable MoM. When a host's free memory drops below 20%, ballooning commands like mom.Controllers.Balloon - INFO Ballooning guest:half1 from 1096400 to 1991580 are logged to /var/log/vdsm/mom.log , the Memory Overcommit Manager log file. 8.2.15.4. Memory Ballooning Virtual machines start with the full amount of virtual memory you have assigned to them. As virtual memory usage exceeds RAM, the host relies more on swap space. If enabled, memory ballooning lets virtual machines give up the unused portion of that memory. 
The freed memory can be reused by other processes and virtual machines on the host. The reduced memory footprint makes swapping less likely and improves performance. The virtio-balloon package that provides the memory balloon device and drivers ships as a loadable kernel module (LKM). By default, it is configured to load automatically. Blacklisting the module or unloading it disables ballooning. The memory balloon devices do not coordinate directly with each other; they rely on the host's Memory Overcommit Manager (MoM) process to continuously monitor each virtual machine's needs and instruct the balloon device to increase or decrease virtual memory. Performance considerations: Red Hat does not recommend memory ballooning and overcommitment for workloads that require continuous high-performance and low latency. See Configuring High-Performance Virtual Machines, Templates, and Pools . Red Hat recommends memory ballooning when increasing virtual machine density (economy) is more important than performance. Memory ballooning does not have a significant impact on CPU utilization. (KSM consumes some CPU resources, but consumption remains consistent under pressure.) To enable memory ballooning, click the Optimization tab in the New Cluster or Edit Cluster windows. Then select the Enable Memory Balloon Optimization checkbox. This setting enables memory overcommitment on virtual machines running on the hosts in this cluster. When this check box is selected, the MoM starts ballooning where and when possible, with a limitation of the guaranteed memory size of every virtual machine. See Section 8.2.3, "Optimization Settings Explained" . Each host in this cluster receives a balloon policy update when its status changes to Up. If necessary, you can manually update the balloon policy on a host without having to change the status. See Section 8.2.9, "Updating the MoM Policy on Hosts in a Cluster" . 8.2.15.5. Kernel Same-page Merging (KSM) When a virtual machine runs, it often creates duplicate memory pages for items such as common libraries and high-use data. Furthermore, virtual machines that run similar guest operating systems and applications produce duplicate memory pages in virtual memory. When enabled, Kernel Same-page Merging (KSM) examines the virtual memory on a host, eliminates duplicate memory pages, and shares the remaining memory pages across multiple applications and virtual machines. These shared memory pages are marked copy-on-write; if a virtual machine needs to write changes to the page, it makes a copy first before writing its modifications to that copy. While KSM is enabled, the MoM manages KSM. You do not need to configure or control KSM manually. KSM increases virtual memory performance in two ways. Because a shared memory page is used more frequently, the host is more likely to store it in cache or main memory, which improves the memory access speed. Additionally, with memory overcommitment, KSM reduces the virtual memory footprint, reducing the likelihood of swapping and improving performance. KSM consumes more CPU resources than memory ballooning. The amount of CPU KSM consumes remains consistent under pressure. Running identical virtual machines and applications on a host provides KSM with more opportunities to merge memory pages than running dissimilar ones. If you run mostly dissimilar virtual machines and applications, the CPU cost of using KSM may offset its benefits.
Performance considerations: After the KSM daemon merges large amounts of memory, the kernel memory accounting statistics may eventually contradict each other. If your system has a large amount of free memory, you might improve performance by disabling KSM. Red Hat does not recommend KSM and overcommitment for workloads that require continuous high performance and low latency. See Configuring High-Performance Virtual Machines, Templates, and Pools . Red Hat recommends KSM when increasing virtual machine density (economy) is more important than performance. To enable KSM, click the Optimization tab in the New Cluster or Edit Cluster windows. Then select the Enable KSM checkbox. This setting enables MoM to run KSM when necessary and when it can yield a memory saving benefit that outweighs its CPU cost. See Section 8.2.3, "Optimization Settings Explained" . 8.2.16. Changing the Cluster Compatibility Version Red Hat Virtualization clusters have a compatibility version. The cluster compatibility version indicates the features of Red Hat Virtualization supported by all of the hosts in the cluster. The cluster compatibility is set according to the version of the least capable host operating system in the cluster. Important To change the cluster compatibility version, you must have first updated all the hosts in your cluster to a level that supports your desired compatibility level. Check if there is an icon next to the host indicating an update is available. Procedure In the Administration Portal, click Compute Clusters . Select the cluster to change and click Edit . On the General tab, change the Compatibility Version to the desired value. Click OK . The Change Cluster Compatibility Version confirmation dialog opens. Click OK to confirm. Important An error message might warn that some virtual machines and templates are incorrectly configured. To fix this error, edit each virtual machine manually. The Edit Virtual Machine window provides additional validations and warnings that show what to correct. Sometimes the issue is automatically corrected and the virtual machine's configuration just needs to be saved again. After editing each virtual machine, you will be able to change the cluster compatibility version. After updating a cluster's compatibility version, you must update the cluster compatibility version of all running or suspended virtual machines by rebooting them from the Administration Portal, or using the REST API, instead of from within the guest operating system. Virtual machines that require a reboot are marked with the pending changes icon. You cannot change the cluster compatibility version of a virtual machine snapshot that is in preview. You must first commit or undo the preview. In a self-hosted engine environment, the Manager virtual machine does not need to be restarted. Although you can wait to reboot the virtual machines at a convenient time, rebooting immediately is highly recommended so that the virtual machines use the latest configuration. Virtual machines that have not been updated run with the old configuration, and the new configuration could be overwritten if other changes are made to the virtual machine before the reboot. Once you have updated the compatibility version of all clusters and virtual machines in a data center, you can then change the compatibility version of the data center itself. | [
"protocol://[host]:[port]"
] | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/administration_guide/sect-Cluster_Tasks |
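The procedure above notes that pending virtual machine reboots can be triggered from the Administration Portal or through the REST API. The following is a minimal sketch of the REST API route, not taken from this guide: the engine host name, credentials, and virtual machine ID are placeholders, the /vms/{id}/reboot action is part of the standard oVirt/RHV REST API, and -k is used here only to skip certificate verification in a test environment.
# Look up the virtual machine by name to find its ID
curl -k -u "admin@internal:password" -H "Accept: application/xml" "https://engine.example.com/ovirt-engine/api/vms?search=name%3Dmyvm"
# Reboot the virtual machine so that it picks up the new cluster compatibility version
curl -k -u "admin@internal:password" -H "Content-Type: application/xml" -X POST -d "<action/>" "https://engine.example.com/ovirt-engine/api/vms/123e4567-e89b-12d3-a456-426614174000/reboot"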
Chapter 1. Interactively selecting a system-wide Red Hat build of OpenJDK version on RHEL | If you have multiple versions of Red Hat build of OpenJDK installed on RHEL, you can interactively select the default Red Hat build of OpenJDK version to use system-wide. Note If you do not have root privileges, you can select a Red Hat build of OpenJDK version by configuring the JAVA_HOME environment variable. Prerequisites You must have root privileges on the system. Multiple versions of Red Hat build of OpenJDK were installed using the yum package manager. Procedure View the Red Hat build of OpenJDK versions installed on the system. USD yum list installed "java*" A list of installed Java packages appears. Display the Red Hat build of OpenJDK versions that can be used for a specific java command and select the one to use: The current system-wide Red Hat build of OpenJDK version is marked with an asterisk. The current Red Hat build of OpenJDK version for the specified java command is marked with a plus sign. Press Enter to keep the current selection or enter the Selection number of the Red Hat build of OpenJDK version you want to select followed by the Enter key. The default Red Hat build of OpenJDK version for the system is the selected version. Verify that the chosen binary is selected. Note This procedure configures the java command. The javac command can be set up in a similar way, but it operates independently. If you have Red Hat build of OpenJDK installed, alternatives provides more possible selections. In particular, the javac master alternative switches many binaries provided by the -devel sub-package. Even if you have Red Hat build of OpenJDK installed, java (and other JRE masters) and javac (and other Red Hat build of OpenJDK masters) still operate separately, so you can have different selections for JRE and JDK. The alternatives --config java command affects the jre and its associated slaves. If you want to change the JDK, use the javac alternatives command. The --config javac utility configures the SDK and related slaves. To see all possible masters, use alternatives --list and check all of the java , javac , jre , and sdk masters. | [
"Installed Packages java-1.8.0-openjdk.x86_64 1:1.8.0.242.b08-0.el8_1 @rhel-8-appstream-rpms java-1.8.0-openjdk-headless.x86_64 1:1.8.0.242.b08-0.el8_1 @rhel-8-appstream-rpms java-11-openjdk.x86_64 1:11.0.6.10-0.el8_1 @rhel-8-appstream-rpms java-11-openjdk-headless.x86_64 1:11.0.6.10-0.el8_1 @rhel-8-appstream-rpms javapackages-filesystem.noarch 5.3.0-1.module+el8+2447+6f56d9a6 @rhel-8-appstream-rpms",
"sudo alternatives --config java There are 2 programs which provide 'java'. Selection Command ----------------------------------------------- *+ 1 java-1.8.0-openjdk.x86_64 (/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.242.b08-0.el8_1.x86_64/jre/bin/java) 2 java-11-openjdk.x86_64 (/usr/lib/jvm/java-11-openjdk-11.0.6.10-0.el8_1.x86_64/bin/java) Enter to keep the current selection[+], or type selection number: 1",
"java -version openjdk version \"1.8.0_242\" OpenJDK Runtime Environment (build 1.8.0_242-b08) OpenJDK 64-Bit Server VM (build 25.242-b08, mixed mode)"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/8/html/configuring_red_hat_build_of_openjdk_8_for_rhel/interactively-selecting-systemwide-openjdk8-version-on-rhel |
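The note above mentions selecting a version through the JAVA_HOME environment variable when you do not have root privileges. The following is a minimal sketch of that approach; the installation path is taken from the package listing above and may differ on your system:
# Point JAVA_HOME at the desired installation and prefer its binaries on the PATH
export JAVA_HOME=/usr/lib/jvm/java-11-openjdk-11.0.6.10-0.el8_1.x86_64
export PATH="$JAVA_HOME/bin:$PATH"
# Confirm that the selected binary is used
java -version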
8.185. rpmlint | 8.185. rpmlint 8.185.1. RHBA-2013:0948 - rpmlint bug fix update Updated rpmlint packages that fix two bugs are now available. Rpmlint is a tool for checking common errors in rpm packages. It can be used to test individual packages and specific files before uploading or to check an entire distribution. By default all applicable checks are processed but specific checks can be performed by using command line parameters. Bug Fixes BZ# 663082 Previously, an incorrect rule in rpmlint caused it to report a "missing-lsb-keyword Default-Stop" error message for services that did not start by default in any run level. This update corrects the rule in rpmlint, and error messages no longer occur in the described scenario. BZ# 958038 When a package had a UTF-8 encoding error in the description, or the description was encoded in a different character set, then rpmlint terminated unexpectedly with a segmentation fault. With this update, rmplint returns an error message that the description has incorrectly encoded UTF-8 data (tag-not-utf8 error), and crashes no longer occur in the described scenario. Users of rpmlint are advised to upgrade to these updated packages, which fix these bugs. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.5_technical_notes/rpmlint |
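As a brief illustration of the usage described above, the following sketch checks a single built package; the package file name is a placeholder, and the -i option (which prints explanations for reported issues) is part of standard rpmlint usage rather than anything specific to this erratum:
# Run all applicable checks against one package
rpmlint mypackage-1.0-1.el6.noarch.rpm
# Repeat with extended explanations for each reported error and warning
rpmlint -i mypackage-1.0-1.el6.noarch.rpm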
Chapter 358. Undertow Component | Chapter 358. Undertow Component Available as of Camel version 2.16 The undertow component provides HTTP and WebSocket based endpoints for consuming and producing HTTP/WebSocket requests. That is, the Undertow component behaves as a simple Web server. Undertow can also be used as a http client which mean you can also use it with Camel as a producer. Tip Since Camel version 2.21, the undertow component also supports WebSocket connections and can thus serve as a drop-in replacement for Camel websocket component or atmosphere-websocket component. Maven users will need to add the following dependency to their pom.xml for this component: <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-undertow</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency> 358.1. URI format undertow:http://hostname[:port][/resourceUri][?options] undertow:https://hostname[:port][/resourceUri][?options] undertow:ws://hostname[:port][/resourceUri][?options] undertow:wss://hostname[:port][/resourceUri][?options] You can append query options to the URI in the following format, ?option=value&option=value&... 358.2. Options The Undertow component supports 5 options which are listed below. Name Description Default Type undertowHttpBinding (advanced) To use a custom HttpBinding to control the mapping between Camel message and HttpClient. UndertowHttpBinding sslContextParameters (security) To configure security using SSLContextParameters SSLContextParameters useGlobalSslContext Parameters (security) Enable usage of global SSL context parameters. false boolean hostOptions (advanced) To configure common options, such as thread pools UndertowHostOptions resolveProperty Placeholders (advanced) Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true boolean The Undertow endpoint is configured using URI syntax: with the following path and query parameters: 358.2.1. Path Parameters (1 parameters): Name Description Default Type httpURI Required The url of the HTTP endpoint to use. URI 358.2.2. Query Parameters (22 parameters): Name Description Default Type bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean httpMethodRestrict (consumer) Used to only allow consuming if the HttpMethod matches, such as GET/POST/PUT etc. Multiple methods can be specified separated by comma. String matchOnUriPrefix (consumer) Whether or not the consumer should try to find a target consumer by matching the URI prefix if no exact match is found. false Boolean optionsEnabled (consumer) Specifies whether to enable HTTP OPTIONS for this Servlet consumer. By default OPTIONS is turned off. false boolean exceptionHandler (consumer) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. 
ExceptionHandler exchangePattern (consumer) Sets the exchange pattern when the consumer creates an exchange. ExchangePattern handlers (consumer) Specifies a comma-delimited set of io.undertow.server.HttpHandler instances to lookup in your Registry. These handlers are added to the Undertow handler chain, for example, to add security. Note You can not use different handlers with different Undertow endpoints using the same port number. The handlers is associated to the port number. If you need different handlers, then use different port numbers. String cookieHandler (producer) Configure a cookie handler to maintain a HTTP session CookieHandler keepAlive (producer) Setting to ensure socket is not closed due to inactivity true Boolean options (producer) Sets additional channel options. The options that can be used are defined in org.xnio.Options. To configure from endpoint uri, then prefix each option with option., such as option.close-abort=true&option.send-buffer=8192 Map reuseAddresses (producer) Setting to facilitate socket multiplexing true Boolean tcpNoDelay (producer) Setting to improve TCP protocol performance true Boolean throwExceptionOnFailure (producer) Option to disable throwing the HttpOperationFailedException in case of failed responses from the remote server. This allows you to get all responses regardless of the HTTP status code. true Boolean transferException (producer) If enabled and an Exchange failed processing on the consumer side and if the caused Exception was send back serialized in the response as a application/x-java-serialized-object content type. On the producer side the exception will be deserialized and thrown as is instead of the HttpOperationFailedException. The caused exception is required to be serialized. This is by default turned off. If you enable this then be aware that Java will deserialize the incoming data from the request to Java and that can be a potential security risk. false Boolean headerFilterStrategy (advanced) To use a custom HeaderFilterStrategy to filter header to and from Camel message. HeaderFilterStrategy synchronous (advanced) Sets whether synchronous processing should be strictly used, or Camel is allowed to use asynchronous processing (if supported). false boolean undertowHttpBinding (advanced) To use a custom UndertowHttpBinding to control the mapping between Camel message and undertow. UndertowHttpBinding fireWebSocketChannelEvents (websocket) if true, the consumer will post notifications to the route when a new WebSocket peer connects, disconnects, etc. See UndertowConstants.EVENT_TYPE and EventType. false boolean sendTimeout (websocket) Timeout in milliseconds when sending to a websocket channel. The default timeout is 30000 (30 seconds). 30000 Integer sendToAll (websocket) To send to all websocket subscribers. Can be used to configure on endpoint level, instead of having to use the UndertowConstants.SEND_TO_ALL header on the message. Boolean useStreaming (websocket) if true, text and binary messages coming through a WebSocket will be wrapped as java.io.Reader and java.io.InputStream respectively before they are passed to an Exchange; otherwise they will be passed as String and byte respectively. false boolean sslContextParameters (security) To configure security using SSLContextParameters SSLContextParameters 358.3. Spring Boot Auto-Configuration The component supports 10 options, which are listed below. 
Name Description Default Type camel.component.undertow.enabled Enable undertow component true Boolean camel.component.undertow.host-options.buffer-size The buffer size of the Undertow host. Integer camel.component.undertow.host-options.direct-buffers Set if the Undertow host should use direct buffers. Boolean camel.component.undertow.host-options.http2-enabled Set if the Undertow host should use http2 protocol. Boolean camel.component.undertow.host-options.io-threads The number of io threads to use in an Undertow host. Integer camel.component.undertow.host-options.worker-threads The number of worker threads to use in an Undertow host. Integer camel.component.undertow.resolve-property-placeholders Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true Boolean camel.component.undertow.ssl-context-parameters To configure security using SSLContextParameters. The option is an org.apache.camel.util.jsse.SSLContextParameters type. String camel.component.undertow.undertow-http-binding To use a custom HttpBinding to control the mapping between Camel message and HttpClient. The option is an org.apache.camel.component.undertow.UndertowHttpBinding type. String camel.component.undertow.use-global-ssl-context-parameters Enable usage of global SSL context parameters. false Boolean 358.4. Message Headers Camel uses the same message headers as the HTTP component. From Camel 2.2, it also uses the Exchange.HTTP_CHUNKED,CamelHttpChunked header to turn on or turn off the chunked encoding on the camel-undertow consumer. Camel also populates all request.parameter and request.headers. For example, given a client request with the URL, http://myserver/myserver?orderid=123 , the exchange will contain a header named orderid with the value 123. 358.5. HTTP Producer Example The following is a basic example of how to send an HTTP request to an existing HTTP endpoint. In Java DSL from("direct:start") .to("undertow:http://www.google.com"); or in XML <route> <from uri="direct:start"/> <to uri="undertow:http://www.google.com"/> </route> 358.6. HTTP Consumer Example In this sample we define a route that exposes an HTTP service at http://localhost:8080/myapp/myservice : <route> <from uri="undertow:http://localhost:8080/myapp/myservice"/> <to uri="bean:myBean"/> </route> 358.7. WebSocket Example In this sample we define a route that exposes a WebSocket service at http://localhost:8080/myapp/mysocket and returns a response to the same channel: <route> <from uri="undertow:ws://localhost:8080/myapp/mysocket"/> <transform><simple>Echo USD{body}</simple></transform> <to uri="undertow:ws://localhost:8080/myapp/mysocket"/> </route> 358.8. HTTP/2 Example In this sample, we configure the camel-undertow component to support the HTTP/2 protocol and expose an HTTP service at http://localhost:7766/foo/bar .
The directory structure of this sample is as follows: The following files are important for configuring the camel-undertow component to support the HTTP/2 protocol: 1 pom.xml: Includes the following properties and dependencies: <properties> <camel.version>2.23.1</camel.version> </properties> <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-core</artifactId> <version>USD{camel.version}</version> </dependency> <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-spring</artifactId> <version>USD{camel.version}</version> </dependency> <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-undertow</artifactId> <version>USD{camel.version}</version> </dependency> <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-http-common</artifactId> <version>USD{camel.version}</version> </dependency> <plugin> <groupId>org.apache.camel</groupId> <artifactId>camel-maven-plugin</artifactId> <version>USD{camel.version}</version> <configuration> <fileApplicationContextUri>src/main/resources/META-INF/spring/camel-context.xml</fileApplicationContextUri> </configuration> </plugin> 2 camel-context.xml: Configures the camel-undertow component as follows: <beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation=" http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd http://camel.apache.org/schema/spring http://camel.apache.org/schema/spring/camel-spring.xsd"> <camelContext id="cbr-example-context" xmlns="http://camel.apache.org/schema/spring"> <route id="cbr-route" trace="true"> <from id="_from1" uri="undertow:http://localhost:7766/foo/bar"/> <setBody id="_setBody1"> <constant>Sending Response</constant> </setBody> <log id="_log5" message="Headers USD{in.headers}"/> <log id="_log5" message="Done processing USD{body}"/> </route> </camelContext> <bean class="org.apache.camel.component.undertow.UndertowComponent" id="undertow"> <property name="hostOptions" ref="undertowHostOptions"/> </bean> <bean class="org.apache.camel.component.undertow.UndertowHostOptions" id="undertowHostOptions"> <property name="http2Enabled" value="true"/> </bean> </beans> Note The http2Enabled property which is set to false by default, is set to true for UndertowHostOptions class. This class is then referred to UndertowComponent which is used in camel route. In this example, by using the camel-maven-plugin in the pom.xml file, we can expose a HTTP service at http://localhost:7766/foo/bar when we execute the Maven command mvn camel:run . We can test this example using the curl command as follows: 358.9. Using localhost as host When you specify localhost in a URL, Camel exposes the endpoint only on the local TCP/IP network interface, so it cannot be accessed from outside the machine it operates on. If you need to expose a Jetty endpoint on a specific network interface, the numerical IP address of this interface should be used as the host. If you need to expose a Jetty endpoint on all network interfaces, the 0.0.0.0 address should be used. To listen across an entire URI prefix, see How do I let Jetty match wildcards . If you actually want to expose routes by HTTP and already have a Servlet, you should instead refer to the Servlet Transport . 358.10. Undertow consumers on WildFly The configuration of camel-undertow consumers on WildFly is different to that of standalone Camel. Producer endpoints work as per normal. 
On WildFly, camel-undertow consumers leverage the default Undertow HTTP server provided by the container. The server is defined within the undertow subsystem configuration. Here's an excerpt of the default configuration from standalone.xml: <subsystem xmlns="urn:jboss:domain:undertow:4.0"> <buffer-cache name="default" /> <server name="default-server"> <http-listener name="default" socket-binding="http" redirect-socket="https" enable-http2="true" /> <https-listener name="https" socket-binding="https" security-realm="ApplicationRealm" enable-http2="true" /> <host name="default-host" alias="localhost"> <location name="/" handler="welcome-content" /> <filter-ref name="server-header" /> <filter-ref name="x-powered-by-header" /> <http-invoker security-realm="ApplicationRealm" /> </host> </server> </subsystem> In this instance, Undertow is configured to listen on interfaces / ports specified by the http and https socket-binding. By default this is port 8080 for http and 8443 for https. This has the following implications: camel-undertow consumers will only bind to localhost:8080 or localhost:8443 Some endpoint consumer configuration options have no effect (see below), since these settings are managed by the WildFly container For example, if you configure an endpoint consumer using different host or port combinations, a warning will appear within the server log file. For example the following host & port configurations would be ignored: from("undertow:http://somehost:1234/path/to/resource") However, the consumer is still available on the default host & port localhost:8080 or localhost:8443. 358.10.1. Configuring alternative ports If alternative ports are to be accepted, then these must be configured via the WildFly subsystem configuration. This is explained in the server documentation: https://access.redhat.com/documentation/en-us/red_hat_jboss_enterprise_application_platform/7.1/html/configuration_guide/configuring_the_web_server_undertow 358.10.2. Ignored camel-undertow consumer configuration options on WildFly hostOptions Refer to the WildFly undertow configuration guide for how to configure server host options: https://access.redhat.com/documentation/en-us/red_hat_jboss_enterprise_application_platform/7.1/html-single/how_to_configure_server_security/#configure_one_way_and_two_way_ssl_tls_for_application sslContextParameters To configure SSL, refer to the WildFly SSL configuration guide: https://access.redhat.com/documentation/en-us/red_hat_jboss_enterprise_application_platform/7.1/html-single/how_to_configure_server_security/#configure_one_way_and_two_way_ssl_tls_for_application | [
"<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-undertow</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency>",
"undertow:http://hostname[:port][/resourceUri][?options] undertow:https://hostname[:port][/resourceUri][?options] undertow:ws://hostname[:port][/resourceUri][?options] undertow:wss://hostname[:port][/resourceUri][?options]",
"undertow:httpURI",
"from(\"direct:start\") .to(\"undertow:http://www.google.com\");",
"<route> <from uri=\"direct:start\"/> <to uri=\"undertow:http://www.google.com\"/> <route>",
"<route> <from uri=\"undertow:http://localhost:8080/myapp/myservice\"/> <to uri=\"bean:myBean\"/> </route>",
"<route> <from uri=\"undertow:ws://localhost:8080/myapp/mysocket\"/> <transform><simple>Echo USD{body}</simple></transform> <to uri=\"undertow:ws://localhost:8080/myapp/mysocket\"/> </route>",
"├── pom.xml 1 ├── README.md └── src └── main └── resources └── META-INF └── spring └── camel-context.xml 2",
"<properties> <camel.version>2.23.1</camel.version> </properties> <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-core</artifactId> <version>USD{camel.version}</version> </dependency> <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-spring</artifactId> <version>USD{camel.version}</version> </dependency> <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-undertow</artifactId> <version>USD{camel.version}</version> </dependency> <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-http-common</artifactId> <version>USD{camel.version}</version> </dependency> <plugin> <groupId>org.apache.camel</groupId> <artifactId>camel-maven-plugin</artifactId> <version>USD{camel.version}</version> <configuration> <fileApplicationContextUri>src/main/resources/META-INF/spring/camel-context.xml</fileApplicationContextUri> </configuration> </plugin>",
"<beans xmlns=\"http://www.springframework.org/schema/beans\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\" http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd http://camel.apache.org/schema/spring http://camel.apache.org/schema/spring/camel-spring.xsd\"> <camelContext id=\"cbr-example-context\" xmlns=\"http://camel.apache.org/schema/spring\"> <route id=\"cbr-route\" trace=\"true\"> <from id=\"_from1\" uri=\"undertow:http://localhost:7766/foo/bar\"/> <setBody id=\"_setBody1\"> <constant>Sending Response</constant> </setBody> <log id=\"_log5\" message=\"Headers USD{in.headers}\"/> <log id=\"_log5\" message=\"Done processing USD{body}\"/> </route> </camelContext> <bean class=\"org.apache.camel.component.undertow.UndertowComponent\" id=\"undertow\"> <property name=\"hostOptions\" ref=\"undertowHostOptions\"/> </bean> <bean class=\"org.apache.camel.component.undertow.UndertowHostOptions\" id=\"undertowHostOptions\"> <property name=\"http2Enabled\" value=\"true\"/> </bean> </beans>",
"curl -v --http2 http://localhost:7766/foo/bar * Trying ::1 * TCP_NODELAY set * connect to ::1 port 7766 failed: Connection refused * Trying 127.0.0.1 * TCP_NODELAY set * Connected to localhost (127.0.0.1) port 7766 (#0) > GET /foo/bar HTTP/1.1 > Host: localhost:7766 > User-Agent: curl/7.53.1 > Accept: */* > Connection: Upgrade, HTTP2-Settings > Upgrade: h2c > HTTP2-Settings: AAMAAABkAARAAAAAAAIAAAAA > < HTTP/1.1 101 Switching Protocols < Connection: Upgrade < Upgrade: h2c < Date: Sun, 10 Dec 2017 08:43:58 GMT * Received 101 * Using HTTP2, server supports multi-use * Connection state changed (HTTP/2 confirmed) * Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0 * Connection state changed (MAX_CONCURRENT_STREAMS updated)! < HTTP/2 200 < accept: */* < http2-settings: AAMAAABkAARAAAAAAAIAAAAA < breadcrumbid: ID-dhcppc1-1512886066149-0-25 < content-length: 16 < user-agent: curl/7.53.1 < date: Sun, 10 Dec 2017 08:43:58 GMT < * Connection #0 to host localhost left intact Sending Response",
"<subsystem xmlns=\"urn:jboss:domain:undertow:4.0\"> <buffer-cache name=\"default\" /> <server name=\"default-server\"> <http-listener name=\"default\" socket-binding=\"http\" redirect-socket=\"https\" enable-http2=\"true\" /> <https-listener name=\"https\" socket-binding=\"https\" security-realm=\"ApplicationRealm\" enable-http2=\"true\" /> <host name=\"default-host\" alias=\"localhost\"> <location name=\"/\" handler=\"welcome-content\" /> <filter-ref name=\"server-header\" /> <filter-ref name=\"x-powered-by-header\" /> <http-invoker security-realm=\"ApplicationRealm\" /> </host> </server> </subsystem>",
"from(\"undertow:http://somehost:1234/path/to/resource\")",
"[org.wildfly.extension.camel] (pool-2-thread-1) Ignoring configured host: http://somehost:1234/path/to/resource"
] | https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/undertow-component |
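Section 358.9 above explains that binding to localhost exposes an endpoint only on the local machine, while 0.0.0.0 exposes it on all network interfaces. The following is a minimal sketch of the consumer example from section 358.6 rewritten that way in the Java DSL; the bean name is the same placeholder used in the earlier example:
import org.apache.camel.builder.RouteBuilder;

public class MyServiceRoute extends RouteBuilder {
    @Override
    public void configure() {
        // Bind to all network interfaces instead of only the loopback interface
        from("undertow:http://0.0.0.0:8080/myapp/myservice")
            .to("bean:myBean");
    }
}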
Chapter 1. Validating an installation | Chapter 1. Validating an installation You can check the status of an OpenShift Container Platform cluster after an installation by following the procedures in this document. 1.1. Reviewing the installation log You can review a summary of an installation in the OpenShift Container Platform installation log. If an installation succeeds, the information required to access the cluster is included in the log. Prerequisites You have access to the installation host. Procedure Review the .openshift_install.log log file in the installation directory on your installation host: USD cat <install_dir>/.openshift_install.log Example output Cluster credentials are included at the end of the log if the installation is successful, as outlined in the following example: ... time="2020-12-03T09:50:47Z" level=info msg="Install complete!" time="2020-12-03T09:50:47Z" level=info msg="To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig'" time="2020-12-03T09:50:47Z" level=info msg="Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com" time="2020-12-03T09:50:47Z" level=info msg="Login to the console with user: \"kubeadmin\", and password: \"password\"" time="2020-12-03T09:50:47Z" level=debug msg="Time elapsed per stage:" time="2020-12-03T09:50:47Z" level=debug msg=" Infrastructure: 6m45s" time="2020-12-03T09:50:47Z" level=debug msg="Bootstrap Complete: 11m30s" time="2020-12-03T09:50:47Z" level=debug msg=" Bootstrap Destroy: 1m5s" time="2020-12-03T09:50:47Z" level=debug msg=" Cluster Operators: 17m31s" time="2020-12-03T09:50:47Z" level=info msg="Time elapsed: 37m26s" 1.2. Viewing the image pull source For clusters with unrestricted network connectivity, you can view the source of your pulled images by using a command on a node, such as crictl images . However, for disconnected installations, to view the source of pulled images, you must review the CRI-O logs to locate the Trying to access log entry, as shown in the following procedure. Other methods to view the image pull source, such as the crictl images command, show the non-mirrored image name, even though the image is pulled from the mirrored location. Prerequisites You have access to the cluster as a user with the cluster-admin role. Procedure Review the CRI-O logs for a master or worker node: USD oc adm node-logs <node_name> -u crio Example output The Trying to access log entry indicates where the image is being pulled from. ... Mar 17 02:52:50 ip-10-0-138-140.ec2.internal crio[1366]: time="2021-08-05 10:33:21.594930907Z" level=info msg="Pulling image: quay.io/openshift-release-dev/ocp-release:4.10.0-ppc64le" id=abcd713b-d0e1-4844-ac1c-474c5b60c07c name=/runtime.v1alpha2.ImageService/PullImage Mar 17 02:52:50 ip-10-0-138-140.ec2.internal crio[1484]: time="2021-03-17 02:52:50.194341109Z" level=info msg="Trying to access \"li0317gcp1.mirror-registry.qe.gcp.devcluster.openshift.com:5000/ocp/release@sha256:1926eae7cacb9c00f142ec98b00628970e974284b6ddaf9a6a086cb9af7a6c31\"" Mar 17 02:52:50 ip-10-0-138-140.ec2.internal crio[1484]: time="2021-03-17 02:52:50.226788351Z" level=info msg="Trying to access \"li0317gcp1.mirror-registry.qe.gcp.devcluster.openshift.com:5000/ocp/release@sha256:1926eae7cacb9c00f142ec98b00628970e974284b6ddaf9a6a086cb9af7a6c31\"" ... The log might show the image pull source twice, as shown in the preceding example. 
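When a node has pulled many images, it can help to narrow the log to the relevant entries. The following sketch filters the same CRI-O log for the Trying to access lines discussed above; the node name is a placeholder:
$ oc adm node-logs <node_name> -u crio | grep "Trying to access"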
If your ImageContentSourcePolicy object lists multiple mirrors, OpenShift Container Platform attempts to pull the images in the order listed in the configuration, for example: 1.3. Getting cluster version, status, and update details You can view the cluster version and status by running the oc get clusterversion command. If the status shows that the installation is still progressing, you can review the status of the Operators for more information. You can also list the current update channel and review the available cluster updates. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have installed the OpenShift CLI ( oc ). Procedure Obtain the cluster version and overall status: USD oc get clusterversion Example output NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version 4.6.4 True False 6m25s Cluster version is 4.6.4 The example output indicates that the cluster has been installed successfully. If the cluster status indicates that the installation is still progressing, you can obtain more detailed progress information by checking the status of the Operators: USD oc get clusteroperators.config.openshift.io View a detailed summary of cluster specifications, update availability, and update history: USD oc describe clusterversion List the current update channel: USD oc get clusterversion -o jsonpath='{.items[0].spec}{"\n"}' Example output {"channel":"stable-4.6","clusterID":"245539c1-72a3-41aa-9cec-72ed8cf25c5c"} Review the available cluster updates: USD oc adm upgrade Example output Cluster version is 4.6.4 Updates: VERSION IMAGE 4.6.6 quay.io/openshift-release-dev/ocp-release@sha256:c7e8f18e8116356701bd23ae3a23fb9892dd5ea66c8300662ef30563d7104f39 Additional resources See Querying Operator status after installation for more information about querying Operator status if your installation is still progressing. See Troubleshooting Operator issues for information about investigating issues with Operators. See Updating a cluster between minor versions for more information on updating your cluster. See Understanding update channels and releases for an overview about update release channels. 1.4. Querying the status of the cluster nodes by using the CLI You can verify the status of the cluster nodes after an installation. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have installed the OpenShift CLI ( oc ). Procedure List the status of the cluster nodes. Verify that the output lists all of the expected control plane and compute nodes and that each node has a Ready status: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION compute-1.example.com Ready worker 33m v1.26.0 control-plane-1.example.com Ready master 41m v1.26.0 control-plane-2.example.com Ready master 45m v1.26.0 compute-2.example.com Ready worker 38m v1.26.0 compute-3.example.com Ready worker 33m v1.26.0 control-plane-3.example.com Ready master 41m v1.26.0 Review CPU and memory resource availability for each cluster node: USD oc adm top nodes Example output NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% compute-1.example.com 128m 8% 1132Mi 16% control-plane-1.example.com 801m 22% 3471Mi 23% control-plane-2.example.com 1718m 49% 6085Mi 40% compute-2.example.com 935m 62% 5178Mi 75% compute-3.example.com 111m 7% 1131Mi 16% control-plane-3.example.com 942m 26% 4100Mi 27% Additional resources See Verifying node health for more details about reviewing node health and investigating node issues. 1.5. 
Reviewing the cluster status from the OpenShift Container Platform web console You can review the following information in the Overview page in the OpenShift Container Platform web console: The general status of your cluster The status of the control plane, cluster Operators, and storage CPU, memory, file system, network transfer, and pod availability The API address of the cluster, the cluster ID, and the name of the provider Cluster version information Cluster update status, including details of the current update channel and available updates A cluster inventory detailing node, pod, storage class, and persistent volume claim (PVC) information A list of ongoing cluster activities and recent events Prerequisites You have access to the cluster as a user with the cluster-admin role. Procedure In the Administrator perspective, navigate to Home Overview . 1.6. Reviewing the cluster status from Red Hat OpenShift Cluster Manager From the OpenShift Container Platform web console, you can review detailed information about the status of your cluster on OpenShift Cluster Manager. Prerequisites You are logged in to OpenShift Cluster Manager Hybrid Cloud Console . You have access to the cluster as a user with the cluster-admin role. Procedure Go to the Clusters list in OpenShift Cluster Manager Hybrid Cloud Console and locate your OpenShift Container Platform cluster. Click the Overview tab for your cluster. Review the following information about your cluster: vCPU and memory availability and resource usage The cluster ID, status, type, region, and the provider name Node counts by node type Cluster version details, the creation date of the cluster, and the name of the cluster owner The life cycle support status of the cluster Subscription information, including the service level agreement (SLA) status, the subscription unit type, the production status of the cluster, the subscription obligation, and the service level Tip To view the history for your cluster, click the Cluster history tab. Navigate to the Monitoring page to review the following information: A list of any issues that have been detected A list of alerts that are firing The cluster Operator status and version The cluster's resource usage Optional: You can view information about your cluster that Red Hat Insights collects by navigating to the Overview menu. From this menu you can view the following information: Potential issues that your cluster might be exposed to, categorized by risk level Health-check status by category Additional resources See Using Insights to identify issues with your cluster for more information about reviewing potential issues with your cluster. 1.7. Checking cluster resource availability and utilization OpenShift Container Platform provides a comprehensive set of monitoring dashboards that help you understand the state of cluster components. In the Administrator perspective, you can access dashboards for core OpenShift Container Platform components, including: etcd Kubernetes compute resources Kubernetes network resources Prometheus Dashboards relating to cluster and node performance Figure 1.1. Example compute resources dashboard Prerequisites You have access to the cluster as a user with the cluster-admin role. Procedure In the Administrator perspective in the OpenShift Container Platform web console, navigate to Observe Dashboards . Choose a dashboard in the Dashboard list. Some dashboards, such as the etcd dashboard, produce additional sub-menus when selected. 
Optional: Select a time range for the graphs in the Time Range list. Select a pre-defined time period. Set a custom time range by selecting Custom time range in the Time Range list. Input or select the From and To dates and times. Click Save to save the custom time range. Optional: Select a Refresh Interval . Hover over each of the graphs within a dashboard to display detailed information about specific items. Additional resources See Monitoring overview for more information about the OpenShift Container Platform monitoring stack. 1.8. Listing alerts that are firing Alerts provide notifications when a set of defined conditions are true in an OpenShift Container Platform cluster. You can review the alerts that are firing in your cluster by using the Alerting UI in the OpenShift Container Platform web console. Prerequisites You have access to the cluster as a user with the cluster-admin role. Procedure In the Administrator perspective, navigate to the Observe Alerting Alerts page. Review the alerts that are firing, including their Severity , State , and Source . Select an alert to view more detailed information in the Alert Details page. Additional resources See Managing alerts for further details about alerting in OpenShift Container Platform. 1.9. Next steps See Troubleshooting installations if you experience issues when installing your cluster. After installing OpenShift Container Platform, you can further expand and customize your cluster . | [
"cat <install_dir>/.openshift_install.log",
"time=\"2020-12-03T09:50:47Z\" level=info msg=\"Install complete!\" time=\"2020-12-03T09:50:47Z\" level=info msg=\"To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig'\" time=\"2020-12-03T09:50:47Z\" level=info msg=\"Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com\" time=\"2020-12-03T09:50:47Z\" level=info msg=\"Login to the console with user: \\\"kubeadmin\\\", and password: \\\"password\\\"\" time=\"2020-12-03T09:50:47Z\" level=debug msg=\"Time elapsed per stage:\" time=\"2020-12-03T09:50:47Z\" level=debug msg=\" Infrastructure: 6m45s\" time=\"2020-12-03T09:50:47Z\" level=debug msg=\"Bootstrap Complete: 11m30s\" time=\"2020-12-03T09:50:47Z\" level=debug msg=\" Bootstrap Destroy: 1m5s\" time=\"2020-12-03T09:50:47Z\" level=debug msg=\" Cluster Operators: 17m31s\" time=\"2020-12-03T09:50:47Z\" level=info msg=\"Time elapsed: 37m26s\"",
"oc adm node-logs <node_name> -u crio",
"Mar 17 02:52:50 ip-10-0-138-140.ec2.internal crio[1366]: time=\"2021-08-05 10:33:21.594930907Z\" level=info msg=\"Pulling image: quay.io/openshift-release-dev/ocp-release:4.10.0-ppc64le\" id=abcd713b-d0e1-4844-ac1c-474c5b60c07c name=/runtime.v1alpha2.ImageService/PullImage Mar 17 02:52:50 ip-10-0-138-140.ec2.internal crio[1484]: time=\"2021-03-17 02:52:50.194341109Z\" level=info msg=\"Trying to access \\\"li0317gcp1.mirror-registry.qe.gcp.devcluster.openshift.com:5000/ocp/release@sha256:1926eae7cacb9c00f142ec98b00628970e974284b6ddaf9a6a086cb9af7a6c31\\\"\" Mar 17 02:52:50 ip-10-0-138-140.ec2.internal crio[1484]: time=\"2021-03-17 02:52:50.226788351Z\" level=info msg=\"Trying to access \\\"li0317gcp1.mirror-registry.qe.gcp.devcluster.openshift.com:5000/ocp/release@sha256:1926eae7cacb9c00f142ec98b00628970e974284b6ddaf9a6a086cb9af7a6c31\\\"\"",
"Trying to access \\\"li0317gcp1.mirror-registry.qe.gcp.devcluster.openshift.com:5000/ocp/release@sha256:1926eae7cacb9c00f142ec98b00628970e974284b6ddaf9a6a086cb9af7a6c31\\\" Trying to access \\\"li0317gcp2.mirror-registry.qe.gcp.devcluster.openshift.com:5000/ocp/release@sha256:1926eae7cacb9c00f142ec98b00628970e974284b6ddaf9a6a086cb9af7a6c31\\\"",
"oc get clusterversion",
"NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version 4.6.4 True False 6m25s Cluster version is 4.6.4",
"oc get clusteroperators.config.openshift.io",
"oc describe clusterversion",
"oc get clusterversion -o jsonpath='{.items[0].spec}{\"\\n\"}'",
"{\"channel\":\"stable-4.6\",\"clusterID\":\"245539c1-72a3-41aa-9cec-72ed8cf25c5c\"}",
"oc adm upgrade",
"Cluster version is 4.6.4 Updates: VERSION IMAGE 4.6.6 quay.io/openshift-release-dev/ocp-release@sha256:c7e8f18e8116356701bd23ae3a23fb9892dd5ea66c8300662ef30563d7104f39",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION compute-1.example.com Ready worker 33m v1.26.0 control-plane-1.example.com Ready master 41m v1.26.0 control-plane-2.example.com Ready master 45m v1.26.0 compute-2.example.com Ready worker 38m v1.26.0 compute-3.example.com Ready worker 33m v1.26.0 control-plane-3.example.com Ready master 41m v1.26.0",
"oc adm top nodes",
"NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% compute-1.example.com 128m 8% 1132Mi 16% control-plane-1.example.com 801m 22% 3471Mi 23% control-plane-2.example.com 1718m 49% 6085Mi 40% compute-2.example.com 935m 62% 5178Mi 75% compute-3.example.com 111m 7% 1131Mi 16% control-plane-3.example.com 942m 26% 4100Mi 27%"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/validation_and_troubleshooting/validating-an-installation |
Appendix C. Revision History | Appendix C. Revision History Revision History Revision 0.5-0 Wed Feb 12 2020 Jaroslav Klech Provided a complete kernel version to Architectures and New Features chapters. Revision 0.4-9 Mon Oct 07 2019 Jiri Herrmann Clarified a Technology Preview note related to OVMF. Revision 0.4-8 Mon May 13 2019 Lenka Spackova Added a known issue related to freeradius upgrade (Networking). Revision 0.4-7 Sun Apr 28 2019 Lenka Spackova Improved wording of a Technology Preview feature description (File Systems). Revision 0.4-6 Mon Feb 04 2019 Lenka Spackova Improved structure of the book. Revision 0.4-5 Thu Sep 13 2018 Lenka Spackova Moved CephFS from Technology Previews to fully supported features (File Systems). Revision 0.4-4 Tue Apr 17 2018 Lenka Spackova Updated a recommendation related to the sslwrap() deprecation. Revision 0.4-3 Tue Apr 10 2018 Lenka Spackova Added a deprecation note related to the inputname option of the rsyslog imudp module. Revision 0.4-2 Thu Apr 05 2018 Lenka Spackova Moved CAT to Technology Previews (Kernel). Revision 0.4-1 Thu Mar 22 2018 Lenka Spackova Fixed the openldap-servers package name (Deprecated Functionality). Revision 0.4-0 Fri Mar 16 2018 Lenka Spackova Added a pcs-related bug fix (Clustering). Revision 0.3-9 Mon Feb 19 2018 Mirek Jahoda The incorrectly placed TPM-related features moved to the Technnology Preview section. Revision 0.3-8 Tue Feb 06 2018 Lenka Spackova Added a missing Technology Preview - OVMF (Virtualization). Added information regarding deprecation of containers using the libvirt-lxc tooling. Revision 0.3-7 Wed Jan 17 2018 Lenka Spackova Updated the FCoE deprecation notice. Revision 0.3-6 Wed Jan 10 2018 Lenka Spackova Changed the status of Device DAX for NVDIMM devices from Technology Preview to fully supported (Storage). Revision 0.3-5 Thu Dec 14 2017 Lenka Spackova Unified the structure of deprecated drivers. Revision 0.3-4 Tue Dec 12 2017 Lenka Spackova Updated deprecated adapters from the qla2xxx driver. Revision 0.3-3 Wed Nov 22 2017 Lenka Spackova Added information regarding pam_krb5 to sssd migration (Deprecated Functionality). Revision 0.3-2 Wed Nov 15 2017 Lenka Spackova Fixed a typo. Revision 0.3-1 Tue Oct 31 2017 Lenka Spackova Added an LVM-related bug fix description (Storage). Revision 0.3-3 Mon Oct 30 2017 Lenka Spackova Added an autofs bug fix description (File Systems). Added information on changes in the ld linker behavior to Deprecated Functionality. Revision 0.3-2 Wed Sep 13 2017 Lenka Spackova Added information regarding limited support for visuals in the Xorg server. Revision 0.3-1 Mon Sep 11 2017 Lenka Spackova Added CUIR enhanced scope detection to Technology Previews (Kernel). Updated openssh rebase description in New Features (Security). Revision 0.3-0 Mon Sep 04 2017 Lenka Spackova Added two known issues (Security, Desktop). Revision 0.2-9 Mon Aug 21 2017 Lenka Spackova Added tcp_wrappers to Deprecated Functionality. Revision 0.2-8 Tue Aug 15 2017 Lenka Spackova Added several new features and a known issue. Revision 0.2-7 Mon Aug 14 2017 Lenka Spackova Removed a duplicate note. Revision 0.2-6 Thu Aug 10 2017 Lenka Spackova Updated several known issues. Revision 0.2-5 Tue Aug 08 2017 Lenka Spackova Added two known issues. Revision 0.2-4 Mon Aug 07 2017 Lenka Spackova Updated FCoE deprecation notice. Minor updates and additions. Revision 0.2-3 Fri Aug 04 2017 Lenka Spackova Moved several new features from Virtualization to System and Subscription Management. 
Revision 0.2-2 Thu Aug 03 2017 Lenka Spackova Updated information on Btrfs ; it is now both in the Technology Previews and Deprecated Functionality parts. Minor updates and additions. Revision 0.2-1 Tue Aug 01 2017 Lenka Spackova Release of the Red Hat Enterprise Linux 7.4 Release Notes. Revision 0.0-4 Tue May 23 2017 Lenka Spackova Release of the Red Hat Enterprise Linux 7.4 Beta Release Notes. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.4_release_notes/appe-7.4_release_notes-revision_history |
Chapter 5. Post-installation node tasks | Chapter 5. Post-installation node tasks After installing OpenShift Container Platform, you can further expand and customize your cluster to your requirements through certain node tasks. 5.1. Adding RHEL compute machines to an OpenShift Container Platform cluster Understand and work with RHEL compute nodes. 5.1.1. About adding RHEL compute nodes to a cluster In OpenShift Container Platform 4.7, you have the option of using Red Hat Enterprise Linux (RHEL) machines as compute machines, which are also known as worker machines, in your cluster if you use a user-provisioned infrastructure installation. You must use Red Hat Enterprise Linux CoreOS (RHCOS) machines for the control plane, or master, machines in your cluster. As with all installations that use user-provisioned infrastructure, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Important Because removing OpenShift Container Platform from a machine in the cluster requires destroying the operating system, you must use dedicated hardware for any RHEL machines that you add to the cluster. Important Swap memory is disabled on all RHEL machines that you add to your OpenShift Container Platform cluster. You cannot enable swap memory on these machines. You must add any RHEL compute machines to the cluster after you initialize the control plane. 5.1.2. System requirements for RHEL compute nodes The Red Hat Enterprise Linux (RHEL) compute, or worker, machine hosts in your OpenShift Container Platform environment must meet the following minimum hardware specifications and system-level requirements: You must have an active OpenShift Container Platform subscription on your Red Hat account. If you do not, contact your sales representative for more information. Production environments must provide compute machines to support your expected workloads. As a cluster administrator, you must calculate the expected workload and add about 10 percent for overhead. For production environments, allocate enough resources so that a node host failure does not affect your maximum capacity. Each system must meet the following hardware requirements: Physical or virtual system, or an instance running on a public or private IaaS. Base OS: RHEL 7.9 with "Minimal" installation option. Important Adding RHEL 7 compute machines to an OpenShift Container Platform cluster is deprecated. Deprecated functionality is still included in OpenShift Container Platform and continues to be supported; however, it will be removed in a future release of this product and is not recommended for new deployments. In addition, you must not upgrade your compute machines to RHEL 8 because support is not available in this release. For the most recent list of major functionality that has been deprecated or removed within OpenShift Container Platform, refer to the Deprecated and removed features section of the OpenShift Container Platform release notes. If you deployed OpenShift Container Platform in FIPS mode, you must enable FIPS on the RHEL machine before you boot it. See Enabling FIPS Mode in the RHEL 7 documentation. Important The use of FIPS Validated / Modules in Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture. NetworkManager 1.0 or later. 1 vCPU. Minimum 8 GB RAM. 
Minimum 15 GB hard disk space for the file system containing /var/ . Minimum 1 GB hard disk space for the file system containing /usr/local/bin/ . Minimum 1 GB hard disk space for the file system containing the system's temporary directory. The system's temporary directory is determined according to the rules defined in the tempfile module in Python's standard library. Each system must meet any additional requirements for your system provider. For example, if you installed your cluster on VMware vSphere, your disks must be configured according to its storage guidelines and the disk.enableUUID=true attribute must be set. Each system must be able to access the cluster's API endpoints by using DNS-resolvable hostnames. Any network security access control that is in place must allow the system access to the cluster's API service endpoints. 5.1.2.1. Certificate signing requests management Because your cluster has limited access to automatic machine management when you use infrastructure that you provision, you must provide a mechanism for approving cluster certificate signing requests (CSRs) after installation. The kube-controller-manager only approves the kubelet client CSRs. The machine-approver cannot guarantee the validity of a serving certificate that is requested by using kubelet credentials because it cannot confirm that the correct machine issued the request. You must determine and implement a method of verifying the validity of the kubelet serving certificate requests and approving them. 5.1.3. Preparing the machine to run the playbook Before you can add compute machines that use Red Hat Enterprise Linux (RHEL) as the operating system to an OpenShift Container Platform 4.7 cluster, you must prepare a RHEL 7 machine to run an Ansible playbook that adds the new node to the cluster. This machine is not part of the cluster but must be able to access it. Prerequisites Install the OpenShift CLI ( oc ) on the machine that you run the playbook on. Log in as a user with cluster-admin permission. Procedure Ensure that the kubeconfig file for the cluster and the installation program that you used to install the cluster are on the machine. One way to accomplish this is to use the same machine that you used to install the cluster. Configure the machine to access all of the RHEL hosts that you plan to use as compute machines. You can use any method that your company allows, including a bastion with an SSH proxy or a VPN. Configure a user on the machine that you run the playbook on that has SSH access to all of the RHEL hosts. Important If you use SSH key-based authentication, you must manage the key with an SSH agent. 
If you have not already done so, register the machine with RHSM and attach a pool with an OpenShift subscription to it: Register the machine with RHSM: # subscription-manager register --username=<user_name> --password=<password> Pull the latest subscription data from RHSM: # subscription-manager refresh List the available subscriptions: # subscription-manager list --available --matches '*OpenShift*' In the output for the command, find the pool ID for an OpenShift Container Platform subscription and attach it: # subscription-manager attach --pool=<pool_id> Enable the repositories required by OpenShift Container Platform 4.7: # subscription-manager repos \ --enable="rhel-7-server-rpms" \ --enable="rhel-7-server-extras-rpms" \ --enable="rhel-7-server-ansible-2.9-rpms" \ --enable="rhel-7-server-ose-4.7-rpms" Install the required packages, including openshift-ansible : # yum install openshift-ansible openshift-clients jq The openshift-ansible package provides installation program utilities and pulls in other packages that you require to add a RHEL compute node to your cluster, such as Ansible, playbooks, and related configuration files. The openshift-clients provides the oc CLI, and the jq package improves the display of JSON output on your command line. 5.1.4. Preparing a RHEL compute node Before you add a Red Hat Enterprise Linux (RHEL) machine to your OpenShift Container Platform cluster, you must register each host with Red Hat Subscription Manager (RHSM), attach an active OpenShift Container Platform subscription, and enable the required repositories. On each host, register with RHSM: # subscription-manager register --username=<user_name> --password=<password> Pull the latest subscription data from RHSM: # subscription-manager refresh List the available subscriptions: # subscription-manager list --available --matches '*OpenShift*' In the output for the command, find the pool ID for an OpenShift Container Platform subscription and attach it: # subscription-manager attach --pool=<pool_id> Disable all yum repositories: Disable all the enabled RHSM repositories: # subscription-manager repos --disable="*" List the remaining yum repositories and note their names under repo id , if any: # yum repolist Use yum-config-manager to disable the remaining yum repositories: # yum-config-manager --disable <repo_id> Alternatively, disable all repositories: # yum-config-manager --disable \* Note that this might take a few minutes if you have a large number of available repositories Enable only the repositories required by OpenShift Container Platform 4.7: # subscription-manager repos \ --enable="rhel-7-server-rpms" \ --enable="rhel-7-fast-datapath-rpms" \ --enable="rhel-7-server-extras-rpms" \ --enable="rhel-7-server-optional-rpms" \ --enable="rhel-7-server-ose-4.7-rpms" Stop and disable firewalld on the host: # systemctl disable --now firewalld.service Note You must not enable firewalld later. If you do, you cannot access OpenShift Container Platform logs on the worker. 5.1.5. Adding a RHEL compute machine to your cluster You can add compute machines that use Red Hat Enterprise Linux as the operating system to an OpenShift Container Platform 4.7 cluster. Prerequisites You installed the required packages and performed the necessary configuration on the machine that you run the playbook on. You prepared the RHEL hosts for installation. 
Procedure Perform the following steps on the machine that you prepared to run the playbook: Create an Ansible inventory file that is named /<path>/inventory/hosts that defines your compute machine hosts and required variables, as sketched before this procedure: 1 Specify the user name that runs the Ansible tasks on the remote compute machines. 2 If you do not specify root for the ansible_user , you must set ansible_become to True and assign the user sudo permissions. 3 Specify the path and file name of the kubeconfig file for your cluster. 4 List each RHEL machine to add to your cluster. You must provide the fully qualified domain name for each host. This name is the hostname that the cluster uses to access the machine, so set the correct public or private name to access the machine. Navigate to the Ansible playbook directory: USD cd /usr/share/ansible/openshift-ansible Run the playbook: USD ansible-playbook -i /<path>/inventory/hosts playbooks/scaleup.yml 1 1 For <path> , specify the path to the Ansible inventory file that you created. 5.1.6. Required parameters for the Ansible hosts file You must define the following parameters in the Ansible hosts file before you add Red Hat Enterprise Linux (RHEL) compute machines to your cluster. Parameter Description Values ansible_user The SSH user that allows SSH-based authentication without requiring a password. If you use SSH key-based authentication, then you must manage the key with an SSH agent. A user name on the system. The default value is root . ansible_become If the value of ansible_user is not root, you must set ansible_become to True , and the user that you specify as the ansible_user must be configured for passwordless sudo access. True . If the value is not True , do not specify or define this parameter. openshift_kubeconfig_path Specifies a path and file name to a local directory that contains the kubeconfig file for your cluster. The path and name of the configuration file. 5.1.7. Optional: Removing RHCOS compute machines from a cluster After you add the Red Hat Enterprise Linux (RHEL) compute machines to your cluster, you can optionally remove the Red Hat Enterprise Linux CoreOS (RHCOS) compute machines to free up resources. Prerequisites You have added RHEL compute machines to your cluster. Procedure View the list of machines and record the node names of the RHCOS compute machines: USD oc get nodes -o wide For each RHCOS compute machine, delete the node: Mark the node as unschedulable by running the oc adm cordon command: USD oc adm cordon <node_name> 1 1 Specify the node name of one of the RHCOS compute machines. Drain all the pods from the node: USD oc adm drain <node_name> --force --delete-emptydir-data --ignore-daemonsets 1 1 Specify the node name of the RHCOS compute machine that you isolated. Delete the node: USD oc delete nodes <node_name> 1 1 Specify the node name of the RHCOS compute machine that you drained. Review the list of compute machines to ensure that only the RHEL nodes remain: USD oc get nodes -o wide Remove the RHCOS machines from the load balancer for your cluster's compute machines. You can delete the virtual machines or reimage the physical hardware for the RHCOS compute machines. 5.2. Adding RHCOS compute machines to an OpenShift Container Platform cluster You can add more Red Hat Enterprise Linux CoreOS (RHCOS) compute machines to your OpenShift Container Platform cluster on bare metal. Before you add more compute machines to a cluster that you installed on bare metal infrastructure, you must create RHCOS machines for it to use.
You can either use an ISO image or network PXE booting to create the machines. 5.2.1. Prerequisites You installed a cluster on bare metal. You have installation media and Red Hat Enterprise Linux CoreOS (RHCOS) images that you used to create your cluster. If you do not have these files, you must obtain them by following the instructions in the installation procedure . 5.2.2. Creating more RHCOS machines using an ISO image You can create more Red Hat Enterprise Linux CoreOS (RHCOS) compute machines for your bare metal cluster by using an ISO image to create the machines. Prerequisites Obtain the URL of the Ignition config file for the compute machines for your cluster. You uploaded this file to your HTTP server during installation. Procedure Use the ISO file to install RHCOS on more compute machines. Use the same method that you used when you created machines before you installed the cluster: Burn the ISO image to a disk and boot it directly. Use ISO redirection with a LOM interface. After the instance boots, press the TAB or E key to edit the kernel command line. Add the parameters to the kernel command line: coreos.inst.install_dev=sda 1 coreos.inst.ignition_url=http://example.com/worker.ign 2 1 Specify the block device of the system to install to. 2 Specify the URL of the compute Ignition config file. Only HTTP and HTTPS protocols are supported. Press Enter to complete the installation. After RHCOS installs, the system reboots. After the system reboots, it applies the Ignition config file that you specified. Continue to create more compute machines for your cluster. 5.2.3. Creating more RHCOS machines by PXE or iPXE booting You can create more Red Hat Enterprise Linux CoreOS (RHCOS) compute machines for your bare metal cluster by using PXE or iPXE booting. Prerequisites Obtain the URL of the Ignition config file for the compute machines for your cluster. You uploaded this file to your HTTP server during installation. Obtain the URLs of the RHCOS ISO image, compressed metal BIOS, kernel , and initramfs files that you uploaded to your HTTP server during cluster installation. You have access to the PXE booting infrastructure that you used to create the machines for your OpenShift Container Platform cluster during installation. The machines must boot from their local disks after RHCOS is installed on them. If you use UEFI, you have access to the grub.conf file that you modified during OpenShift Container Platform installation. Procedure Confirm that your PXE or iPXE installation for the RHCOS images is correct. For PXE: 1 Specify the location of the live kernel file that you uploaded to your HTTP server. 2 Specify locations of the RHCOS files that you uploaded to your HTTP server. The initrd parameter value is the location of the live initramfs file, the coreos.inst.ignition_url parameter value is the location of the worker Ignition config file, and the coreos.live.rootfs_url parameter value is the location of the live rootfs file. The coreos.inst.ignition_url and coreos.live.rootfs_url parameters only support HTTP and HTTPS. This configuration does not enable serial console access on machines with a graphical console. To configure a different console, add one or more console= arguments to the APPEND line. For example, add console=tty0 console=ttyS0 to set the first PC serial port as the primary console and the graphical console as a secondary console. For more information, see How does one set up a serial terminal and/or console in Red Hat Enterprise Linux? . 
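As a point of reference, a PXE menu entry that uses these parameters typically resembles the following sketch. The HTTP server URL, the RHCOS live kernel, initramfs, and rootfs file names, and the install device are placeholders that you must adapt to the artifacts you uploaded during installation:

DEFAULT pxeboot
TIMEOUT 20
PROMPT 0
LABEL pxeboot
    KERNEL http://<http_server>/rhcos-<version>-live-kernel-<architecture>
    APPEND initrd=http://<http_server>/rhcos-<version>-live-initramfs.<architecture>.img coreos.inst.install_dev=sda coreos.inst.ignition_url=http://<http_server>/worker.ign coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img

If you need a serial console, append the console= arguments described above to the end of the APPEND line.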
For iPXE: 1 Specify locations of the RHCOS files that you uploaded to your HTTP server. The kernel parameter value is the location of the kernel file, the initrd=main argument is needed for booting on UEFI systems, the coreos.inst.ignition_url parameter value is the location of the worker Ignition config file, and the coreos.live.rootfs_url parameter value is the location of the live rootfs file. The coreos.inst.ignition_url and coreos.live.rootfs_url parameters only support HTTP and HTTPS. 2 Specify the location of the initramfs file that you uploaded to your HTTP server. This configuration does not enable serial console access on machines with a graphical console. To configure a different console, add one or more console= arguments to the kernel line. For example, add console=tty0 console=ttyS0 to set the first PC serial port as the primary console and the graphical console as a secondary console. For more information, see How does one set up a serial terminal and/or console in Red Hat Enterprise Linux? . Use the PXE or iPXE infrastructure to create the required compute machines for your cluster. 5.2.4. Approving the certificate signing requests for your machines When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests. Prerequisites You added machines to your cluster. Procedure Confirm that the cluster recognizes the machines: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.20.0 master-1 Ready master 63m v1.20.0 master-2 Ready master 64m v1.20.0 The output lists all of the machines that you created. Note The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved. Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending ... In this example, two machines are joining the cluster. You might see more approved CSRs in the list. If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. Once the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters. Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). 
If a request is not approved, then the oc exec , oc rsh , and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note Some Operators might not become available until some CSRs are approved. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ... If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines: To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.20.0 master-1 Ready master 73m v1.20.0 master-2 Ready master 74m v1.20.0 worker-0 Ready worker 11m v1.20.0 worker-1 Ready worker 11m v1.20.0 Note It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status. Additional information For more information on CSRs, see Certificate Signing Requests . 5.3. Deploying machine health checks Understand and deploy machine health checks. Important This process is not applicable for clusters with manually provisioned machines. You can use the advanced machine management and scaling capabilities only in clusters where the Machine API is operational. 5.3.1. About machine health checks Machine health checks automatically repair unhealthy machines in a particular machine pool. To monitor machine health, create a resource to define the configuration for a controller. Set a condition to check, such as staying in the NotReady status for five minutes or displaying a permanent condition in the node-problem-detector, and a label for the set of machines to monitor. Note You cannot apply a machine health check to a machine with the master role. The controller that observes a MachineHealthCheck resource checks for the defined condition. If a machine fails the health check, the machine is automatically deleted and one is created to take its place. When a machine is deleted, you see a machine deleted event. To limit disruptive impact of the machine deletion, the controller drains and deletes only one node at a time. 
If there are more unhealthy machines than the maxUnhealthy threshold allows for in the targeted pool of machines, remediation stops and therefore enables manual intervention. Note Consider the timeouts carefully, accounting for workloads and requirements. Long timeouts can result in long periods of downtime for the workload on the unhealthy machine. Too short timeouts can result in a remediation loop. For example, the timeout for checking the NotReady status must be long enough to allow the machine to complete the startup process. To stop the check, remove the resource. 5.3.1.1. Limitations when deploying machine health checks There are limitations to consider before deploying a machine health check: Only machines owned by a machine set are remediated by a machine health check. Control plane machines are not currently supported and are not remediated if they are unhealthy. If the node for a machine is removed from the cluster, a machine health check considers the machine to be unhealthy and remediates it immediately. If the corresponding node for a machine does not join the cluster after the nodeStartupTimeout , the machine is remediated. A machine is remediated immediately if the Machine resource phase is Failed . 5.3.2. Sample MachineHealthCheck resource The MachineHealthCheck resource for all cloud-based installation types, and other than bare metal, resembles the following YAML file: apiVersion: machine.openshift.io/v1beta1 kind: MachineHealthCheck metadata: name: example 1 namespace: openshift-machine-api spec: selector: matchLabels: machine.openshift.io/cluster-api-machine-role: <role> 2 machine.openshift.io/cluster-api-machine-type: <role> 3 machine.openshift.io/cluster-api-machineset: <cluster_name>-<label>-<zone> 4 unhealthyConditions: - type: "Ready" timeout: "300s" 5 status: "False" - type: "Ready" timeout: "300s" 6 status: "Unknown" maxUnhealthy: "40%" 7 nodeStartupTimeout: "10m" 8 1 Specify the name of the machine health check to deploy. 2 3 Specify a label for the machine pool that you want to check. 4 Specify the machine set to track in <cluster_name>-<label>-<zone> format. For example, prod-node-us-east-1a . 5 6 Specify the timeout duration for a node condition. If a condition is met for the duration of the timeout, the machine will be remediated. Long timeouts can result in long periods of downtime for a workload on an unhealthy machine. 7 Specify the amount of machines allowed to be concurrently remediated in the targeted pool. This can be set as a percentage or an integer. If the number of unhealthy machines exceeds the limit set by maxUnhealthy , remediation is not performed. 8 Specify the timeout duration that a machine health check must wait for a node to join the cluster before a machine is determined to be unhealthy. Note The matchLabels are examples only; you must map your machine groups based on your specific needs. 5.3.2.1. Short-circuiting machine health check remediation Short circuiting ensures that machine health checks remediate machines only when the cluster is healthy. Short-circuiting is configured through the maxUnhealthy field in the MachineHealthCheck resource. If the user defines a value for the maxUnhealthy field, before remediating any machines, the MachineHealthCheck compares the value of maxUnhealthy with the number of machines within its target pool that it has determined to be unhealthy. Remediation is not performed if the number of unhealthy machines exceeds the maxUnhealthy limit. 
Important If maxUnhealthy is not set, the value defaults to 100% and the machines are remediated regardless of the state of the cluster. The appropriate maxUnhealthy value depends on the scale of the cluster you deploy and how many machines the MachineHealthCheck covers. For example, you can use the maxUnhealthy value to cover multiple machine sets across multiple availability zones so that if you lose an entire zone, your maxUnhealthy setting prevents further remediation within the cluster. The maxUnhealthy field can be set as either an integer or percentage. There are different remediation implementations depending on the maxUnhealthy value. 5.3.2.1.1. Setting maxUnhealthy by using an absolute value If maxUnhealthy is set to 2 : Remediation will be performed if 2 or fewer nodes are unhealthy Remediation will not be performed if 3 or more nodes are unhealthy These values are independent of how many machines are being checked by the machine health check. 5.3.2.1.2. Setting maxUnhealthy by using percentages If maxUnhealthy is set to 40% and there are 25 machines being checked: Remediation will be performed if 10 or fewer nodes are unhealthy Remediation will not be performed if 11 or more nodes are unhealthy If maxUnhealthy is set to 40% and there are 6 machines being checked: Remediation will be performed if 2 or fewer nodes are unhealthy Remediation will not be performed if 3 or more nodes are unhealthy Note The allowed number of machines is rounded down when the percentage of maxUnhealthy machines that are checked is not a whole number. 5.3.3. Creating a MachineHealthCheck resource You can create a MachineHealthCheck resource for all MachineSets in your cluster. You should not create a MachineHealthCheck resource that targets control plane machines. Prerequisites Install the oc command line interface. Procedure Create a healthcheck.yml file that contains the definition of your machine health check. Apply the healthcheck.yml file to your cluster: USD oc apply -f healthcheck.yml 5.3.4. Scaling a machine set manually To add or remove an instance of a machine in a machine set, you can manually scale the machine set. This guidance is relevant to fully automated, installer-provisioned infrastructure installations. Customized, user-provisioned infrastructure installations do not have machine sets. Prerequisites Install an OpenShift Container Platform cluster and the oc command line. Log in to oc as a user with cluster-admin permission. Procedure View the machine sets that are in the cluster: USD oc get machinesets -n openshift-machine-api The machine sets are listed in the form of <clusterid>-worker-<aws-region-az> . View the machines that are in the cluster: USD oc get machine -n openshift-machine-api Set the annotation on the machine that you want to delete: USD oc annotate machine/<machine_name> -n openshift-machine-api machine.openshift.io/cluster-api-delete-machine="true" Cordon and drain the node that you want to delete: USD oc adm cordon <node_name> USD oc adm drain <node_name> Scale the machine set: USD oc scale --replicas=2 machineset <machineset> -n openshift-machine-api Or: USD oc edit machineset <machineset> -n openshift-machine-api You can scale the machine set up or down. It takes several minutes for the new machines to be available. Verification Verify the deletion of the intended machine: USD oc get machines 5.3.5. 
Understanding the difference between machine sets and the machine config pool MachineSet objects describe OpenShift Container Platform nodes with respect to the cloud or machine provider. The MachineConfigPool object allows MachineConfigController components to define and provide the status of machines in the context of upgrades. The MachineConfigPool object allows users to configure how upgrades are rolled out to the OpenShift Container Platform nodes in the machine config pool. The NodeSelector object can be replaced with a reference to the MachineSet object. 5.4. Recommended node host practices The OpenShift Container Platform node configuration file contains important options. For example, two parameters control the maximum number of pods that can be scheduled to a node: podsPerCore and maxPods . When both options are in use, the lower of the two values limits the number of pods on a node. Exceeding these values can result in: Increased CPU utilization. Slow pod scheduling. Potential out-of-memory scenarios, depending on the amount of memory in the node. Exhausting the pool of IP addresses. Resource overcommitting, leading to poor user application performance. Important In Kubernetes, a pod that is holding a single container actually uses two containers. The second container is used to set up networking prior to the actual container starting. Therefore, a system running 10 pods will actually have 20 containers running. Note Disk IOPS throttling from the cloud provider might have an impact on CRI-O and kubelet. They might get overloaded when there are large number of I/O intensive pods running on the nodes. It is recommended that you monitor the disk I/O on the nodes and use volumes with sufficient throughput for the workload. podsPerCore sets the number of pods the node can run based on the number of processor cores on the node. For example, if podsPerCore is set to 10 on a node with 4 processor cores, the maximum number of pods allowed on the node will be 40 . kubeletConfig: podsPerCore: 10 Setting podsPerCore to 0 disables this limit. The default is 0 . podsPerCore cannot exceed maxPods . maxPods sets the number of pods the node can run to a fixed value, regardless of the properties of the node. kubeletConfig: maxPods: 250 5.4.1. Creating a KubeletConfig CRD to edit kubelet parameters The kubelet configuration is currently serialized as an Ignition configuration, so it can be directly edited. However, there is also a new kubelet-config-controller added to the Machine Config Controller (MCC). This lets you use a KubeletConfig custom resource (CR) to edit the kubelet parameters. Note As the fields in the kubeletConfig object are passed directly to the kubelet from upstream Kubernetes, the kubelet validates those values directly. Invalid values in the kubeletConfig object might cause cluster nodes to become unavailable. For valid values, see the Kubernetes documentation . Consider the following guidance: Create one KubeletConfig CR for each machine config pool with all the config changes you want for that pool. If you are applying the same content to all of the pools, you need only one KubeletConfig CR for all of the pools. Edit an existing KubeletConfig CR to modify existing settings or add new settings, instead of creating a CR for each change. It is recommended that you create a CR only to modify a different machine config pool, or for changes that are intended to be temporary, so that you can revert the changes. 
As needed, create multiple KubeletConfig CRs with a limit of 10 per cluster. For the first KubeletConfig CR, the Machine Config Operator (MCO) creates a machine config appended with kubelet . With each subsequent CR, the controller creates another kubelet machine config with a numeric suffix. For example, if you have a kubelet machine config with a -2 suffix, the kubelet machine config is appended with -3 . If you want to delete the machine configs, delete them in reverse order to avoid exceeding the limit. For example, you delete the kubelet-3 machine config before deleting the kubelet-2 machine config. Note If you have a machine config with a kubelet-9 suffix, and you create another KubeletConfig CR, a new machine config is not created, even if there are fewer than 10 kubelet machine configs. Example KubeletConfig CR USD oc get kubeletconfig NAME AGE set-max-pods 15m Example showing a KubeletConfig machine config USD oc get mc | grep kubelet ... 99-worker-generated-kubelet-1 b5c5119de007945b6fe6fb215db3b8e2ceb12511 3.2.0 26m ... The following procedure is an example to show how to configure the maximum number of pods per node on the worker nodes. Prerequisites Obtain the label associated with the static MachineConfigPool CR for the type of node you want to configure. Perform one of the following steps: View the machine config pool: USD oc describe machineconfigpool <name> For example: USD oc describe machineconfigpool worker Example output apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: creationTimestamp: 2019-02-08T14:52:39Z generation: 1 labels: custom-kubelet: set-max-pods 1 1 If a label has been added it appears under labels . If the label is not present, add a key/value pair: USD oc label machineconfigpool worker custom-kubelet=set-max-pods Procedure View the available machine configuration objects that you can select: USD oc get machineconfig By default, the two kubelet-related configs are 01-master-kubelet and 01-worker-kubelet . Check the current value for the maximum pods per node: USD oc describe node <node_name> For example: USD oc describe node ci-ln-5grqprb-f76d1-ncnqq-worker-a-mdv94 Look for value: pods: <value> in the Allocatable stanza: Example output Allocatable: attachable-volumes-aws-ebs: 25 cpu: 3500m hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 15341844Ki pods: 250 Set the maximum pods per node on the worker nodes by creating a custom resource file that contains the kubelet configuration: apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: set-max-pods spec: machineConfigPoolSelector: matchLabels: custom-kubelet: set-max-pods 1 kubeletConfig: maxPods: 500 2 1 Enter the label from the machine config pool. 2 Add the kubelet configuration. In this example, use maxPods to set the maximum pods per node. Note The rate at which the kubelet talks to the API server depends on queries per second (QPS) and burst values. The default values, 50 for kubeAPIQPS and 100 for kubeAPIBurst , are sufficient if there are limited pods running on each node. It is recommended to update the kubelet QPS and burst rates if there are enough CPU and memory resources on the node. 
apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: set-max-pods spec: machineConfigPoolSelector: matchLabels: custom-kubelet: set-max-pods kubeletConfig: maxPods: <pod_count> kubeAPIBurst: <burst_rate> kubeAPIQPS: <QPS> Update the machine config pool for workers with the label: USD oc label machineconfigpool worker custom-kubelet=large-pods Create the KubeletConfig object: USD oc create -f change-maxPods-cr.yaml Verify that the KubeletConfig object is created: USD oc get kubeletconfig Example output NAME AGE set-max-pods 15m Depending on the number of worker nodes in the cluster, wait for the worker nodes to be rebooted one by one. For a cluster with 3 worker nodes, this could take about 10 to 15 minutes. Verify that the changes are applied to the node: Check on a worker node that the maxPods value changed: USD oc describe node <node_name> Locate the Allocatable stanza: ... Allocatable: attachable-volumes-gce-pd: 127 cpu: 3500m ephemeral-storage: 123201474766 hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 14225400Ki pods: 500 1 ... 1 In this example, the pods parameter should report the value you set in the KubeletConfig object. Verify the change in the KubeletConfig object: USD oc get kubeletconfigs set-max-pods -o yaml This should show a status: "True" and type:Success : spec: kubeletConfig: maxPods: 500 machineConfigPoolSelector: matchLabels: custom-kubelet: set-max-pods status: conditions: - lastTransitionTime: "2021-06-30T17:04:07Z" message: Success status: "True" type: Success 5.4.2. Modifying the number of unavailable worker nodes By default, only one machine is allowed to be unavailable when applying the kubelet-related configuration to the available worker nodes. For a large cluster, it can take a long time for the configuration change to be reflected. At any time, you can adjust the number of machines that are updating to speed up the process. Procedure Edit the worker machine config pool: USD oc edit machineconfigpool worker Set maxUnavailable to the value that you want: spec: maxUnavailable: <node_count> Important When setting the value, consider the number of worker nodes that can be unavailable without affecting the applications running on the cluster. 5.4.3. Control plane node sizing The control plane node resource requirements depend on the number of nodes in the cluster. The following control plane node size recommendations are based on the results of control plane density focused testing. The control plane tests create the following objects across the cluster in each of the namespaces depending on the node counts: 12 image streams 3 build configurations 6 builds 1 deployment with 2 pod replicas mounting two secrets each 2 deployments with 1 pod replica mounting two secrets 3 services pointing to the deployments 3 routes pointing to the deployments 10 secrets, 2 of which are mounted by the deployments 10 config maps, 2 of which are mounted by the deployments Number of worker nodes Cluster load (namespaces) CPU cores Memory (GB) 25 500 4 16 100 1000 8 32 250 4000 16 96 On a large and dense cluster with three masters or control plane nodes, the CPU and memory usage will spike up when one of the nodes is stopped, rebooted or fails. The failures can be due to unexpected issues with power, network or underlying infrastructure in addition to intentional cases where the cluster is restarted after shutting it down to save costs. 
The remaining two control plane nodes must handle the load in order to be highly available, which leads to an increase in resource usage. This is also expected during upgrades because the masters are cordoned, drained, and rebooted serially to apply the operating system updates, as well as the control plane Operators update. To avoid cascading failures, keep the overall CPU and memory resource usage on the control plane nodes to at most 60% of all available capacity to handle the resource usage spikes. Increase the CPU and memory on the control plane nodes accordingly to avoid potential downtime due to lack of resources. Important The node sizing varies depending on the number of nodes and object counts in the cluster. It also depends on whether the objects are actively being created on the cluster. During object creation, the control plane is more active in terms of resource usage compared to when the objects are in the running phase. Important If you used an installer-provisioned infrastructure installation method, you cannot modify the control plane node size in a running OpenShift Container Platform 4.7 cluster. Instead, you must estimate your total node count and use the suggested control plane node size during installation. Important The recommendations are based on the data points captured on OpenShift Container Platform clusters with OpenShiftSDN as the network plug-in. Note In OpenShift Container Platform 4.7, half of a CPU core (500 millicore) is now reserved by the system by default compared to OpenShift Container Platform 3.11 and earlier versions. The sizes are determined taking that into consideration. 5.4.4. Setting up CPU Manager Procedure Optional: Label a node: # oc label node perf-node.example.com cpumanager=true Edit the MachineConfigPool of the nodes where CPU Manager should be enabled. In this example, all workers have CPU Manager enabled: # oc edit machineconfigpool worker Add a label to the worker machine config pool: metadata: creationTimestamp: 2020-xx-xxx generation: 3 labels: custom-kubelet: cpumanager-enabled Create a KubeletConfig , cpumanager-kubeletconfig.yaml , custom resource (CR). Refer to the label created in the previous step to have the correct nodes updated with the new kubelet config. See the machineConfigPoolSelector section: apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: cpumanager-enabled spec: machineConfigPoolSelector: matchLabels: custom-kubelet: cpumanager-enabled kubeletConfig: cpuManagerPolicy: static 1 cpuManagerReconcilePeriod: 5s 2 1 Specify a policy: none . This policy explicitly enables the existing default CPU affinity scheme, providing no affinity beyond what the scheduler does automatically. static . This policy allows pods with certain resource characteristics to be granted increased CPU affinity and exclusivity on the node. 2 Optional. Specify the CPU Manager reconcile frequency. The default is 5s . Create the dynamic kubelet config: # oc create -f cpumanager-kubeletconfig.yaml This adds the CPU Manager feature to the kubelet config and, if needed, the Machine Config Operator (MCO) reboots the node. To enable CPU Manager, a reboot is not needed.
Check for the merged kubelet config: # oc get machineconfig 99-worker-XXXXXX-XXXXX-XXXX-XXXXX-kubelet -o json | grep ownerReference -A7 Example output "ownerReferences": [ { "apiVersion": "machineconfiguration.openshift.io/v1", "kind": "KubeletConfig", "name": "cpumanager-enabled", "uid": "7ed5616d-6b72-11e9-aae1-021e1ce18878" } ] Check the worker for the updated kubelet.conf : # oc debug node/perf-node.example.com sh-4.2# cat /host/etc/kubernetes/kubelet.conf | grep cpuManager Example output cpuManagerPolicy: static 1 cpuManagerReconcilePeriod: 5s 2 1 2 These settings were defined when you created the KubeletConfig CR. Create a pod that requests a core or multiple cores. Both limits and requests must have their CPU value set to a whole integer. That is the number of cores that will be dedicated to this pod: # cat cpumanager-pod.yaml Example output apiVersion: v1 kind: Pod metadata: generateName: cpumanager- spec: containers: - name: cpumanager image: gcr.io/google_containers/pause-amd64:3.0 resources: requests: cpu: 1 memory: "1G" limits: cpu: 1 memory: "1G" nodeSelector: cpumanager: "true" Create the pod: # oc create -f cpumanager-pod.yaml Verify that the pod is scheduled to the node that you labeled: # oc describe pod cpumanager Example output Name: cpumanager-6cqz7 Namespace: default Priority: 0 PriorityClassName: <none> Node: perf-node.example.com/xxx.xx.xx.xxx ... Limits: cpu: 1 memory: 1G Requests: cpu: 1 memory: 1G ... QoS Class: Guaranteed Node-Selectors: cpumanager=true Verify that the cgroups are set up correctly. Get the process ID (PID) of the pause process: # ├─init.scope │ └─1 /usr/lib/systemd/systemd --switched-root --system --deserialize 17 └─kubepods.slice ├─kubepods-pod69c01f8e_6b74_11e9_ac0f_0a2b62178a22.slice │ ├─crio-b5437308f1a574c542bdf08563b865c0345c8f8c0b0a655612c.scope │ └─32706 /pause Pods of quality of service (QoS) tier Guaranteed are placed within the kubepods.slice . Pods of other QoS tiers end up in child cgroups of kubepods : # cd /sys/fs/cgroup/cpuset/kubepods.slice/kubepods-pod69c01f8e_6b74_11e9_ac0f_0a2b62178a22.slice/crio-b5437308f1ad1a7db0574c542bdf08563b865c0345c86e9585f8c0b0a655612c.scope # for i in `ls cpuset.cpus tasks` ; do echo -n "USDi "; cat USDi ; done Example output cpuset.cpus 1 tasks 32706 Check the allowed CPU list for the task: # grep ^Cpus_allowed_list /proc/32706/status Example output Cpus_allowed_list: 1 Verify that another pod (in this case, the pod in the burstable QoS tier) on the system cannot run on the core allocated for the Guaranteed pod: # cat /sys/fs/cgroup/cpuset/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc494a073_6b77_11e9_98c0_06bba5c387ea.slice/crio-c56982f57b75a2420947f0afc6cafe7534c5734efc34157525fa9abbf99e3849.scope/cpuset.cpus 0 # oc describe node perf-node.example.com Example output ... Capacity: attachable-volumes-aws-ebs: 39 cpu: 2 ephemeral-storage: 124768236Ki hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 8162900Ki pods: 250 Allocatable: attachable-volumes-aws-ebs: 39 cpu: 1500m ephemeral-storage: 124768236Ki hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 7548500Ki pods: 250 ------- ---- ------------ ---------- --------------- ------------- --- default cpumanager-6cqz7 1 (66%) 1 (66%) 1G (12%) 1G (12%) 29m Allocated resources: (Total limits may be over 100 percent, i.e., overcommitted.) Resource Requests Limits -------- -------- ------ cpu 1440m (96%) 1 (66%) This VM has two CPU cores. 
The system-reserved setting reserves 500 millicores, meaning that half of one core is subtracted from the total capacity of the node to arrive at the Node Allocatable amount. You can see that Allocatable CPU is 1500 millicores. This means you can run one of the CPU Manager pods since each will take one whole core. A whole core is equivalent to 1000 millicores. If you try to schedule a second pod, the system will accept the pod, but it will never be scheduled: NAME READY STATUS RESTARTS AGE cpumanager-6cqz7 1/1 Running 0 33m cpumanager-7qc2t 0/1 Pending 0 11s 5.5. Huge pages Understand and configure huge pages. 5.5.1. What huge pages do Memory is managed in blocks known as pages. On most systems, a page is 4Ki. 1Mi of memory is equal to 256 pages; 1Gi of memory is 256,000 pages, and so on. CPUs have a built-in memory management unit that manages a list of these pages in hardware. The Translation Lookaside Buffer (TLB) is a small hardware cache of virtual-to-physical page mappings. If the virtual address passed in a hardware instruction can be found in the TLB, the mapping can be determined quickly. If not, a TLB miss occurs, and the system falls back to slower, software-based address translation, resulting in performance issues. Since the size of the TLB is fixed, the only way to reduce the chance of a TLB miss is to increase the page size. A huge page is a memory page that is larger than 4Ki. On x86_64 architectures, there are two common huge page sizes: 2Mi and 1Gi. Sizes vary on other architectures. To use huge pages, code must be written so that applications are aware of them. Transparent Huge Pages (THP) attempt to automate the management of huge pages without application knowledge, but they have limitations. In particular, they are limited to 2Mi page sizes. THP can lead to performance degradation on nodes with high memory utilization or fragmentation due to defragmenting efforts of THP, which can lock memory pages. For this reason, some applications may be designed to (or recommend) usage of pre-allocated huge pages instead of THP. 5.5.2. How huge pages are consumed by apps Nodes must pre-allocate huge pages in order for the node to report its huge page capacity. A node can only pre-allocate huge pages for a single size. Huge pages can be consumed through container-level resource requirements using the resource name hugepages-<size> , where size is the most compact binary notation using integer values supported on a particular node. For example, if a node supports 2048KiB page sizes, it exposes a schedulable resource hugepages-2Mi . Unlike CPU or memory, huge pages do not support over-commitment. apiVersion: v1 kind: Pod metadata: generateName: hugepages-volume- spec: containers: - securityContext: privileged: true image: rhel7:latest command: - sleep - inf name: example volumeMounts: - mountPath: /dev/hugepages name: hugepage resources: limits: hugepages-2Mi: 100Mi 1 memory: "1Gi" cpu: "1" volumes: - name: hugepage emptyDir: medium: HugePages 1 Specify the amount of memory for hugepages as the exact amount to be allocated. Do not specify this value as the amount of memory for hugepages multiplied by the size of the page. For example, given a huge page size of 2MB, if you want to use 100MB of huge-page-backed RAM for your application, then you would allocate 50 huge pages. OpenShift Container Platform handles the math for you. As in the above example, you can specify 100MB directly. Allocating huge pages of a specific size Some platforms support multiple huge page sizes. 
To allocate huge pages of a specific size, precede the huge pages boot command parameters with a huge page size selection parameter hugepagesz=<size> . The <size> value must be specified in bytes with an optional scale suffix [ kKmMgG ]. The default huge page size can be defined with the default_hugepagesz=<size> boot parameter. Huge page requirements Huge page requests must equal the limits. This is the default if limits are specified, but requests are not. Huge pages are isolated at a pod scope. Container isolation is planned in a future iteration. EmptyDir volumes backed by huge pages must not consume more huge page memory than the pod request. Applications that consume huge pages via shmget() with SHM_HUGETLB must run with a supplemental group that matches proc/sys/vm/hugetlb_shm_group . Additional resources Configuring Transparent Huge Pages 5.5.3. Configuring huge pages Nodes must pre-allocate huge pages used in an OpenShift Container Platform cluster. There are two ways of reserving huge pages: at boot time and at run time. Reserving at boot time increases the possibility of success because the memory has not yet been significantly fragmented. The Node Tuning Operator currently supports boot time allocation of huge pages on specific nodes. 5.5.3.1. At boot time Procedure To minimize node reboots, the order of the steps below needs to be followed: Label all nodes that need the same huge pages setting by a label. USD oc label node <node_using_hugepages> node-role.kubernetes.io/worker-hp= Create a file with the following content and name it hugepages-tuned-boottime.yaml : apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: hugepages 1 namespace: openshift-cluster-node-tuning-operator spec: profile: 2 - data: | [main] summary=Boot time configuration for hugepages include=openshift-node [bootloader] cmdline_openshift_node_hugepages=hugepagesz=2M hugepages=50 3 name: openshift-node-hugepages recommend: - machineConfigLabels: 4 machineconfiguration.openshift.io/role: "worker-hp" priority: 30 profile: openshift-node-hugepages 1 Set the name of the Tuned resource to hugepages . 2 Set the profile section to allocate huge pages. 3 Note the order of parameters is important as some platforms support huge pages of various sizes. 4 Enable machine config pool based matching. Create the Tuned hugepages profile USD oc create -f hugepages-tuned-boottime.yaml Create a file with the following content and name it hugepages-mcp.yaml : apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: worker-hp labels: worker-hp: "" spec: machineConfigSelector: matchExpressions: - {key: machineconfiguration.openshift.io/role, operator: In, values: [worker,worker-hp]} nodeSelector: matchLabels: node-role.kubernetes.io/worker-hp: "" Create the machine config pool: USD oc create -f hugepages-mcp.yaml Given enough non-fragmented memory, all the nodes in the worker-hp machine config pool should now have 50 2Mi huge pages allocated. USD oc get node <node_using_hugepages> -o jsonpath="{.status.allocatable.hugepages-2Mi}" 100Mi Warning This functionality is currently only supported on Red Hat Enterprise Linux CoreOS (RHCOS) 8.x worker nodes. On Red Hat Enterprise Linux (RHEL) 7.x worker nodes the Tuned [bootloader] plug-in is currently not supported. 5.6. Understanding device plug-ins The device plug-in provides a consistent and portable solution to consume hardware devices across clusters. 
The device plug-in provides support for these devices through an extension mechanism, which makes these devices available to Containers, provides health checks of these devices, and securely shares them. Important OpenShift Container Platform supports the device plug-in API, but the device plug-in Containers are supported by individual vendors. A device plug-in is a gRPC service running on the nodes (external to the kubelet ) that is responsible for managing specific hardware resources. Any device plug-in must support following remote procedure calls (RPCs): service DevicePlugin { // GetDevicePluginOptions returns options to be communicated with Device // Manager rpc GetDevicePluginOptions(Empty) returns (DevicePluginOptions) {} // ListAndWatch returns a stream of List of Devices // Whenever a Device state change or a Device disappears, ListAndWatch // returns the new list rpc ListAndWatch(Empty) returns (stream ListAndWatchResponse) {} // Allocate is called during container creation so that the Device // Plug-in can run device specific operations and instruct Kubelet // of the steps to make the Device available in the container rpc Allocate(AllocateRequest) returns (AllocateResponse) {} // PreStartcontainer is called, if indicated by Device Plug-in during // registration phase, before each container start. Device plug-in // can run device specific operations such as reseting the device // before making devices available to the container rpc PreStartcontainer(PreStartcontainerRequest) returns (PreStartcontainerResponse) {} } Example device plug-ins Nvidia GPU device plug-in for COS-based operating system Nvidia official GPU device plug-in Solarflare device plug-in KubeVirt device plug-ins: vfio and kvm Note For easy device plug-in reference implementation, there is a stub device plug-in in the Device Manager code: vendor/k8s.io/kubernetes/pkg/kubelet/cm/deviceplugin/device_plugin_stub.go . 5.6.1. Methods for deploying a device plug-in Daemon sets are the recommended approach for device plug-in deployments. Upon start, the device plug-in will try to create a UNIX domain socket at /var/lib/kubelet/device-plugin/ on the node to serve RPCs from Device Manager. Since device plug-ins must manage hardware resources, access to the host file system, as well as socket creation, they must be run in a privileged security context. More specific details regarding deployment steps can be found with each device plug-in implementation. 5.6.2. Understanding the Device Manager Device Manager provides a mechanism for advertising specialized node hardware resources with the help of plug-ins known as device plug-ins. You can advertise specialized hardware without requiring any upstream code changes. Important OpenShift Container Platform supports the device plug-in API, but the device plug-in Containers are supported by individual vendors. Device Manager advertises devices as Extended Resources . User pods can consume devices, advertised by Device Manager, using the same Limit/Request mechanism, which is used for requesting any other Extended Resource . Upon start, the device plug-in registers itself with Device Manager invoking Register on the /var/lib/kubelet/device-plugins/kubelet.sock and starts a gRPC service at /var/lib/kubelet/device-plugins/<plugin>.sock for serving Device Manager requests. Device Manager, while processing a new registration request, invokes ListAndWatch remote procedure call (RPC) at the device plug-in service. 
In response, Device Manager gets a list of Device objects from the plug-in over a gRPC stream. Device Manager will keep watching on the stream for new updates from the plug-in. On the plug-in side, the plug-in will also keep the stream open and whenever there is a change in the state of any of the devices, a new device list is sent to the Device Manager over the same streaming connection. While handling a new pod admission request, Kubelet passes requested Extended Resources to the Device Manager for device allocation. Device Manager checks in its database to verify if a corresponding plug-in exists or not. If the plug-in exists and there are free allocatable devices as well as per local cache, Allocate RPC is invoked at that particular device plug-in. Additionally, device plug-ins can also perform several other device-specific operations, such as driver installation, device initialization, and device resets. These functionalities vary from implementation to implementation. 5.6.3. Enabling Device Manager Enable Device Manager to implement a device plug-in to advertise specialized hardware without any upstream code changes. Device Manager provides a mechanism for advertising specialized node hardware resources with the help of plug-ins known as device plug-ins. Obtain the label associated with the static MachineConfigPool CRD for the type of node you want to configure. Perform one of the following steps: View the machine config: # oc describe machineconfig <name> For example: # oc describe machineconfig 00-worker Example output Name: 00-worker Namespace: Labels: machineconfiguration.openshift.io/role=worker 1 1 Label required for the Device Manager. Procedure Create a custom resource (CR) for your configuration change. Sample configuration for a Device Manager CR apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: devicemgr 1 spec: machineConfigPoolSelector: matchLabels: machineconfiguration.openshift.io: devicemgr 2 kubeletConfig: feature-gates: - DevicePlugins=true 3 1 Assign a name to CR. 2 Enter the label from the Machine Config Pool. 3 Set DevicePlugins to 'true`. Create the Device Manager: USD oc create -f devicemgr.yaml Example output kubeletconfig.machineconfiguration.openshift.io/devicemgr created Ensure that Device Manager was actually enabled by confirming that /var/lib/kubelet/device-plugins/kubelet.sock is created on the node. This is the UNIX domain socket on which the Device Manager gRPC server listens for new plug-in registrations. This sock file is created when the Kubelet is started only if Device Manager is enabled. 5.7. Taints and tolerations Understand and work with taints and tolerations. 5.7.1. Understanding taints and tolerations A taint allows a node to refuse a pod to be scheduled unless that pod has a matching toleration . You apply taints to a node through the Node specification ( NodeSpec ) and apply tolerations to a pod through the Pod specification ( PodSpec ). When you apply a taint a node, the scheduler cannot place a pod on that node unless the pod can tolerate the taint. Example taint in a node specification spec: .... template: .... spec: taints: - effect: NoExecute key: key1 value: value1 .... Example toleration in a Pod spec spec: .... template: .... spec: tolerations: - key: "key1" operator: "Equal" value: "value1" effect: "NoExecute" tolerationSeconds: 3600 .... Taints and tolerations consist of a key, value, and effect. Table 5.1. 
Taint and toleration components Parameter Description key The key is any string, up to 253 characters. The key must begin with a letter or number, and may contain letters, numbers, hyphens, dots, and underscores. value The value is any string, up to 63 characters. The value must begin with a letter or number, and may contain letters, numbers, hyphens, dots, and underscores. effect The effect is one of the following: NoSchedule [1] New pods that do not match the taint are not scheduled onto that node. Existing pods on the node remain. PreferNoSchedule New pods that do not match the taint might be scheduled onto that node, but the scheduler tries not to. Existing pods on the node remain. NoExecute New pods that do not match the taint cannot be scheduled onto that node. Existing pods on the node that do not have a matching toleration are removed. operator Equal The key / value / effect parameters must match. This is the default. Exists The key / effect parameters must match. You must leave a blank value parameter, which matches any. If you add a NoSchedule taint to a control plane node (also known as the master node) the node must have the node-role.kubernetes.io/master=:NoSchedule taint, which is added by default. For example: apiVersion: v1 kind: Node metadata: annotations: machine.openshift.io/machine: openshift-machine-api/ci-ln-62s7gtb-f76d1-v8jxv-master-0 machineconfiguration.openshift.io/currentConfig: rendered-master-cdc1ab7da414629332cc4c3926e6e59c ... spec: taints: - effect: NoSchedule key: node-role.kubernetes.io/master ... A toleration matches a taint: If the operator parameter is set to Equal : the key parameters are the same; the value parameters are the same; the effect parameters are the same. If the operator parameter is set to Exists : the key parameters are the same; the effect parameters are the same. The following taints are built into OpenShift Container Platform: node.kubernetes.io/not-ready : The node is not ready. This corresponds to the node condition Ready=False . node.kubernetes.io/unreachable : The node is unreachable from the node controller. This corresponds to the node condition Ready=Unknown . node.kubernetes.io/memory-pressure : The node has memory pressure issues. This corresponds to the node condition MemoryPressure=True . node.kubernetes.io/disk-pressure : The node has disk pressure issues. This corresponds to the node condition DiskPressure=True . node.kubernetes.io/network-unavailable : The node network is unavailable. node.kubernetes.io/unschedulable : The node is unschedulable. node.cloudprovider.kubernetes.io/uninitialized : When the node controller is started with an external cloud provider, this taint is set on a node to mark it as unusable. After a controller from the cloud-controller-manager initializes this node, the kubelet removes this taint. node.kubernetes.io/pid-pressure : The node has pid pressure. This corresponds to the node condition PIDPressure=True . Important OpenShift Container Platform does not set a default pid.available evictionHard . 5.7.1.1. Understanding how to use toleration seconds to delay pod evictions You can specify how long a pod can remain bound to a node before being evicted by specifying the tolerationSeconds parameter in the Pod specification or MachineSet object. If a taint with the NoExecute effect is added to a node, a pod that does tolerate the taint, which has the tolerationSeconds parameter, the pod is not evicted until that time period expires. Example output spec: .... template: .... 
spec: tolerations: - key: "key1" operator: "Equal" value: "value1" effect: "NoExecute" tolerationSeconds: 3600 Here, if this pod is running but does not have a matching toleration, the pod stays bound to the node for 3,600 seconds and then be evicted. If the taint is removed before that time, the pod is not evicted. 5.7.1.2. Understanding how to use multiple taints You can put multiple taints on the same node and multiple tolerations on the same pod. OpenShift Container Platform processes multiple taints and tolerations as follows: Process the taints for which the pod has a matching toleration. The remaining unmatched taints have the indicated effects on the pod: If there is at least one unmatched taint with effect NoSchedule , OpenShift Container Platform cannot schedule a pod onto that node. If there is no unmatched taint with effect NoSchedule but there is at least one unmatched taint with effect PreferNoSchedule , OpenShift Container Platform tries to not schedule the pod onto the node. If there is at least one unmatched taint with effect NoExecute , OpenShift Container Platform evicts the pod from the node if it is already running on the node, or the pod is not scheduled onto the node if it is not yet running on the node. Pods that do not tolerate the taint are evicted immediately. Pods that tolerate the taint without specifying tolerationSeconds in their Pod specification remain bound forever. Pods that tolerate the taint with a specified tolerationSeconds remain bound for the specified amount of time. For example: Add the following taints to the node: USD oc adm taint nodes node1 key1=value1:NoSchedule USD oc adm taint nodes node1 key1=value1:NoExecute USD oc adm taint nodes node1 key2=value2:NoSchedule The pod has the following tolerations: spec: .... template: .... spec: tolerations: - key: "key1" operator: "Equal" value: "value1" effect: "NoSchedule" - key: "key1" operator: "Equal" value: "value1" effect: "NoExecute" In this case, the pod cannot be scheduled onto the node, because there is no toleration matching the third taint. The pod continues running if it is already running on the node when the taint is added, because the third taint is the only one of the three that is not tolerated by the pod. 5.7.1.3. Understanding pod scheduling and node conditions (taint node by condition) The Taint Nodes By Condition feature, which is enabled by default, automatically taints nodes that report conditions such as memory pressure and disk pressure. If a node reports a condition, a taint is added until the condition clears. The taints have the NoSchedule effect, which means no pod can be scheduled on the node unless the pod has a matching toleration. The scheduler checks for these taints on nodes before scheduling pods. If the taint is present, the pod is scheduled on a different node. Because the scheduler checks for taints and not the actual node conditions, you configure the scheduler to ignore some of these node conditions by adding appropriate pod tolerations. To ensure backward compatibility, the daemon set controller automatically adds the following tolerations to all daemons: node.kubernetes.io/memory-pressure node.kubernetes.io/disk-pressure node.kubernetes.io/unschedulable (1.10 or later) node.kubernetes.io/network-unavailable (host network only) You can also add arbitrary tolerations to daemon sets. Note The control plane also adds the node.kubernetes.io/memory-pressure toleration on pods that have a QoS class. 
This is because Kubernetes manages pods in the Guaranteed or Burstable QoS classes. The new BestEffort pods do not get scheduled onto the affected node. 5.7.1.4. Understanding evicting pods by condition (taint-based evictions) The Taint-Based Evictions feature, which is enabled by default, evicts pods from a node that experiences specific conditions, such as not-ready and unreachable . When a node experiences one of these conditions, OpenShift Container Platform automatically adds taints to the node, and starts evicting and rescheduling the pods on different nodes. Taint Based Evictions have a NoExecute effect, where any pod that does not tolerate the taint is evicted immediately and any pod that does tolerate the taint will never be evicted, unless the pod uses the tolerationSeconds parameter. The tolerationSeconds parameter allows you to specify how long a pod stays bound to a node that has a node condition. If the condition still exists after the tolerationSeconds period, the taint remains on the node and the pods with a matching toleration are evicted. If the condition clears before the tolerationSeconds period, pods with matching tolerations are not removed. If you use the tolerationSeconds parameter with no value, pods are never evicted because of the not ready and unreachable node conditions. Note OpenShift Container Platform evicts pods in a rate-limited way to prevent massive pod evictions in scenarios such as the master becoming partitioned from the nodes. By default, if more than 55% of nodes in a given zone are unhealthy, the node lifecycle controller changes that zone's state to PartialDisruption and the rate of pod evictions is reduced. For small clusters (by default, 50 nodes or less) in this state, nodes in this zone are not tainted and evictions are stopped. For more information, see Rate limits on eviction in the Kubernetes documentation. OpenShift Container Platform automatically adds a toleration for node.kubernetes.io/not-ready and node.kubernetes.io/unreachable with tolerationSeconds=300 , unless the Pod configuration specifies either toleration. spec: .... template: .... spec: tolerations: - key: node.kubernetes.io/not-ready operator: Exists effect: NoExecute tolerationSeconds: 300 1 - key: node.kubernetes.io/unreachable operator: Exists effect: NoExecute tolerationSeconds: 300 1 These tolerations ensure that the default pod behavior is to remain bound for five minutes after one of these node conditions problems is detected. You can configure these tolerations as needed. For example, if you have an application with a lot of local state, you might want to keep the pods bound to node for a longer time in the event of network partition, allowing for the partition to recover and avoiding pod eviction. Pods spawned by a daemon set are created with NoExecute tolerations for the following taints with no tolerationSeconds : node.kubernetes.io/unreachable node.kubernetes.io/not-ready As a result, daemon set pods are never evicted because of these node conditions. 5.7.1.5. Tolerating all taints You can configure a pod to tolerate all taints by adding an operator: "Exists" toleration with no key and value parameters. Pods with this toleration are not removed from a node that has taints. Pod spec for tolerating all taints spec: .... template: .... spec: tolerations: - operator: "Exists" 5.7.2. Adding taints and tolerations You add tolerations to pods and taints to nodes to allow the node to control which pods should or should not be scheduled on them. 
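For reference, a matched pair might look like the following sketch; the dedicated key, the example-team value, and the image name are placeholders chosen for illustration, not values required by OpenShift Container Platform.

Example taint in a Node spec (illustrative)

apiVersion: v1
kind: Node
metadata:
  name: node1
spec:
  taints:
  - key: dedicated
    value: example-team
    effect: NoSchedule

Example matching toleration in a Pod spec (illustrative)

apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  containers:
  - name: app
    image: registry.example.com/app:latest   # placeholder image
  tolerations:
  - key: "dedicated"
    operator: "Equal"
    value: "example-team"
    effect: "NoSchedule"

Because the key, value, and effect all match with the Equal operator, this pod can be scheduled onto node1 even though the taint keeps pods without the toleration away.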
For existing pods and nodes, you should add the toleration to the pod first, then add the taint to the node to avoid pods being removed from the node before you can add the toleration. Procedure Add a toleration to a pod by editing the Pod spec to include a tolerations stanza: Sample pod configuration file with an Equal operator spec: .... template: .... spec: tolerations: - key: "key1" 1 value: "value1" operator: "Equal" effect: "NoExecute" tolerationSeconds: 3600 2 1 The toleration parameters, as described in the Taint and toleration components table. 2 The tolerationSeconds parameter specifies how long a pod can remain bound to a node before being evicted. For example: Sample pod configuration file with an Exists operator spec: .... template: .... spec: tolerations: - key: "key1" operator: "Exists" 1 effect: "NoExecute" tolerationSeconds: 3600 1 The Exists operator does not take a value . This example places a taint on node1 that has key key1 , value value1 , and taint effect NoExecute . Add a taint to a node by using the following command with the parameters described in the Taint and toleration components table: USD oc adm taint nodes <node_name> <key>=<value>:<effect> For example: USD oc adm taint nodes node1 key1=value1:NoExecute This command places a taint on node1 that has key key1 , value value1 , and effect NoExecute . Note If you add a NoSchedule taint to a control plane node (also known as the master node) the node must have the node-role.kubernetes.io/master=:NoSchedule taint, which is added by default. For example: apiVersion: v1 kind: Node metadata: annotations: machine.openshift.io/machine: openshift-machine-api/ci-ln-62s7gtb-f76d1-v8jxv-master-0 machineconfiguration.openshift.io/currentConfig: rendered-master-cdc1ab7da414629332cc4c3926e6e59c ... spec: taints: - effect: NoSchedule key: node-role.kubernetes.io/master ... The tolerations on the pod match the taint on the node. A pod with either toleration can be scheduled onto node1 . 5.7.3. Adding taints and tolerations using a machine set You can add taints to nodes using a machine set. All nodes associated with the MachineSet object are updated with the taint. Tolerations respond to taints added by a machine set in the same manner as taints added directly to the nodes. Procedure Add a toleration to a pod by editing the Pod spec to include a tolerations stanza: Sample pod configuration file with Equal operator spec: .... template: .... spec: tolerations: - key: "key1" 1 value: "value1" operator: "Equal" effect: "NoExecute" tolerationSeconds: 3600 2 1 The toleration parameters, as described in the Taint and toleration components table. 2 The tolerationSeconds parameter specifies how long a pod is bound to a node before being evicted. For example: Sample pod configuration file with Exists operator spec: tolerations: - key: "key1" operator: "Exists" effect: "NoExecute" tolerationSeconds: 3600 Add the taint to the MachineSet object: Edit the MachineSet YAML for the nodes you want to taint or you can create a new MachineSet object: USD oc edit machineset <machineset> Add the taint to the spec.template.spec section: Example taint in a machine set specification spec: .... template: .... spec: taints: - effect: NoExecute key: key1 value: value1 .... This example places a taint that has the key key1 , value value1 , and taint effect NoExecute on the nodes. Scale down the machine set to 0: USD oc scale --replicas=0 machineset <machineset> -n openshift-machine-api Wait for the machines to be removed. 
Scale up the machine set as needed: USD oc scale --replicas=2 machineset <machineset> -n openshift-machine-api Wait for the machines to start. The taint is added to the nodes associated with the MachineSet object. 5.7.4. Binding a user to a node using taints and tolerations If you want to dedicate a set of nodes for exclusive use by a particular set of users, add a toleration to their pods. Then, add a corresponding taint to those nodes. The pods with the tolerations are allowed to use the tainted nodes, or any other nodes in the cluster. If you want ensure the pods are scheduled to only those tainted nodes, also add a label to the same set of nodes and add a node affinity to the pods so that the pods can only be scheduled onto nodes with that label. Procedure To configure a node so that users can use only that node: Add a corresponding taint to those nodes: For example: USD oc adm taint nodes node1 dedicated=groupName:NoSchedule Add a toleration to the pods by writing a custom admission controller. 5.7.5. Controlling nodes with special hardware using taints and tolerations In a cluster where a small subset of nodes have specialized hardware, you can use taints and tolerations to keep pods that do not need the specialized hardware off of those nodes, leaving the nodes for pods that do need the specialized hardware. You can also require pods that need specialized hardware to use specific nodes. You can achieve this by adding a toleration to pods that need the special hardware and tainting the nodes that have the specialized hardware. Procedure To ensure nodes with specialized hardware are reserved for specific pods: Add a toleration to pods that need the special hardware. For example: spec: tolerations: - key: "disktype" value: "ssd" operator: "Equal" effect: "NoSchedule" tolerationSeconds: 3600 Taint the nodes that have the specialized hardware using one of the following commands: USD oc adm taint nodes <node-name> disktype=ssd:NoSchedule Or: USD oc adm taint nodes <node-name> disktype=ssd:PreferNoSchedule 5.7.6. Removing taints and tolerations You can remove taints from nodes and tolerations from pods as needed. You should add the toleration to the pod first, then add the taint to the node to avoid pods being removed from the node before you can add the toleration. Procedure To remove taints and tolerations: To remove a taint from a node: USD oc adm taint nodes <node-name> <key>- For example: USD oc adm taint nodes ip-10-0-132-248.ec2.internal key1- Example output node/ip-10-0-132-248.ec2.internal untainted To remove a toleration from a pod, edit the Pod spec to remove the toleration: spec: tolerations: - key: "key2" operator: "Exists" effect: "NoExecute" tolerationSeconds: 3600 5.8. Topology Manager Understand and work with Topology Manager. 5.8.1. Topology Manager policies Topology Manager aligns Pod resources of all Quality of Service (QoS) classes by collecting topology hints from Hint Providers, such as CPU Manager and Device Manager, and using the collected hints to align the Pod resources. Note To align CPU resources with other requested resources in a Pod spec, the CPU Manager must be enabled with the static CPU Manager policy. Topology Manager supports four allocation policies, which you assign in the cpumanager-enabled custom resource (CR): none policy This is the default policy and does not perform any topology alignment. 
best-effort policy For each container in a pod with the best-effort topology management policy, kubelet calls each Hint Provider to discover their resource availability. Using this information, the Topology Manager stores the preferred NUMA Node affinity for that container. If the affinity is not preferred, Topology Manager stores this and admits the pod to the node. restricted policy For each container in a pod with the restricted topology management policy, kubelet calls each Hint Provider to discover their resource availability. Using this information, the Topology Manager stores the preferred NUMA Node affinity for that container. If the affinity is not preferred, Topology Manager rejects this pod from the node, resulting in a pod in a Terminated state with a pod admission failure. single-numa-node policy For each container in a pod with the single-numa-node topology management policy, kubelet calls each Hint Provider to discover their resource availability. Using this information, the Topology Manager determines if a single NUMA Node affinity is possible. If it is, the pod is admitted to the node. If a single NUMA Node affinity is not possible, the Topology Manager rejects the pod from the node. This results in a pod in a Terminated state with a pod admission failure. 5.8.2. Setting up Topology Manager To use Topology Manager, you must configure an allocation policy in the cpumanager-enabled custom resource (CR). This file might exist if you have set up CPU Manager. If the file does not exist, you can create the file. Prerequisites Configure the CPU Manager policy to be static . Refer to Using CPU Manager in the Scalability and Performance section. Procedure To activate Topology Manager: Configure the Topology Manager allocation policy in the cpumanager-enabled custom resource (CR). USD oc edit KubeletConfig cpumanager-enabled apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: cpumanager-enabled spec: machineConfigPoolSelector: matchLabels: custom-kubelet: cpumanager-enabled kubeletConfig: cpuManagerPolicy: static 1 cpuManagerReconcilePeriod: 5s topologyManagerPolicy: single-numa-node 2 1 This parameter must be static . 2 Specify your selected Topology Manager allocation policy. Here, the policy is single-numa-node . Acceptable values are: default , best-effort , restricted , single-numa-node . 5.8.3. Pod interactions with Topology Manager policies The example Pod specs below help illustrate pod interactions with Topology Manager. The following pod runs in the BestEffort QoS class because no resource requests or limits are specified. spec: containers: - name: nginx image: nginx The next pod runs in the Burstable QoS class because requests are less than limits. spec: containers: - name: nginx image: nginx resources: limits: memory: "200Mi" requests: memory: "100Mi" If the selected policy is anything other than none , Topology Manager would not consider either of these Pod specifications. The last example pod below runs in the Guaranteed QoS class because requests are equal to limits. spec: containers: - name: nginx image: nginx resources: limits: memory: "200Mi" cpu: "2" example.com/device: "1" requests: memory: "200Mi" cpu: "2" example.com/device: "1" Topology Manager would consider this pod. The Topology Manager consults the CPU Manager static policy, which returns the topology of available CPUs. Topology Manager also consults Device Manager to discover the topology of available devices for example.com/device.
Topology Manager will use this information to store the best Topology for this container. In the case of this pod, CPU Manager and Device Manager will use this stored information at the resource allocation stage. 5.9. Resource requests and overcommitment For each compute resource, a container may specify a resource request and limit. Scheduling decisions are made based on the request to ensure that a node has enough capacity available to meet the requested value. If a container specifies limits, but omits requests, the requests are defaulted to the limits. A container is not able to exceed the specified limit on the node. The enforcement of limits is dependent upon the compute resource type. If a container makes no request or limit, the container is scheduled to a node with no resource guarantees. In practice, the container is able to consume as much of the specified resource as is available with the lowest local priority. In low resource situations, containers that specify no resource requests are given the lowest quality of service. Scheduling is based on resources requested, while quota and hard limits refer to resource limits, which can be set higher than requested resources. The difference between request and limit determines the level of overcommit; for instance, if a container is given a memory request of 1Gi and a memory limit of 2Gi, it is scheduled based on the 1Gi request being available on the node, but could use up to 2Gi; so it is 200% overcommitted. 5.10. Cluster-level overcommit using the Cluster Resource Override Operator The Cluster Resource Override Operator is an admission webhook that allows you to control the level of overcommit and manage container density across all the nodes in your cluster. The Operator controls how nodes in specific projects can exceed defined memory and CPU limits. You must install the Cluster Resource Override Operator using the OpenShift Container Platform console or CLI as shown in the following sections. During the installation, you create a ClusterResourceOverride custom resource (CR), where you set the level of overcommit, as shown in the following example: apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: name: cluster 1 spec: podResourceOverride: spec: memoryRequestToLimitPercent: 50 2 cpuRequestToLimitPercent: 25 3 limitCPUToMemoryPercent: 200 4 1 The name must be cluster . 2 Optional. If a container memory limit has been specified or defaulted, the memory request is overridden to this percentage of the limit, between 1-100. The default is 50. 3 Optional. If a container CPU limit has been specified or defaulted, the CPU request is overridden to this percentage of the limit, between 1-100. The default is 25. 4 Optional. If a container memory limit has been specified or defaulted, the CPU limit is overridden to a percentage of the memory limit, if specified. Scaling 1Gi of RAM at 100 percent is equal to 1 CPU core. This is processed prior to overriding the CPU request (if configured). The default is 200. Note The Cluster Resource Override Operator overrides have no effect if limits have not been set on containers. Create a LimitRange object with default limits per individual project or configure limits in Pod specs for the overrides to apply. When configured, overrides can be enabled per-project by applying the following label to the Namespace object for each project: apiVersion: v1 kind: Namespace metadata: .... labels: clusterresourceoverrides.admission.autoscaling.openshift.io/enabled: "true" .... 
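As a rough illustration of the arithmetic, assuming the example values above ( 50 , 25 , and 200 ) and a container in a labeled project that sets only limits, the admission webhook would mutate the resources stanza approximately as follows. The numbers below are derived from those example percentages, not from fixed product defaults.

Resources as submitted (limits only, illustrative values)

resources:
  limits:
    memory: 512Mi
    cpu: "2"

Approximate result after the override

resources:
  limits:
    memory: 512Mi
    cpu: "1"        # limitCPUToMemoryPercent: 200 -> 200% of 0.5Gi of memory = 1 core
  requests:
    memory: 256Mi   # memoryRequestToLimitPercent: 50 -> 50% of 512Mi
    cpu: 250m       # cpuRequestToLimitPercent: 25 -> 25% of the overridden 1 core limit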
The Operator watches for the ClusterResourceOverride CR and ensures that the ClusterResourceOverride admission webhook is installed into the same namespace as the operator. 5.10.1. Installing the Cluster Resource Override Operator using the web console You can use the OpenShift Container Platform web console to install the Cluster Resource Override Operator to help control overcommit in your cluster. Prerequisites The Cluster Resource Override Operator has no effect if limits have not been set on containers. You must specify default limits for a project using a LimitRange object or configure limits in Pod specs for the overrides to apply. Procedure To install the Cluster Resource Override Operator using the OpenShift Container Platform web console: In the OpenShift Container Platform web console, navigate to Home Projects Click Create Project . Specify clusterresourceoverride-operator as the name of the project. Click Create . Navigate to Operators OperatorHub . Choose ClusterResourceOverride Operator from the list of available Operators and click Install . On the Install Operator page, make sure A specific Namespace on the cluster is selected for Installation Mode . Make sure clusterresourceoverride-operator is selected for Installed Namespace . Select an Update Channel and Approval Strategy . Click Install . On the Installed Operators page, click ClusterResourceOverride . On the ClusterResourceOverride Operator details page, click Create Instance . On the Create ClusterResourceOverride page, edit the YAML template to set the overcommit values as needed: apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: name: cluster 1 spec: podResourceOverride: spec: memoryRequestToLimitPercent: 50 2 cpuRequestToLimitPercent: 25 3 limitCPUToMemoryPercent: 200 4 1 The name must be cluster . 2 Optional. Specify the percentage to override the container memory limit, if used, between 1-100. The default is 50. 3 Optional. Specify the percentage to override the container CPU limit, if used, between 1-100. The default is 25. 4 Optional. Specify the percentage to override the container memory limit, if used. Scaling 1Gi of RAM at 100 percent is equal to 1 CPU core. This is processed prior to overriding the CPU request, if configured. The default is 200. Click Create . Check the current state of the admission webhook by checking the status of the cluster custom resource: On the ClusterResourceOverride Operator page, click cluster . On the ClusterResourceOverride Details age, click YAML . The mutatingWebhookConfigurationRef section appears when the webhook is called. apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | {"apiVersion":"operator.autoscaling.openshift.io/v1","kind":"ClusterResourceOverride","metadata":{"annotations":{},"name":"cluster"},"spec":{"podResourceOverride":{"spec":{"cpuRequestToLimitPercent":25,"limitCPUToMemoryPercent":200,"memoryRequestToLimitPercent":50}}}} creationTimestamp: "2019-12-18T22:35:02Z" generation: 1 name: cluster resourceVersion: "127622" selfLink: /apis/operator.autoscaling.openshift.io/v1/clusterresourceoverrides/cluster uid: 978fc959-1717-4bd1-97d0-ae00ee111e8d spec: podResourceOverride: spec: cpuRequestToLimitPercent: 25 limitCPUToMemoryPercent: 200 memoryRequestToLimitPercent: 50 status: .... 
mutatingWebhookConfigurationRef: 1 apiVersion: admissionregistration.k8s.io/v1beta1 kind: MutatingWebhookConfiguration name: clusterresourceoverrides.admission.autoscaling.openshift.io resourceVersion: "127621" uid: 98b3b8ae-d5ce-462b-8ab5-a729ea8f38f3 .... 1 Reference to the ClusterResourceOverride admission webhook. 5.10.2. Installing the Cluster Resource Override Operator using the CLI You can use the OpenShift Container Platform CLI to install the Cluster Resource Override Operator to help control overcommit in your cluster. Prerequisites The Cluster Resource Override Operator has no effect if limits have not been set on containers. You must specify default limits for a project using a LimitRange object or configure limits in Pod specs for the overrides to apply. Procedure To install the Cluster Resource Override Operator using the CLI: Create a namespace for the Cluster Resource Override Operator: Create a Namespace object YAML file (for example, cro-namespace.yaml ) for the Cluster Resource Override Operator: apiVersion: v1 kind: Namespace metadata: name: clusterresourceoverride-operator Create the namespace: USD oc create -f <file-name>.yaml For example: USD oc create -f cro-namespace.yaml Create an Operator group: Create an OperatorGroup object YAML file (for example, cro-og.yaml) for the Cluster Resource Override Operator: apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: clusterresourceoverride-operator namespace: clusterresourceoverride-operator spec: targetNamespaces: - clusterresourceoverride-operator Create the Operator Group: USD oc create -f <file-name>.yaml For example: USD oc create -f cro-og.yaml Create a subscription: Create a Subscription object YAML file (for example, cro-sub.yaml) for the Cluster Resource Override Operator: apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: clusterresourceoverride namespace: clusterresourceoverride-operator spec: channel: "4.7" name: clusterresourceoverride source: redhat-operators sourceNamespace: openshift-marketplace Create the subscription: USD oc create -f <file-name>.yaml For example: USD oc create -f cro-sub.yaml Create a ClusterResourceOverride custom resource (CR) object in the clusterresourceoverride-operator namespace: Change to the clusterresourceoverride-operator namespace. USD oc project clusterresourceoverride-operator Create a ClusterResourceOverride object YAML file (for example, cro-cr.yaml) for the Cluster Resource Override Operator: apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: name: cluster 1 spec: podResourceOverride: spec: memoryRequestToLimitPercent: 50 2 cpuRequestToLimitPercent: 25 3 limitCPUToMemoryPercent: 200 4 1 The name must be cluster . 2 Optional. Specify the percentage to override the container memory limit, if used, between 1-100. The default is 50. 3 Optional. Specify the percentage to override the container CPU limit, if used, between 1-100. The default is 25. 4 Optional. Specify the percentage to override the container memory limit, if used. Scaling 1Gi of RAM at 100 percent is equal to 1 CPU core. This is processed prior to overriding the CPU request, if configured. The default is 200. Create the ClusterResourceOverride object: USD oc create -f <file-name>.yaml For example: USD oc create -f cro-cr.yaml Verify the current state of the admission webhook by checking the status of the cluster custom resource. 
USD oc get clusterresourceoverride cluster -n clusterresourceoverride-operator -o yaml The mutatingWebhookConfigurationRef section appears when the webhook is called. Example output apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | {"apiVersion":"operator.autoscaling.openshift.io/v1","kind":"ClusterResourceOverride","metadata":{"annotations":{},"name":"cluster"},"spec":{"podResourceOverride":{"spec":{"cpuRequestToLimitPercent":25,"limitCPUToMemoryPercent":200,"memoryRequestToLimitPercent":50}}}} creationTimestamp: "2019-12-18T22:35:02Z" generation: 1 name: cluster resourceVersion: "127622" selfLink: /apis/operator.autoscaling.openshift.io/v1/clusterresourceoverrides/cluster uid: 978fc959-1717-4bd1-97d0-ae00ee111e8d spec: podResourceOverride: spec: cpuRequestToLimitPercent: 25 limitCPUToMemoryPercent: 200 memoryRequestToLimitPercent: 50 status: .... mutatingWebhookConfigurationRef: 1 apiVersion: admissionregistration.k8s.io/v1beta1 kind: MutatingWebhookConfiguration name: clusterresourceoverrides.admission.autoscaling.openshift.io resourceVersion: "127621" uid: 98b3b8ae-d5ce-462b-8ab5-a729ea8f38f3 .... 1 Reference to the ClusterResourceOverride admission webhook. 5.10.3. Configuring cluster-level overcommit The Cluster Resource Override Operator requires a ClusterResourceOverride custom resource (CR) and a label for each project where you want the Operator to control overcommit. Prerequisites The Cluster Resource Override Operator has no effect if limits have not been set on containers. You must specify default limits for a project using a LimitRange object or configure limits in Pod specs for the overrides to apply. Procedure To modify cluster-level overcommit: Edit the ClusterResourceOverride CR: apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: name: cluster spec: podResourceOverride: spec: memoryRequestToLimitPercent: 50 1 cpuRequestToLimitPercent: 25 2 limitCPUToMemoryPercent: 200 3 1 Optional. Specify the percentage to override the container memory limit, if used, between 1-100. The default is 50. 2 Optional. Specify the percentage to override the container CPU limit, if used, between 1-100. The default is 25. 3 Optional. Specify the percentage to override the container memory limit, if used. Scaling 1Gi of RAM at 100 percent is equal to 1 CPU core. This is processed prior to overriding the CPU request, if configured. The default is 200. Ensure the following label has been added to the Namespace object for each project where you want the Cluster Resource Override Operator to control overcommit: apiVersion: v1 kind: Namespace metadata: ... labels: clusterresourceoverrides.admission.autoscaling.openshift.io/enabled: "true" 1 ... 1 Add this label to each project. 5.11. Node-level overcommit You can use various ways to control overcommit on specific nodes, such as quality of service (QOS) guarantees, CPU limits, or reserve resources. You can also disable overcommit for specific nodes and specific projects. 5.11.1. Understanding compute resources and containers The node-enforced behavior for compute resources is specific to the resource type. 5.11.1.1. Understanding container CPU requests A container is guaranteed the amount of CPU it requests and is additionally able to consume excess CPU available on the node, up to any limit specified by the container. 
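A minimal resources stanza that illustrates this split between the guaranteed request and the upper bound might look like the following sketch; the container name, image, and values are arbitrary examples rather than recommendations.

spec:
  containers:
  - name: app                                # example container name
    image: registry.example.com/app:latest   # placeholder image
    resources:
      requests:
        cpu: 500m     # guaranteed share, enforced through CFS shares
      limits:
        cpu: "1"      # upper bound, enforced through the CFS quota by default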
If multiple containers are attempting to use excess CPU, CPU time is distributed based on the amount of CPU requested by each container. For example, if one container requested 500m of CPU time and another container requested 250m of CPU time, then any extra CPU time available on the node is distributed among the containers in a 2:1 ratio. If a container specifies a limit, it is throttled to prevent it from using more CPU than the specified limit. CPU requests are enforced using the CFS shares support in the Linux kernel. By default, CPU limits are enforced using the CFS quota support in the Linux kernel over a 100ms measuring interval, though this can be disabled. 5.11.1.2. Understanding container memory requests A container is guaranteed the amount of memory it requests. A container can use more memory than requested, but once it exceeds its requested amount, it could be terminated in a low memory situation on the node. If a container uses less memory than requested, it will not be terminated unless system tasks or daemons need more memory than was accounted for in the node's resource reservation. If a container specifies a limit on memory, it is immediately terminated if it exceeds the limit amount. 5.11.2. Understanding overcommitment and quality of service classes A node is overcommitted when it has a pod scheduled that makes no request, or when the sum of limits across all pods on that node exceeds available machine capacity. In an overcommitted environment, it is possible that the pods on the node will attempt to use more compute resource than is available at any given point in time. When this occurs, the node must give priority to one pod over another. The facility used to make this decision is referred to as a Quality of Service (QoS) Class. For each compute resource, a container is divided into one of three QoS classes with decreasing order of priority: Table 5.2. Quality of Service Classes Priority Class Name Description 1 (highest) Guaranteed If limits and optionally requests are set (not equal to 0) for all resources and they are equal, then the container is classified as Guaranteed . 2 Burstable If requests and optionally limits are set (not equal to 0) for all resources, and they are not equal, then the container is classified as Burstable . 3 (lowest) BestEffort If requests and limits are not set for any of the resources, then the container is classified as BestEffort . Memory is an incompressible resource, so in low memory situations, containers that have the lowest priority are terminated first: Guaranteed containers are considered top priority, and are guaranteed to only be terminated if they exceed their limits, or if the system is under memory pressure and there are no lower priority containers that can be evicted. Burstable containers under system memory pressure are more likely to be terminated once they exceed their requests and no other BestEffort containers exist. BestEffort containers are treated with the lowest priority. Processes in these containers are first to be terminated if the system runs out of memory. 5.11.2.1. Understanding how to reserve memory across quality of service tiers You can use the qos-reserved parameter to specify a percentage of memory to be reserved by a pod in a particular QoS level. This feature attempts to reserve requested resources to exclude pods from lower QoS classes from using resources requested by pods in higher QoS classes.
OpenShift Container Platform uses the qos-reserved parameter as follows: A value of qos-reserved=memory=100% will prevent the Burstable and BestEffort QOS classes from consuming memory that was requested by a higher QoS class. This increases the risk of inducing OOM on BestEffort and Burstable workloads in favor of increasing memory resource guarantees for Guaranteed and Burstable workloads. A value of qos-reserved=memory=50% will allow the Burstable and BestEffort QOS classes to consume half of the memory requested by a higher QoS class. A value of qos-reserved=memory=0% will allow a Burstable and BestEffort QoS classes to consume up to the full node allocatable amount if available, but increases the risk that a Guaranteed workload will not have access to requested memory. This condition effectively disables this feature. 5.11.3. Understanding swap memory and QOS You can disable swap by default on your nodes to preserve quality of service (QOS) guarantees. Otherwise, physical resources on a node can oversubscribe, affecting the resource guarantees the Kubernetes scheduler makes during pod placement. For example, if two guaranteed pods have reached their memory limit, each container could start using swap memory. Eventually, if there is not enough swap space, processes in the pods can be terminated due to the system being oversubscribed. Failing to disable swap results in nodes not recognizing that they are experiencing MemoryPressure , resulting in pods not receiving the memory they made in their scheduling request. As a result, additional pods are placed on the node to further increase memory pressure, ultimately increasing your risk of experiencing a system out of memory (OOM) event. Important If swap is enabled, any out-of-resource handling eviction thresholds for available memory will not work as expected. Take advantage of out-of-resource handling to allow pods to be evicted from a node when it is under memory pressure, and rescheduled on an alternative node that has no such pressure. 5.11.4. Understanding nodes overcommitment In an overcommitted environment, it is important to properly configure your node to provide best system behavior. When the node starts, it ensures that the kernel tunable flags for memory management are set properly. The kernel should never fail memory allocations unless it runs out of physical memory. To ensure this behavior, OpenShift Container Platform configures the kernel to always overcommit memory by setting the vm.overcommit_memory parameter to 1 , overriding the default operating system setting. OpenShift Container Platform also configures the kernel not to panic when it runs out of memory by setting the vm.panic_on_oom parameter to 0 . A setting of 0 instructs the kernel to call oom_killer in an Out of Memory (OOM) condition, which kills processes based on priority You can view the current setting by running the following commands on your nodes: USD sysctl -a |grep commit Example output vm.overcommit_memory = 1 USD sysctl -a |grep panic Example output vm.panic_on_oom = 0 Note The above flags should already be set on nodes, and no further action is required. You can also perform the following configurations for each node: Disable or enforce CPU limits using CPU CFS quotas Reserve resources for system processes Reserve memory across quality of service tiers 5.11.5. Disabling or enforcing CPU limits using CPU CFS quotas Nodes by default enforce specified CPU limits using the Completely Fair Scheduler (CFS) quota support in the Linux kernel. 
If you disable CPU limit enforcement, it is important to understand the impact on your node: If a container has a CPU request, the request continues to be enforced by CFS shares in the Linux kernel. If a container does not have a CPU request, but does have a CPU limit, the CPU request defaults to the specified CPU limit, and is enforced by CFS shares in the Linux kernel. If a container has both a CPU request and limit, the CPU request is enforced by CFS shares in the Linux kernel, and the CPU limit has no impact on the node. Prerequisites Obtain the label associated with the static MachineConfigPool CRD for the type of node you want to configure. Perform one of the following steps: View the machine config pool: USD oc describe machineconfigpool <name> For example: USD oc describe machineconfigpool worker Example output apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: creationTimestamp: 2019-02-08T14:52:39Z generation: 1 labels: custom-kubelet: small-pods 1 1 If a label has been added it appears under labels . If the label is not present, add a key/value pair: USD oc label machineconfigpool worker custom-kubelet=small-pods Procedure Create a custom resource (CR) for your configuration change. Sample configuration for a disabling CPU limits apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: disable-cpu-units 1 spec: machineConfigPoolSelector: matchLabels: custom-kubelet: small-pods 2 kubeletConfig: cpuCfsQuota: 3 - "false" 1 Assign a name to CR. 2 Specify the label to apply the configuration change. 3 Set the cpuCfsQuota parameter to false . 5.11.6. Reserving resources for system processes To provide more reliable scheduling and minimize node resource overcommitment, each node can reserve a portion of its resources for use by system daemons that are required to run on your node for your cluster to function. In particular, it is recommended that you reserve resources for incompressible resources such as memory. Procedure To explicitly reserve resources for non-pod processes, allocate node resources by specifying resources available for scheduling. For more details, see Allocating Resources for Nodes. 5.11.7. Disabling overcommitment for a node When enabled, overcommitment can be disabled on each node. Procedure To disable overcommitment in a node run the following command on that node: USD sysctl -w vm.overcommit_memory=0 5.12. Project-level limits To help control overcommit, you can set per-project resource limit ranges, specifying memory and CPU limits and defaults for a project that overcommit cannot exceed. For information on project-level resource limits, see Additional resources. Alternatively, you can disable overcommitment for specific projects. 5.12.1. Disabling overcommitment for a project When enabled, overcommitment can be disabled per-project. For example, you can allow infrastructure components to be configured independently of overcommitment. Procedure To disable overcommitment in a project: Edit the project object file Add the following annotation: quota.openshift.io/cluster-resource-override-enabled: "false" Create the project object: USD oc create -f <file-name>.yaml 5.13. Freeing node resources using garbage collection Understand and use garbage collection. 5.13.1. Understanding how terminated containers are removed through garbage collection Container garbage collection can be performed using eviction thresholds. 
When eviction thresholds are set for garbage collection, the node tries to keep any container for any pod accessible from the API. If the pod has been deleted, the containers will be as well. Containers are preserved as long as the pod is not deleted and the eviction threshold is not reached. If the node is under disk pressure, it will remove containers and their logs will no longer be accessible using oc logs . eviction-soft - A soft eviction threshold pairs an eviction threshold with a required administrator-specified grace period. eviction-hard - A hard eviction threshold has no grace period, and if observed, OpenShift Container Platform takes immediate action. The following table lists the eviction thresholds: Table 5.3. Variables for configuring container garbage collection Node condition Eviction signal Description MemoryPressure memory.available The available memory on the node. DiskPressure nodefs.available nodefs.inodesFree imagefs.available imagefs.inodesFree The available disk space or inodes on the node root file system, nodefs , or image file system, imagefs . Note For evictionHard you must specify all of these parameters. If you do not specify all parameters, only the specified parameters are applied and the garbage collection will not function properly. If a node is oscillating above and below a soft eviction threshold, but not exceeding its associated grace period, the corresponding node would constantly oscillate between true and false . As a consequence, the scheduler could make poor scheduling decisions. To protect against this oscillation, use the eviction-pressure-transition-period flag to control how long OpenShift Container Platform must wait before transitioning out of a pressure condition. OpenShift Container Platform will not set an eviction threshold as being met for the specified pressure condition for the period specified before toggling the condition back to false. 5.13.2. Understanding how images are removed through garbage collection Image garbage collection relies on disk usage as reported by cAdvisor on the node to decide which images to remove from the node. The policy for image garbage collection is based on two conditions: The percent of disk usage (expressed as an integer) which triggers image garbage collection. The default is 85 . The percent of disk usage (expressed as an integer) to which image garbage collection attempts to free. The default is 80 . For image garbage collection, you can modify any of the following variables using a custom resource. Table 5.4. Variables for configuring image garbage collection Setting Description imageMinimumGCAge The minimum age for an unused image before the image is removed by garbage collection. The default is 2m . imageGCHighThresholdPercent The percent of disk usage, expressed as an integer, which triggers image garbage collection. The default is 85 . imageGCLowThresholdPercent The percent of disk usage, expressed as an integer, to which image garbage collection attempts to free. The default is 80 . Two lists of images are retrieved in each garbage collector run: A list of images currently running in at least one pod. A list of images available on a host. As new containers are run, new images appear. All images are marked with a time stamp. If the image is running (the first list above) or is newly detected (the second list above), it is marked with the current time. The remaining images are already marked from the previous spins. All images are then sorted by the time stamp.
Once the collection starts, the oldest images get deleted first until the stopping criterion is met. 5.13.3. Configuring garbage collection for containers and images As an administrator, you can configure how OpenShift Container Platform performs garbage collection by creating a kubeletConfig object for each machine config pool. Note OpenShift Container Platform supports only one kubeletConfig object for each machine config pool. You can configure any combination of the following: Soft eviction for containers Hard eviction for containers Eviction for images Prerequisites Obtain the label associated with the static MachineConfigPool CRD for the type of node you want to configure. Perform one of the following steps: View the machine config pool: USD oc describe machineconfigpool <name> For example: USD oc describe machineconfigpool worker Example output Name: worker Namespace: Labels: custom-kubelet=small-pods 1 1 If a label has been added it appears under Labels . If the label is not present, add a key/value pair: USD oc label machineconfigpool worker custom-kubelet=small-pods Procedure Create a custom resource (CR) for your configuration change. Important If there is one file system, or if /var/lib/kubelet and /var/lib/containers/ are in the same file system, the settings with the highest values trigger evictions, as those are met first. The file system triggers the eviction. Sample configuration for a container garbage collection CR: apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: worker-kubeconfig 1 spec: machineConfigPoolSelector: matchLabels: custom-kubelet: small-pods 2 kubeletConfig: evictionSoft: 3 memory.available: "500Mi" 4 nodefs.available: "10%" nodefs.inodesFree: "5%" imagefs.available: "15%" imagefs.inodesFree: "10%" evictionSoftGracePeriod: 5 memory.available: "1m30s" nodefs.available: "1m30s" nodefs.inodesFree: "1m30s" imagefs.available: "1m30s" imagefs.inodesFree: "1m30s" evictionHard: 6 memory.available: "200Mi" nodefs.available: "5%" nodefs.inodesFree: "4%" imagefs.available: "10%" imagefs.inodesFree: "5%" evictionPressureTransitionPeriod: 0s 7 imageMinimumGCAge: 5m 8 imageGCHighThresholdPercent: 80 9 imageGCLowThresholdPercent: 75 10 1 Name for the object. 2 Selector label. 3 Type of eviction: evictionSoft or evictionHard . 4 Eviction thresholds based on a specific eviction trigger signal. 5 Grace periods for the soft eviction. This parameter does not apply to eviction-hard . 6 Eviction thresholds based on a specific eviction trigger signal. For evictionHard you must specify all of these parameters. If you do not specify all parameters, only the specified parameters are applied and the garbage collection will not function properly. 7 The duration to wait before transitioning out of an eviction pressure condition. 8 The minimum age for an unused image before the image is removed by garbage collection. 9 The percent of disk usage (expressed as an integer) that triggers image garbage collection. 10 The percent of disk usage (expressed as an integer) that image garbage collection attempts to free. Create the object: USD oc create -f <file-name>.yaml For example: USD oc create -f gc-container.yaml Example output kubeletconfig.machineconfiguration.openshift.io/gc-container created Verify that garbage collection is active. 
The Machine Config Pool you specified in the custom resource appears with UPDATING as 'true` until the change is fully implemented: USD oc get machineconfigpool Example output NAME CONFIG UPDATED UPDATING master rendered-master-546383f80705bd5aeaba93 True False worker rendered-worker-b4c51bb33ccaae6fc4a6a5 False True 5.14. Using the Node Tuning Operator Understand and use the Node Tuning Operator. The Node Tuning Operator helps you manage node-level tuning by orchestrating the Tuned daemon. The majority of high-performance applications require some level of kernel tuning. The Node Tuning Operator provides a unified management interface to users of node-level sysctls and more flexibility to add custom tuning specified by user needs. The Operator manages the containerized Tuned daemon for OpenShift Container Platform as a Kubernetes daemon set. It ensures the custom tuning specification is passed to all containerized Tuned daemons running in the cluster in the format that the daemons understand. The daemons run on all nodes in the cluster, one per node. Node-level settings applied by the containerized Tuned daemon are rolled back on an event that triggers a profile change or when the containerized Tuned daemon is terminated gracefully by receiving and handling a termination signal. The Node Tuning Operator is part of a standard OpenShift Container Platform installation in version 4.1 and later. 5.14.1. Accessing an example Node Tuning Operator specification Use this process to access an example Node Tuning Operator specification. Procedure Run: USD oc get Tuned/default -o yaml -n openshift-cluster-node-tuning-operator The default CR is meant for delivering standard node-level tuning for the OpenShift Container Platform platform and it can only be modified to set the Operator Management state. Any other custom changes to the default CR will be overwritten by the Operator. For custom tuning, create your own Tuned CRs. Newly created CRs will be combined with the default CR and custom tuning applied to OpenShift Container Platform nodes based on node or pod labels and profile priorities. Warning While in certain situations the support for pod labels can be a convenient way of automatically delivering required tuning, this practice is discouraged and strongly advised against, especially in large-scale clusters. The default Tuned CR ships without pod label matching. If a custom profile is created with pod label matching, then the functionality will be enabled at that time. The pod label functionality might be deprecated in future versions of the Node Tuning Operator. 5.14.2. Custom tuning specification The custom resource (CR) for the Operator has two major sections. The first section, profile: , is a list of Tuned profiles and their names. The second, recommend: , defines the profile selection logic. Multiple custom tuning specifications can co-exist as multiple CRs in the Operator's namespace. The existence of new CRs or the deletion of old CRs is detected by the Operator. All existing custom tuning specifications are merged and appropriate objects for the containerized Tuned daemons are updated. Management state The Operator Management state is set by adjusting the default Tuned CR. By default, the Operator is in the Managed state and the spec.managementState field is not present in the default Tuned CR. 
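If you do need to change the state, a minimal sketch of the edit is shown below; Unmanaged is used here only as an example value, and the valid values are listed next.

apiVersion: tuned.openshift.io/v1
kind: Tuned
metadata:
  name: default
  namespace: openshift-cluster-node-tuning-operator
spec:
  managementState: Unmanaged    # example value; see the list of valid values that follows
  # ...the existing profile: and recommend: sections of the default CR are left unchanged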
Valid values for the Operator Management state are as follows: Managed: the Operator will update its operands as configuration resources are updated Unmanaged: the Operator will ignore changes to the configuration resources Removed: the Operator will remove its operands and resources the Operator provisioned Profile data The profile: section lists Tuned profiles and their names. profile: - name: tuned_profile_1 data: | # Tuned profile specification [main] summary=Description of tuned_profile_1 profile [sysctl] net.ipv4.ip_forward=1 # ... other sysctl's or other Tuned daemon plugins supported by the containerized Tuned # ... - name: tuned_profile_n data: | # Tuned profile specification [main] summary=Description of tuned_profile_n profile # tuned_profile_n profile settings Recommended profiles The profile: selection logic is defined by the recommend: section of the CR. The recommend: section is a list of items to recommend the profiles based on a selection criteria. recommend: <recommend-item-1> # ... <recommend-item-n> The individual items of the list: - machineConfigLabels: 1 <mcLabels> 2 match: 3 <match> 4 priority: <priority> 5 profile: <tuned_profile_name> 6 operand: 7 debug: <bool> 8 1 Optional. 2 A dictionary of key/value MachineConfig labels. The keys must be unique. 3 If omitted, profile match is assumed unless a profile with a higher priority matches first or machineConfigLabels is set. 4 An optional list. 5 Profile ordering priority. Lower numbers mean higher priority ( 0 is the highest priority). 6 A TuneD profile to apply on a match. For example tuned_profile_1 . 7 Optional operand configuration. 8 Turn debugging on or off for the TuneD daemon. Options are true for on or false for off. The default is false . <match> is an optional list recursively defined as follows: - label: <label_name> 1 value: <label_value> 2 type: <label_type> 3 <match> 4 1 Node or pod label name. 2 Optional node or pod label value. If omitted, the presence of <label_name> is enough to match. 3 Optional object type ( node or pod ). If omitted, node is assumed. 4 An optional <match> list. If <match> is not omitted, all nested <match> sections must also evaluate to true . Otherwise, false is assumed and the profile with the respective <match> section will not be applied or recommended. Therefore, the nesting (child <match> sections) works as logical AND operator. Conversely, if any item of the <match> list matches, the entire <match> list evaluates to true . Therefore, the list acts as logical OR operator. If machineConfigLabels is defined, machine config pool based matching is turned on for the given recommend: list item. <mcLabels> specifies the labels for a machine config. The machine config is created automatically to apply host settings, such as kernel boot parameters, for the profile <tuned_profile_name> . This involves finding all machine config pools with machine config selector matching <mcLabels> and setting the profile <tuned_profile_name> on all nodes that are assigned the found machine config pools. To target nodes that have both master and worker roles, you must use the master role. The list items match and machineConfigLabels are connected by the logical OR operator. The match item is evaluated first in a short-circuit manner. Therefore, if it evaluates to true , the machineConfigLabels item is not considered. Important When using machine config pool based matching, it is advised to group nodes with the same hardware configuration into the same machine config pool. 
Not following this practice might result in Tuned operands calculating conflicting kernel parameters for two or more nodes sharing the same machine config pool. Example: node or pod label based matching - match: - label: tuned.openshift.io/elasticsearch match: - label: node-role.kubernetes.io/master - label: node-role.kubernetes.io/infra type: pod priority: 10 profile: openshift-control-plane-es - match: - label: node-role.kubernetes.io/master - label: node-role.kubernetes.io/infra priority: 20 profile: openshift-control-plane - priority: 30 profile: openshift-node The CR above is translated for the containerized Tuned daemon into its recommend.conf file based on the profile priorities. The profile with the highest priority ( 10 ) is openshift-control-plane-es and, therefore, it is considered first. The containerized Tuned daemon running on a given node looks to see if there is a pod running on the same node with the tuned.openshift.io/elasticsearch label set. If not, the entire <match> section evaluates as false . If there is such a pod with the label, in order for the <match> section to evaluate to true , the node label also needs to be node-role.kubernetes.io/master or node-role.kubernetes.io/infra . If the labels for the profile with priority 10 matched, openshift-control-plane-es profile is applied and no other profile is considered. If the node/pod label combination did not match, the second highest priority profile ( openshift-control-plane ) is considered. This profile is applied if the containerized Tuned pod runs on a node with labels node-role.kubernetes.io/master or node-role.kubernetes.io/infra . Finally, the profile openshift-node has the lowest priority of 30 . It lacks the <match> section and, therefore, will always match. It acts as a profile catch-all to set openshift-node profile, if no other profile with higher priority matches on a given node. Example: machine config pool based matching apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: openshift-node-custom namespace: openshift-cluster-node-tuning-operator spec: profile: - data: | [main] summary=Custom OpenShift node profile with an additional kernel parameter include=openshift-node [bootloader] cmdline_openshift_node_custom=+skew_tick=1 name: openshift-node-custom recommend: - machineConfigLabels: machineconfiguration.openshift.io/role: "worker-custom" priority: 20 profile: openshift-node-custom To minimize node reboots, label the target nodes with a label the machine config pool's node selector will match, then create the Tuned CR above and finally create the custom machine config pool itself. 5.14.3. Default profiles set on a cluster The following are the default profiles set on a cluster. 
apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: default namespace: openshift-cluster-node-tuning-operator spec: profile: - name: "openshift" data: | [main] summary=Optimize systems running OpenShift (parent profile) include=USD{f:virt_check:virtual-guest:throughput-performance} [selinux] avc_cache_threshold=8192 [net] nf_conntrack_hashsize=131072 [sysctl] net.ipv4.ip_forward=1 kernel.pid_max=>4194304 net.netfilter.nf_conntrack_max=1048576 net.ipv4.conf.all.arp_announce=2 net.ipv4.neigh.default.gc_thresh1=8192 net.ipv4.neigh.default.gc_thresh2=32768 net.ipv4.neigh.default.gc_thresh3=65536 net.ipv6.neigh.default.gc_thresh1=8192 net.ipv6.neigh.default.gc_thresh2=32768 net.ipv6.neigh.default.gc_thresh3=65536 vm.max_map_count=262144 [sysfs] /sys/module/nvme_core/parameters/io_timeout=4294967295 /sys/module/nvme_core/parameters/max_retries=10 - name: "openshift-control-plane" data: | [main] summary=Optimize systems running OpenShift control plane include=openshift [sysctl] # ktune sysctl settings, maximizing i/o throughput # # Minimal preemption granularity for CPU-bound tasks: # (default: 1 msec# (1 + ilog(ncpus)), units: nanoseconds) kernel.sched_min_granularity_ns=10000000 # The total time the scheduler will consider a migrated process # "cache hot" and thus less likely to be re-migrated # (system default is 500000, i.e. 0.5 ms) kernel.sched_migration_cost_ns=5000000 # SCHED_OTHER wake-up granularity. # # Preemption granularity when tasks wake up. Lower the value to # improve wake-up latency and throughput for latency critical tasks. kernel.sched_wakeup_granularity_ns=4000000 - name: "openshift-node" data: | [main] summary=Optimize systems running OpenShift nodes include=openshift [sysctl] net.ipv4.tcp_fastopen=3 fs.inotify.max_user_watches=65536 fs.inotify.max_user_instances=8192 recommend: - profile: "openshift-control-plane" priority: 30 match: - label: "node-role.kubernetes.io/master" - label: "node-role.kubernetes.io/infra" - profile: "openshift-node" priority: 40 5.14.4. Supported Tuned daemon plug-ins Excluding the [main] section, the following Tuned plug-ins are supported when using custom profiles defined in the profile: section of the Tuned CR: audio cpu disk eeepc_she modules mounts net scheduler scsi_host selinux sysctl sysfs usb video vm There is some dynamic tuning functionality provided by some of these plug-ins that is not supported. The following Tuned plug-ins are currently not supported: bootloader script systemd See Available Tuned Plug-ins and Getting Started with Tuned for more information. 5.15. Configuring the maximum number of pods per node Two parameters control the maximum number of pods that can be scheduled to a node: podsPerCore and maxPods . If you use both options, the lower of the two limits the number of pods on a node. For example, if podsPerCore is set to 10 on a node with 4 processor cores, the maximum number of pods allowed on the node will be 40. Prerequisites Obtain the label associated with the static MachineConfigPool CRD for the type of node you want to configure. Perform one of the following steps: View the machine config pool: USD oc describe machineconfigpool <name> For example: USD oc describe machineconfigpool worker Example output apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: creationTimestamp: 2019-02-08T14:52:39Z generation: 1 labels: custom-kubelet: small-pods 1 1 If a label has been added it appears under labels . 
If the label is not present, add a key/value pair: USD oc label machineconfigpool worker custom-kubelet=small-pods Procedure Create a custom resource (CR) for your configuration change. Sample configuration for a max-pods CR apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: set-max-pods 1 spec: machineConfigPoolSelector: matchLabels: custom-kubelet: small-pods 2 kubeletConfig: podsPerCore: 10 3 maxPods: 250 4 1 Assign a name to CR. 2 Specify the label to apply the configuration change. 3 Specify the number of pods the node can run based on the number of processor cores on the node. 4 Specify the number of pods the node can run to a fixed value, regardless of the properties of the node. Note Setting podsPerCore to 0 disables this limit. In the above example, the default value for podsPerCore is 10 and the default value for maxPods is 250 . This means that unless the node has 25 cores or more, by default, podsPerCore will be the limiting factor. List the MachineConfigPool CRDs to see if the change is applied. The UPDATING column reports True if the change is picked up by the Machine Config Controller: USD oc get machineconfigpools Example output NAME CONFIG UPDATED UPDATING DEGRADED master master-9cc2c72f205e103bb534 False False False worker worker-8cecd1236b33ee3f8a5e False True False Once the change is complete, the UPDATED column reports True . USD oc get machineconfigpools Example output NAME CONFIG UPDATED UPDATING DEGRADED master master-9cc2c72f205e103bb534 False True False worker worker-8cecd1236b33ee3f8a5e True False False | [
"subscription-manager register --username=<user_name> --password=<password>",
"subscription-manager refresh",
"subscription-manager list --available --matches '*OpenShift*'",
"subscription-manager attach --pool=<pool_id>",
"subscription-manager repos --enable=\"rhel-7-server-rpms\" --enable=\"rhel-7-server-extras-rpms\" --enable=\"rhel-7-server-ansible-2.9-rpms\" --enable=\"rhel-7-server-ose-4.7-rpms\"",
"yum install openshift-ansible openshift-clients jq",
"subscription-manager register --username=<user_name> --password=<password>",
"subscription-manager refresh",
"subscription-manager list --available --matches '*OpenShift*'",
"subscription-manager attach --pool=<pool_id>",
"subscription-manager repos --disable=\"*\"",
"yum repolist",
"yum-config-manager --disable <repo_id>",
"yum-config-manager --disable \\*",
"subscription-manager repos --enable=\"rhel-7-server-rpms\" --enable=\"rhel-7-fast-datapath-rpms\" --enable=\"rhel-7-server-extras-rpms\" --enable=\"rhel-7-server-optional-rpms\" --enable=\"rhel-7-server-ose-4.7-rpms\"",
"systemctl disable --now firewalld.service",
"[all:vars] ansible_user=root 1 #ansible_become=True 2 openshift_kubeconfig_path=\"~/.kube/config\" 3 [new_workers] 4 mycluster-rhel7-0.example.com mycluster-rhel7-1.example.com",
"cd /usr/share/ansible/openshift-ansible",
"ansible-playbook -i /<path>/inventory/hosts playbooks/scaleup.yml 1",
"oc get nodes -o wide",
"oc adm cordon <node_name> 1",
"oc adm drain <node_name> --force --delete-emptydir-data --ignore-daemonsets 1",
"oc delete nodes <node_name> 1",
"oc get nodes -o wide",
"coreos.inst.install_dev=sda 1 coreos.inst.ignition_url=http://example.com/worker.ign 2",
"DEFAULT pxeboot TIMEOUT 20 PROMPT 0 LABEL pxeboot KERNEL http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> 1 APPEND initrd=http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/worker.ign coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img 2",
"kernel http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> initrd=main coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/worker.ign coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img 1 initrd --name main http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img 2",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.20.0 master-1 Ready master 63m v1.20.0 master-2 Ready master 64m v1.20.0",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.20.0 master-1 Ready master 73m v1.20.0 master-2 Ready master 74m v1.20.0 worker-0 Ready worker 11m v1.20.0 worker-1 Ready worker 11m v1.20.0",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineHealthCheck metadata: name: example 1 namespace: openshift-machine-api spec: selector: matchLabels: machine.openshift.io/cluster-api-machine-role: <role> 2 machine.openshift.io/cluster-api-machine-type: <role> 3 machine.openshift.io/cluster-api-machineset: <cluster_name>-<label>-<zone> 4 unhealthyConditions: - type: \"Ready\" timeout: \"300s\" 5 status: \"False\" - type: \"Ready\" timeout: \"300s\" 6 status: \"Unknown\" maxUnhealthy: \"40%\" 7 nodeStartupTimeout: \"10m\" 8",
"oc apply -f healthcheck.yml",
"oc get machinesets -n openshift-machine-api",
"oc get machine -n openshift-machine-api",
"oc annotate machine/<machine_name> -n openshift-machine-api machine.openshift.io/cluster-api-delete-machine=\"true\"",
"oc adm cordon <node_name> oc adm drain <node_name>",
"oc scale --replicas=2 machineset <machineset> -n openshift-machine-api",
"oc edit machineset <machineset> -n openshift-machine-api",
"oc get machines",
"kubeletConfig: podsPerCore: 10",
"kubeletConfig: maxPods: 250",
"oc get kubeletconfig",
"NAME AGE set-max-pods 15m",
"oc get mc | grep kubelet",
"99-worker-generated-kubelet-1 b5c5119de007945b6fe6fb215db3b8e2ceb12511 3.2.0 26m",
"oc describe machineconfigpool <name>",
"oc describe machineconfigpool worker",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: creationTimestamp: 2019-02-08T14:52:39Z generation: 1 labels: custom-kubelet: set-max-pods 1",
"oc label machineconfigpool worker custom-kubelet=set-max-pods",
"oc get machineconfig",
"oc describe node <node_name>",
"oc describe node ci-ln-5grqprb-f76d1-ncnqq-worker-a-mdv94",
"Allocatable: attachable-volumes-aws-ebs: 25 cpu: 3500m hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 15341844Ki pods: 250",
"apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: set-max-pods spec: machineConfigPoolSelector: matchLabels: custom-kubelet: set-max-pods 1 kubeletConfig: maxPods: 500 2",
"apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: set-max-pods spec: machineConfigPoolSelector: matchLabels: custom-kubelet: set-max-pods kubeletConfig: maxPods: <pod_count> kubeAPIBurst: <burst_rate> kubeAPIQPS: <QPS>",
"oc label machineconfigpool worker custom-kubelet=large-pods",
"oc create -f change-maxPods-cr.yaml",
"oc get kubeletconfig",
"NAME AGE set-max-pods 15m",
"oc describe node <node_name>",
"Allocatable: attachable-volumes-gce-pd: 127 cpu: 3500m ephemeral-storage: 123201474766 hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 14225400Ki pods: 500 1",
"oc get kubeletconfigs set-max-pods -o yaml",
"spec: kubeletConfig: maxPods: 500 machineConfigPoolSelector: matchLabels: custom-kubelet: set-max-pods status: conditions: - lastTransitionTime: \"2021-06-30T17:04:07Z\" message: Success status: \"True\" type: Success",
"oc edit machineconfigpool worker",
"spec: maxUnavailable: <node_count>",
"oc label node perf-node.example.com cpumanager=true",
"oc edit machineconfigpool worker",
"metadata: creationTimestamp: 2020-xx-xxx generation: 3 labels: custom-kubelet: cpumanager-enabled",
"apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: cpumanager-enabled spec: machineConfigPoolSelector: matchLabels: custom-kubelet: cpumanager-enabled kubeletConfig: cpuManagerPolicy: static 1 cpuManagerReconcilePeriod: 5s 2",
"oc create -f cpumanager-kubeletconfig.yaml",
"oc get machineconfig 99-worker-XXXXXX-XXXXX-XXXX-XXXXX-kubelet -o json | grep ownerReference -A7",
"\"ownerReferences\": [ { \"apiVersion\": \"machineconfiguration.openshift.io/v1\", \"kind\": \"KubeletConfig\", \"name\": \"cpumanager-enabled\", \"uid\": \"7ed5616d-6b72-11e9-aae1-021e1ce18878\" } ]",
"oc debug node/perf-node.example.com sh-4.2# cat /host/etc/kubernetes/kubelet.conf | grep cpuManager",
"cpuManagerPolicy: static 1 cpuManagerReconcilePeriod: 5s 2",
"cat cpumanager-pod.yaml",
"apiVersion: v1 kind: Pod metadata: generateName: cpumanager- spec: containers: - name: cpumanager image: gcr.io/google_containers/pause-amd64:3.0 resources: requests: cpu: 1 memory: \"1G\" limits: cpu: 1 memory: \"1G\" nodeSelector: cpumanager: \"true\"",
"oc create -f cpumanager-pod.yaml",
"oc describe pod cpumanager",
"Name: cpumanager-6cqz7 Namespace: default Priority: 0 PriorityClassName: <none> Node: perf-node.example.com/xxx.xx.xx.xxx Limits: cpu: 1 memory: 1G Requests: cpu: 1 memory: 1G QoS Class: Guaranteed Node-Selectors: cpumanager=true",
"├─init.scope │ └─1 /usr/lib/systemd/systemd --switched-root --system --deserialize 17 └─kubepods.slice ├─kubepods-pod69c01f8e_6b74_11e9_ac0f_0a2b62178a22.slice │ ├─crio-b5437308f1a574c542bdf08563b865c0345c8f8c0b0a655612c.scope │ └─32706 /pause",
"cd /sys/fs/cgroup/cpuset/kubepods.slice/kubepods-pod69c01f8e_6b74_11e9_ac0f_0a2b62178a22.slice/crio-b5437308f1ad1a7db0574c542bdf08563b865c0345c86e9585f8c0b0a655612c.scope for i in `ls cpuset.cpus tasks` ; do echo -n \"USDi \"; cat USDi ; done",
"cpuset.cpus 1 tasks 32706",
"grep ^Cpus_allowed_list /proc/32706/status",
"Cpus_allowed_list: 1",
"cat /sys/fs/cgroup/cpuset/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc494a073_6b77_11e9_98c0_06bba5c387ea.slice/crio-c56982f57b75a2420947f0afc6cafe7534c5734efc34157525fa9abbf99e3849.scope/cpuset.cpus 0 oc describe node perf-node.example.com",
"Capacity: attachable-volumes-aws-ebs: 39 cpu: 2 ephemeral-storage: 124768236Ki hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 8162900Ki pods: 250 Allocatable: attachable-volumes-aws-ebs: 39 cpu: 1500m ephemeral-storage: 124768236Ki hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 7548500Ki pods: 250 ------- ---- ------------ ---------- --------------- ------------- --- default cpumanager-6cqz7 1 (66%) 1 (66%) 1G (12%) 1G (12%) 29m Allocated resources: (Total limits may be over 100 percent, i.e., overcommitted.) Resource Requests Limits -------- -------- ------ cpu 1440m (96%) 1 (66%)",
"NAME READY STATUS RESTARTS AGE cpumanager-6cqz7 1/1 Running 0 33m cpumanager-7qc2t 0/1 Pending 0 11s",
"apiVersion: v1 kind: Pod metadata: generateName: hugepages-volume- spec: containers: - securityContext: privileged: true image: rhel7:latest command: - sleep - inf name: example volumeMounts: - mountPath: /dev/hugepages name: hugepage resources: limits: hugepages-2Mi: 100Mi 1 memory: \"1Gi\" cpu: \"1\" volumes: - name: hugepage emptyDir: medium: HugePages",
"oc label node <node_using_hugepages> node-role.kubernetes.io/worker-hp=",
"apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: hugepages 1 namespace: openshift-cluster-node-tuning-operator spec: profile: 2 - data: | [main] summary=Boot time configuration for hugepages include=openshift-node [bootloader] cmdline_openshift_node_hugepages=hugepagesz=2M hugepages=50 3 name: openshift-node-hugepages recommend: - machineConfigLabels: 4 machineconfiguration.openshift.io/role: \"worker-hp\" priority: 30 profile: openshift-node-hugepages",
"oc create -f hugepages-tuned-boottime.yaml",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: worker-hp labels: worker-hp: \"\" spec: machineConfigSelector: matchExpressions: - {key: machineconfiguration.openshift.io/role, operator: In, values: [worker,worker-hp]} nodeSelector: matchLabels: node-role.kubernetes.io/worker-hp: \"\"",
"oc create -f hugepages-mcp.yaml",
"oc get node <node_using_hugepages> -o jsonpath=\"{.status.allocatable.hugepages-2Mi}\" 100Mi",
"service DevicePlugin { // GetDevicePluginOptions returns options to be communicated with Device // Manager rpc GetDevicePluginOptions(Empty) returns (DevicePluginOptions) {} // ListAndWatch returns a stream of List of Devices // Whenever a Device state change or a Device disappears, ListAndWatch // returns the new list rpc ListAndWatch(Empty) returns (stream ListAndWatchResponse) {} // Allocate is called during container creation so that the Device // Plug-in can run device specific operations and instruct Kubelet // of the steps to make the Device available in the container rpc Allocate(AllocateRequest) returns (AllocateResponse) {} // PreStartcontainer is called, if indicated by Device Plug-in during // registration phase, before each container start. Device plug-in // can run device specific operations such as reseting the device // before making devices available to the container rpc PreStartcontainer(PreStartcontainerRequest) returns (PreStartcontainerResponse) {} }",
"oc describe machineconfig <name>",
"oc describe machineconfig 00-worker",
"Name: 00-worker Namespace: Labels: machineconfiguration.openshift.io/role=worker 1",
"apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: devicemgr 1 spec: machineConfigPoolSelector: matchLabels: machineconfiguration.openshift.io: devicemgr 2 kubeletConfig: feature-gates: - DevicePlugins=true 3",
"oc create -f devicemgr.yaml",
"kubeletconfig.machineconfiguration.openshift.io/devicemgr created",
"spec: . template: . spec: taints: - effect: NoExecute key: key1 value: value1 .",
"spec: . template: . spec: tolerations: - key: \"key1\" operator: \"Equal\" value: \"value1\" effect: \"NoExecute\" tolerationSeconds: 3600 .",
"apiVersion: v1 kind: Node metadata: annotations: machine.openshift.io/machine: openshift-machine-api/ci-ln-62s7gtb-f76d1-v8jxv-master-0 machineconfiguration.openshift.io/currentConfig: rendered-master-cdc1ab7da414629332cc4c3926e6e59c spec: taints: - effect: NoSchedule key: node-role.kubernetes.io/master",
"spec: . template: . spec: tolerations: - key: \"key1\" operator: \"Equal\" value: \"value1\" effect: \"NoExecute\" tolerationSeconds: 3600",
"oc adm taint nodes node1 key1=value1:NoSchedule",
"oc adm taint nodes node1 key1=value1:NoExecute",
"oc adm taint nodes node1 key2=value2:NoSchedule",
"spec: . template: . spec: tolerations: - key: \"key1\" operator: \"Equal\" value: \"value1\" effect: \"NoSchedule\" - key: \"key1\" operator: \"Equal\" value: \"value1\" effect: \"NoExecute\"",
"spec: . template: . spec: tolerations: - key: node.kubernetes.io/not-ready operator: Exists effect: NoExecute tolerationSeconds: 300 1 - key: node.kubernetes.io/unreachable operator: Exists effect: NoExecute tolerationSeconds: 300",
"spec: . template: . spec: tolerations: - operator: \"Exists\"",
"spec: . template: . spec: tolerations: - key: \"key1\" 1 value: \"value1\" operator: \"Equal\" effect: \"NoExecute\" tolerationSeconds: 3600 2",
"spec: . template: . spec: tolerations: - key: \"key1\" operator: \"Exists\" 1 effect: \"NoExecute\" tolerationSeconds: 3600",
"oc adm taint nodes <node_name> <key>=<value>:<effect>",
"oc adm taint nodes node1 key1=value1:NoExecute",
"apiVersion: v1 kind: Node metadata: annotations: machine.openshift.io/machine: openshift-machine-api/ci-ln-62s7gtb-f76d1-v8jxv-master-0 machineconfiguration.openshift.io/currentConfig: rendered-master-cdc1ab7da414629332cc4c3926e6e59c spec: taints: - effect: NoSchedule key: node-role.kubernetes.io/master",
"spec: . template: . spec: tolerations: - key: \"key1\" 1 value: \"value1\" operator: \"Equal\" effect: \"NoExecute\" tolerationSeconds: 3600 2",
"spec: tolerations: - key: \"key1\" operator: \"Exists\" effect: \"NoExecute\" tolerationSeconds: 3600",
"oc edit machineset <machineset>",
"spec: . template: . spec: taints: - effect: NoExecute key: key1 value: value1 .",
"oc scale --replicas=0 machineset <machineset> -n openshift-machine-api",
"oc scale --replicas=2 machineset <machineset> -n openshift-machine-api",
"oc adm taint nodes node1 dedicated=groupName:NoSchedule",
"spec: tolerations: - key: \"disktype\" value: \"ssd\" operator: \"Equal\" effect: \"NoSchedule\" tolerationSeconds: 3600",
"oc adm taint nodes <node-name> disktype=ssd:NoSchedule",
"oc adm taint nodes <node-name> disktype=ssd:PreferNoSchedule",
"oc adm taint nodes <node-name> <key>-",
"oc adm taint nodes ip-10-0-132-248.ec2.internal key1-",
"node/ip-10-0-132-248.ec2.internal untainted",
"spec: tolerations: - key: \"key2\" operator: \"Exists\" effect: \"NoExecute\" tolerationSeconds: 3600",
"oc edit KubeletConfig cpumanager-enabled",
"apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: cpumanager-enabled spec: machineConfigPoolSelector: matchLabels: custom-kubelet: cpumanager-enabled kubeletConfig: cpuManagerPolicy: static 1 cpuManagerReconcilePeriod: 5s topologyManagerPolicy: single-numa-node 2",
"spec: containers: - name: nginx image: nginx",
"spec: containers: - name: nginx image: nginx resources: limits: memory: \"200Mi\" requests: memory: \"100Mi\"",
"spec: containers: - name: nginx image: nginx resources: limits: memory: \"200Mi\" cpu: \"2\" example.com/device: \"1\" requests: memory: \"200Mi\" cpu: \"2\" example.com/device: \"1\"",
"apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: name: cluster 1 spec: podResourceOverride: spec: memoryRequestToLimitPercent: 50 2 cpuRequestToLimitPercent: 25 3 limitCPUToMemoryPercent: 200 4",
"apiVersion: v1 kind: Namespace metadata: . labels: clusterresourceoverrides.admission.autoscaling.openshift.io/enabled: \"true\" .",
"apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: name: cluster 1 spec: podResourceOverride: spec: memoryRequestToLimitPercent: 50 2 cpuRequestToLimitPercent: 25 3 limitCPUToMemoryPercent: 200 4",
"apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | {\"apiVersion\":\"operator.autoscaling.openshift.io/v1\",\"kind\":\"ClusterResourceOverride\",\"metadata\":{\"annotations\":{},\"name\":\"cluster\"},\"spec\":{\"podResourceOverride\":{\"spec\":{\"cpuRequestToLimitPercent\":25,\"limitCPUToMemoryPercent\":200,\"memoryRequestToLimitPercent\":50}}}} creationTimestamp: \"2019-12-18T22:35:02Z\" generation: 1 name: cluster resourceVersion: \"127622\" selfLink: /apis/operator.autoscaling.openshift.io/v1/clusterresourceoverrides/cluster uid: 978fc959-1717-4bd1-97d0-ae00ee111e8d spec: podResourceOverride: spec: cpuRequestToLimitPercent: 25 limitCPUToMemoryPercent: 200 memoryRequestToLimitPercent: 50 status: . mutatingWebhookConfigurationRef: 1 apiVersion: admissionregistration.k8s.io/v1beta1 kind: MutatingWebhookConfiguration name: clusterresourceoverrides.admission.autoscaling.openshift.io resourceVersion: \"127621\" uid: 98b3b8ae-d5ce-462b-8ab5-a729ea8f38f3 .",
"apiVersion: v1 kind: Namespace metadata: name: clusterresourceoverride-operator",
"oc create -f <file-name>.yaml",
"oc create -f cro-namespace.yaml",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: clusterresourceoverride-operator namespace: clusterresourceoverride-operator spec: targetNamespaces: - clusterresourceoverride-operator",
"oc create -f <file-name>.yaml",
"oc create -f cro-og.yaml",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: clusterresourceoverride namespace: clusterresourceoverride-operator spec: channel: \"4.7\" name: clusterresourceoverride source: redhat-operators sourceNamespace: openshift-marketplace",
"oc create -f <file-name>.yaml",
"oc create -f cro-sub.yaml",
"oc project clusterresourceoverride-operator",
"apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: name: cluster 1 spec: podResourceOverride: spec: memoryRequestToLimitPercent: 50 2 cpuRequestToLimitPercent: 25 3 limitCPUToMemoryPercent: 200 4",
"oc create -f <file-name>.yaml",
"oc create -f cro-cr.yaml",
"oc get clusterresourceoverride cluster -n clusterresourceoverride-operator -o yaml",
"apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | {\"apiVersion\":\"operator.autoscaling.openshift.io/v1\",\"kind\":\"ClusterResourceOverride\",\"metadata\":{\"annotations\":{},\"name\":\"cluster\"},\"spec\":{\"podResourceOverride\":{\"spec\":{\"cpuRequestToLimitPercent\":25,\"limitCPUToMemoryPercent\":200,\"memoryRequestToLimitPercent\":50}}}} creationTimestamp: \"2019-12-18T22:35:02Z\" generation: 1 name: cluster resourceVersion: \"127622\" selfLink: /apis/operator.autoscaling.openshift.io/v1/clusterresourceoverrides/cluster uid: 978fc959-1717-4bd1-97d0-ae00ee111e8d spec: podResourceOverride: spec: cpuRequestToLimitPercent: 25 limitCPUToMemoryPercent: 200 memoryRequestToLimitPercent: 50 status: . mutatingWebhookConfigurationRef: 1 apiVersion: admissionregistration.k8s.io/v1beta1 kind: MutatingWebhookConfiguration name: clusterresourceoverrides.admission.autoscaling.openshift.io resourceVersion: \"127621\" uid: 98b3b8ae-d5ce-462b-8ab5-a729ea8f38f3 .",
"apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: name: cluster spec: podResourceOverride: spec: memoryRequestToLimitPercent: 50 1 cpuRequestToLimitPercent: 25 2 limitCPUToMemoryPercent: 200 3",
"apiVersion: v1 kind: Namespace metadata: labels: clusterresourceoverrides.admission.autoscaling.openshift.io/enabled: \"true\" 1",
"sysctl -a |grep commit",
"vm.overcommit_memory = 1",
"sysctl -a |grep panic",
"vm.panic_on_oom = 0",
"oc describe machineconfigpool <name>",
"oc describe machineconfigpool worker",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: creationTimestamp: 2019-02-08T14:52:39Z generation: 1 labels: custom-kubelet: small-pods 1",
"oc label machineconfigpool worker custom-kubelet=small-pods",
"apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: disable-cpu-units 1 spec: machineConfigPoolSelector: matchLabels: custom-kubelet: small-pods 2 kubeletConfig: cpuCfsQuota: 3 - \"false\"",
"sysctl -w vm.overcommit_memory=0",
"quota.openshift.io/cluster-resource-override-enabled: \"false\"",
"oc create -f <file-name>.yaml",
"oc describe machineconfigpool <name>",
"oc describe machineconfigpool worker",
"Name: worker Namespace: Labels: custom-kubelet=small-pods 1",
"oc label machineconfigpool worker custom-kubelet=small-pods",
"apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: worker-kubeconfig 1 spec: machineConfigPoolSelector: matchLabels: custom-kubelet: small-pods 2 kubeletConfig: evictionSoft: 3 memory.available: \"500Mi\" 4 nodefs.available: \"10%\" nodefs.inodesFree: \"5%\" imagefs.available: \"15%\" imagefs.inodesFree: \"10%\" evictionSoftGracePeriod: 5 memory.available: \"1m30s\" nodefs.available: \"1m30s\" nodefs.inodesFree: \"1m30s\" imagefs.available: \"1m30s\" imagefs.inodesFree: \"1m30s\" evictionHard: 6 memory.available: \"200Mi\" nodefs.available: \"5%\" nodefs.inodesFree: \"4%\" imagefs.available: \"10%\" imagefs.inodesFree: \"5%\" evictionPressureTransitionPeriod: 0s 7 imageMinimumGCAge: 5m 8 imageGCHighThresholdPercent: 80 9 imageGCLowThresholdPercent: 75 10",
"oc create -f <file-name>.yaml",
"oc create -f gc-container.yaml",
"kubeletconfig.machineconfiguration.openshift.io/gc-container created",
"oc get machineconfigpool",
"NAME CONFIG UPDATED UPDATING master rendered-master-546383f80705bd5aeaba93 True False worker rendered-worker-b4c51bb33ccaae6fc4a6a5 False True",
"oc get Tuned/default -o yaml -n openshift-cluster-node-tuning-operator",
"profile: - name: tuned_profile_1 data: | # Tuned profile specification [main] summary=Description of tuned_profile_1 profile [sysctl] net.ipv4.ip_forward=1 # ... other sysctl's or other Tuned daemon plugins supported by the containerized Tuned - name: tuned_profile_n data: | # Tuned profile specification [main] summary=Description of tuned_profile_n profile # tuned_profile_n profile settings",
"recommend: <recommend-item-1> <recommend-item-n>",
"- machineConfigLabels: 1 <mcLabels> 2 match: 3 <match> 4 priority: <priority> 5 profile: <tuned_profile_name> 6 operand: 7 debug: <bool> 8",
"- label: <label_name> 1 value: <label_value> 2 type: <label_type> 3 <match> 4",
"- match: - label: tuned.openshift.io/elasticsearch match: - label: node-role.kubernetes.io/master - label: node-role.kubernetes.io/infra type: pod priority: 10 profile: openshift-control-plane-es - match: - label: node-role.kubernetes.io/master - label: node-role.kubernetes.io/infra priority: 20 profile: openshift-control-plane - priority: 30 profile: openshift-node",
"apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: openshift-node-custom namespace: openshift-cluster-node-tuning-operator spec: profile: - data: | [main] summary=Custom OpenShift node profile with an additional kernel parameter include=openshift-node [bootloader] cmdline_openshift_node_custom=+skew_tick=1 name: openshift-node-custom recommend: - machineConfigLabels: machineconfiguration.openshift.io/role: \"worker-custom\" priority: 20 profile: openshift-node-custom",
"apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: default namespace: openshift-cluster-node-tuning-operator spec: profile: - name: \"openshift\" data: | [main] summary=Optimize systems running OpenShift (parent profile) include=USD{f:virt_check:virtual-guest:throughput-performance} [selinux] avc_cache_threshold=8192 [net] nf_conntrack_hashsize=131072 [sysctl] net.ipv4.ip_forward=1 kernel.pid_max=>4194304 net.netfilter.nf_conntrack_max=1048576 net.ipv4.conf.all.arp_announce=2 net.ipv4.neigh.default.gc_thresh1=8192 net.ipv4.neigh.default.gc_thresh2=32768 net.ipv4.neigh.default.gc_thresh3=65536 net.ipv6.neigh.default.gc_thresh1=8192 net.ipv6.neigh.default.gc_thresh2=32768 net.ipv6.neigh.default.gc_thresh3=65536 vm.max_map_count=262144 [sysfs] /sys/module/nvme_core/parameters/io_timeout=4294967295 /sys/module/nvme_core/parameters/max_retries=10 - name: \"openshift-control-plane\" data: | [main] summary=Optimize systems running OpenShift control plane include=openshift [sysctl] # ktune sysctl settings, maximizing i/o throughput # # Minimal preemption granularity for CPU-bound tasks: # (default: 1 msec# (1 + ilog(ncpus)), units: nanoseconds) kernel.sched_min_granularity_ns=10000000 # The total time the scheduler will consider a migrated process # \"cache hot\" and thus less likely to be re-migrated # (system default is 500000, i.e. 0.5 ms) kernel.sched_migration_cost_ns=5000000 # SCHED_OTHER wake-up granularity. # # Preemption granularity when tasks wake up. Lower the value to # improve wake-up latency and throughput for latency critical tasks. kernel.sched_wakeup_granularity_ns=4000000 - name: \"openshift-node\" data: | [main] summary=Optimize systems running OpenShift nodes include=openshift [sysctl] net.ipv4.tcp_fastopen=3 fs.inotify.max_user_watches=65536 fs.inotify.max_user_instances=8192 recommend: - profile: \"openshift-control-plane\" priority: 30 match: - label: \"node-role.kubernetes.io/master\" - label: \"node-role.kubernetes.io/infra\" - profile: \"openshift-node\" priority: 40",
"oc describe machineconfigpool <name>",
"oc describe machineconfigpool worker",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: creationTimestamp: 2019-02-08T14:52:39Z generation: 1 labels: custom-kubelet: small-pods 1",
"oc label machineconfigpool worker custom-kubelet=small-pods",
"apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: set-max-pods 1 spec: machineConfigPoolSelector: matchLabels: custom-kubelet: small-pods 2 kubeletConfig: podsPerCore: 10 3 maxPods: 250 4",
"oc get machineconfigpools",
"NAME CONFIG UPDATED UPDATING DEGRADED master master-9cc2c72f205e103bb534 False False False worker worker-8cecd1236b33ee3f8a5e False True False",
"oc get machineconfigpools",
"NAME CONFIG UPDATED UPDATING DEGRADED master master-9cc2c72f205e103bb534 False True False worker worker-8cecd1236b33ee3f8a5e True False False"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/post-installation_configuration/post-install-node-tasks |
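A quick way to confirm which Tuned profile the operator selected for each node is to list the Profile objects that the Node Tuning Operator maintains in its namespace. This is a sketch rather than an excerpt from the guide above: it assumes the profiles.tuned.openshift.io resource is available in your cluster version, and the node names reported are environment specific.
oc get profiles.tuned.openshift.io -n openshift-cluster-node-tuning-operator
Nodes labeled node-role.kubernetes.io/master or node-role.kubernetes.io/infra should report openshift-control-plane (or openshift-control-plane-es when a pod carrying the tuned.openshift.io/elasticsearch label runs on them), while the remaining nodes should fall through to the catch-all openshift-node profile.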
18.12.10.10. TCP/UDP/SCTP over IPV6 | 18.12.10.10. TCP/UDP/SCTP over IPV6 Protocol ID: tcp-ipv6, udp-ipv6, sctp-ipv6 The chain parameter is ignored for this type of traffic and should either be omitted or set to root. Table 18.12. TCP, UDP, SCTP over IPv6 protocol types Attribute Name Datatype Definition srcmacaddr MAC_ADDR MAC address of sender srcipaddr IP_ADDR Source IP address srcipmask IP_MASK Mask applied to source IP address dstipaddr IP_ADDR Destination IP address dstipmask IP_MASK Mask applied to destination IP address srcipfrom IP_ADDR Start of range of source IP address srcipto IP_ADDR End of range of source IP address dstipfrom IP_ADDR Start of range of destination IP address dstipto IP_ADDR End of range of destination IP address srcportstart UINT16 Start of range of valid source ports srcportend UINT16 End of range of valid source ports dstportstart UINT16 Start of range of valid destination ports dstportend UINT16 End of range of valid destination ports comment STRING text string up to 256 characters state STRING comma separated list of NEW,ESTABLISHED,RELATED,INVALID or NONE ipset STRING The name of an IPSet managed outside of libvirt ipsetflags IPSETFLAGS flags for the IPSet; requires ipset attribute | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/sub-sub-sect-tcp-utp-sctp-over-ipv6 |
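The attributes in the table above are used as XML attributes on a rule's protocol element inside a network filter definition. The fragment below is only an illustrative sketch, not an example from this guide: the filter name, the IPv6 address, the prefix length, and the port values are placeholder assumptions chosen to show the tcp-ipv6 protocol ID together with a few of the listed attributes.
<filter name='example-allow-ssh-ipv6' chain='root'>
  <rule action='accept' direction='in' priority='500'>
    <!-- placeholder values: accept TCP over IPv6 to port 22 from a single source host -->
    <tcp-ipv6 srcipaddr='2001:db8::10' srcipmask='128' dstportstart='22' dstportend='22' comment='placeholder: allow SSH from a single IPv6 host'/>
  </rule>
</filter>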
Chapter 2. New features and enhancements | Chapter 2. New features and enhancements A list of all major enhancements and new features introduced in this release of Red Hat Trusted Artifact Signer (RHTAS). The features and enhancements added by this release are: Log sharding for Rekor If left alone, Rekor's log will grow indefinitely, which can impact overall performance. With this release, we added log sharding for Rekor to help manage scaling and minimize any potential performance degradation from having large logs. You can configure sharding by directly modifying the Rekor custom resource (CR). For more information about how to configure log sharding for Rekor's signer key rotation, see the RHTAS Administration Guide . Log sharding for CT log If left alone, Certificate Transparency (CT) log will grow indefinitely, which can impact overall performance. With this release, we added log sharding for CT log to help manage scaling and minimize any potential performance degradation from having large logs. You can configure sharding by directly modifying the CT log custom resource (CR). For more information about how to configure log sharding for CT log's signer key rotation, see the RHTAS Administration Guide . Deploy Trillian independently With this release, you can deploy the Trillian service independently of all other RHTAS components. You can now deploy an independent version of Trillian that uses the RHTAS operator. Deploy Rekor independently from Trillian In earlier releases of RHTAS, Rekor required the Trillian service, along with the Trillian database, to be running in the same namespace as Rekor. Because of this dependency, deploying Rekor in complex or segmented environments was more challenging. With this release, we made Rekor independent from Trillian, giving users the flexibility to implement Trillian in a way that is more adaptable to complex infrastructure configurations. Because of this new feature, we extended the API, which allows you to specify connection information for the Trillian service. You can specify the Trillian connection information by providing the appropriate values to the spec.trillian.host and spec.trillian.port options in the Securesign resource. Proxy support for the Trusted Artifact Signer operator Connections are often established by using proxies in OpenShift environments, and this might be a hard requirement for some organizations. With this release, we added support for configured proxies in OpenShift environments to the RHTAS operator and operands. Trusted Timestamp Authority support added By default, the timestamp comes from Rekor's own internal clock, which is not externally verifiable or immutable. Using signed timestamps from trusted Timestamp Authorities (TSAs) mitigates the risk of Rekor's internal clock being tampered with. With this release, you can configure a trusted TSA instead of using Rekor's internal clock. Support for custom Rekor UI route for Ingress sharding With this release, you can set a custom route for the Rekor user interface (UI) to work with OpenShift's Ingress Controller sharding feature. You can configure this by modifying the externalAccess section of ingress and route resources, adding the type: dev label under the routeSelectorLabels section. For example: ... externalAccess: enabled: true routeSelectorLabels: type: dev ... This allows the Ingress Controller to identify these resources for specific preset routes, in this case the dev route.
The operator supports custom CA bundles with certificate injection With this release, the RHTAS operator now supports custom Certificate Authority (CA) bundles by using certificate injection. To ensure secure communications with OpenShift Proxy or other services needing to trust a specific CA, the RHTAS operator automatically injects trusted CA bundles into its managed services. These managed services are: Trillian, Fulcio, Rekor, Certificate Transparency (CT) log, and Timestamp Authority (TSA). You can trust additional CA bundles by referencing the config map containing the custom CA bundle in one of two ways: In the relevant custom resource (CR) file, under the metadata.annotations section, add rhtas.redhat.com/trusted-ca . Configure a custom CA bundle directly in the CR file by adding the trustedCA field in the spec . Configure a CT log prefix for Fulcio With this release, we added the ability to configure a Certificate Transparency (CT) log prefix for Fulcio. In earlier releases, we hard-coded the prefix to trusted-artifact-signer . Making the prefix configurable gives you more flexibility and allows you to target specific CT logs within the CT service. The Fulcio custom resource definition (CRD) has a new spec.ctlog.prefix field, where you can set the prefix. Enterprise Contract can initialize the TUF root With this release, you can now use the ec sigstore initialize --root USD{TUF_URL} command to initialize Enterprise Contract with The Update Framework (TUF) root deployed by RHTAS. Doing this initialization stores the trusted metadata locally in USDHOME/.sigstore/root . Support for excluding rules for specific images in an Enterprise Contract policy With this release, you can add an exclude directive in the volatileConfig section of an Enterprise Contract (EC) policy for a specific image digest. You can specify an image digest by using the imageRef key, which limits the policy exception to one specific image. Support for organizational level OCI registry authentication With this release, Enterprise Contract (EC) supports Open Container Initiative (OCI) registry credentials specified by using a subpath of the full repository path. If many matching credentials are available, then it tries them in order of specificity. For more information, see the authentication against container image registries specification . Improved the auditing of Enterprise Contract policy sources With this release, we log an entry for a Git SHA or a bundle image digest for each policy source. This allows for better auditing of Enterprise Contract (EC) results, showing you the exact version of the policies and policy data used by EC, allowing for reproducibility. Displaying plain text as the default for Enterprise Contract reports With this release, we changed the default output format for the Enterprise Contract (EC) report to plain text. The plain text format makes reading the EC results report much easier. | [
"externalAccess: enabled: true routeSelectorLabels: type: dev"
] | https://docs.redhat.com/en/documentation/red_hat_trusted_artifact_signer/1/html/release_notes/enhancements |
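As noted above, the Securesign resource now exposes spec.trillian.host and spec.trillian.port for pointing Rekor at an externally managed Trillian instance. The fragment below is a minimal sketch of that stanza only; the hostname and port shown are placeholder assumptions, not defaults taken from the product documentation.
spec:
  trillian:
    host: trillian-logserver.trillian-system.svc.cluster.local   # assumed service hostname (placeholder)
    port: 8091                                                    # assumed gRPC port (placeholder)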
A.18. Troubleshooting SR-IOV | A.18. Troubleshooting SR-IOV This section contains solutions for problems that may affect SR-IOV. If you need additional help, see Section 16.2.4, "Setting PCI device assignment from a pool of SR-IOV virtual functions" . Error starting the guest When starting a configured virtual machine, an error occurs as follows: This error is often caused by a device that is already assigned to another guest or to the host itself. Error migrating, saving, or dumping the guest Attempts to migrate and dump the virtual machine cause an error similar to the following: Because device assignment uses hardware on the specific host where the virtual machine was started, guest migration and save are not supported when device assignment is in use. Currently, the same limitation also applies to core-dumping a guest; this may change in the future. It is important to note that QEMU does not currently support migrate, save, and dump operations on guest virtual machines with PCI devices attached, unless the --memory-only option is specified. Currently, it can only support these actions with USB devices. Work is currently being done to improve this in the future. | [
"virsh start test error: Failed to start domain test error: Requested operation is not valid: PCI device 0000:03:10.1 is in use by domain rhel7",
"virsh dump rhel7/tmp/rhel7.dump error: Failed to core dump domain rhel7 to /tmp/rhel7.dump error: internal error: unable to execute QEMU command 'migrate': State blocked by non-migratable device '0000:00:03.0/vfio-pci'"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_deployment_and_administration_guide/sect-sr_iov-troubleshooting_sr_iov |
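As the note above states, a core dump of a guest with an assigned PCI device only works when the --memory-only option is passed. A hedged example of that invocation, reusing the guest and file names from the error output above:
virsh dump rhel7 /tmp/rhel7.dump --memory-only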
8.6. Configuring Redundant Ring Protocol | 8.6. Configuring Redundant Ring Protocol As of Red Hat Enterprise Linux 6.4, the Red Hat High Availability Add-On supports the configuration of redundant ring protocol. When configuring a system to use redundant ring protocol, you must take the following considerations into account: Do not specify more than two rings. Each ring must use the same protocol; do not mix IPv4 and IPv6. If necessary, you can manually specify a multicast address for the second ring. If you specify a multicast address for the second ring, either the alternate multicast address or the alternate port must be different from the multicast address for the first ring. If you do not specify an alternate multicast address, the system will automatically use a different multicast address for the second ring. If you specify an alternate port, the port numbers of the first ring and the second ring must differ by at least two, since the system itself uses port and port-1 to perform operations. Do not use two different interfaces on the same subnet. In general, it is a good practice to configure redundant ring protocol on two different NICs and two different switches, in case one NIC or one switch fails. Do not use the ifdown command or the service network stop command to simulate network failure. This destroys the whole cluster and requires that you restart all of the nodes in the cluster to recover. Do not use NetworkManager , since it will execute the ifdown command if the cable is unplugged. When one NIC of a node fails, the entire ring is marked as failed. No manual intervention is required to recover a failed ring. To recover, you only need to fix the original reason for the failure, such as a failed NIC or switch. To specify a second network interface to use for redundant ring protocol, you add an altname component to the clusternode section of the cluster.conf configuration file. When specifying altname , you must specify a name attribute to indicate a second host name or IP address for the node. The following example specifies clusternet-node1-eth2 as the alternate name for cluster node clusternet-node1-eth1 . The altname section within the clusternode block is not position dependent. It can come before or after the fence section. Do not specify more than one altname component for a cluster node or the system will fail to start. Optionally, you can manually specify a multicast address, a port, and a TTL for the second ring by including an altmulticast component in the cman section of the cluster.conf configuration file. The altmulticast component accepts an addr , a port , and a ttl parameter. The following example shows the cman section of a cluster configuration file that sets a multicast address, port, and TTL for the second ring. | [
"<cluster name=\"mycluster\" config_version=\"3\" > <logging debug=\"on\"/> <clusternodes> <clusternode name=\"clusternet-node1-eth1\" votes=\"1\" nodeid=\"1\"> <fence> <method name=\"single\"> <device name=\"xvm\" domain=\"clusternet-node1\"/> </method> </fence> <altname name=\"clusternet-node1-eth2\"/> </clusternode>",
"<cman> <multicast addr=\"239.192.99.73\" port=\"666\" ttl=\"2\"/> <altmulticast addr=\"239.192.99.88\" port=\"888\" ttl=\"3\"/> </cman>"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/cluster_administration/s1-config-rrp-cli-ca |
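After the altname (and optional altmulticast) entries are in place and the cluster has been restarted, the health of both rings can be checked from any node with the standard corosync tooling shipped with the High Availability Add-On; the node IDs and ring addresses in the output are specific to each cluster.
corosync-cfgtool -s
A healthy configuration prints a status block for ring 0 and ring 1, each ending with a line such as "ring N active with no faults".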
Chapter 8. How to use dedicated worker nodes for Red Hat OpenShift Data Foundation | Chapter 8. How to use dedicated worker nodes for Red Hat OpenShift Data Foundation Any Red Hat OpenShift Container Platform subscription requires an OpenShift Data Foundation subscription. However, you can save on the OpenShift Container Platform subscription costs if you are using infrastructure nodes to schedule OpenShift Data Foundation resources. It is important to maintain consistency across environments with or without Machine API support. Because of this, it is highly recommended in all cases to have a special category of nodes labeled as either worker or infra or have both roles. See the Section 8.3, "Manual creation of infrastructure nodes" section for more information. 8.1. Anatomy of an Infrastructure node Infrastructure nodes for use with OpenShift Data Foundation have a few attributes. The infra node-role label is required to ensure the node does not consume RHOCP entitlements. The infra node-role label is responsible for ensuring only OpenShift Data Foundation entitlements are necessary for the nodes running OpenShift Data Foundation. Labeled with node-role.kubernetes.io/infra Adding an OpenShift Data Foundation taint with a NoSchedule effect is also required so that the infra node will only schedule OpenShift Data Foundation resources. Tainted with node.ocs.openshift.io/storage="true" The label identifies the RHOCP node as an infra node so that RHOCP subscription cost is not applied. The taint prevents non OpenShift Data Foundation resources to be scheduled on the tainted nodes. Note Adding storage taint on nodes might require toleration handling for the other daemonset pods such as openshift-dns daemonset . For information about how to manage the tolerations, see Knowledgebase article: https://access.redhat.com/solutions/6592171 . Example of the taint and labels required on infrastructure node that will be used to run OpenShift Data Foundation services: 8.2. Machine sets for creating Infrastructure nodes If the Machine API is supported in the environment, then labels should be added to the templates for the Machine Sets that will be provisioning the infrastructure nodes. Avoid the anti-pattern of adding labels manually to nodes created by the machine API. Doing so is analogous to adding labels to pods created by a deployment. In both cases, when the pod/node fails, the replacement pod/node will not have the appropriate labels. Note In EC2 environments, you will need three machine sets, each configured to provision infrastructure nodes in a distinct availability zone (such as us-east-2a, us-east-2b, us-east-2c). Currently, OpenShift Data Foundation does not support deploying in more than three availability zones. The following Machine Set template example creates nodes with the appropriate taint and labels required for infrastructure nodes. This will be used to run OpenShift Data Foundation services. Important If you add a taint to the infrastructure nodes, you also need to add tolerations to the taint for other workloads, for example, the fluentd pods. For more information, see the Red Hat Knowledgebase solution Infrastructure Nodes in OpenShift 4 . 8.3. Manual creation of infrastructure nodes Only when the Machine API is not supported in the environment should labels be directly applied to nodes. Manual creation requires that at least 3 RHOCP worker nodes are available to schedule OpenShift Data Foundation services, and that these nodes have sufficient CPU and memory resources. 
To avoid the RHOCP subscription cost, the following is required: Adding a NoSchedule OpenShift Data Foundation taint is also required so that the infra node will only schedule OpenShift Data Foundation resources and repel any other non-OpenShift Data Foundation workloads. Warning Do not remove the node-role node-role.kubernetes.io/worker="" The removal of the node-role.kubernetes.io/worker="" can cause issues unless changes are made both to the OpenShift scheduler and to MachineConfig resources. If already removed, it should be added again to each infra node. Adding node-role node-role.kubernetes.io/infra="" and OpenShift Data Foundation taint is sufficient to conform to entitlement exemption requirements. 8.4. Taint a node from the user interface This section explains the procedure to taint nodes after the OpenShift Data Foundation deployment. Procedure In the OpenShift Web Console, click Compute Nodes , and then select the node that has to be tainted. In the Details page, click on Edit taints . Enter the values in the Key <nodes.openshift.ocs.io/storage>, Value <true> and in the Effect <Noschedule> field. Click Save. Verification steps Follow these steps to verify that the node has been tainted successfully: Navigate to Compute Nodes . Select the node to verify its status, and then click on the YAML tab. In the specs section, check the values of the following parameters: Additional resources For more information, refer to Creating the OpenShift Data Foundation cluster on VMware vSphere . | [
"spec: taints: - effect: NoSchedule key: node.ocs.openshift.io/storage value: \"true\" metadata: creationTimestamp: null labels: node-role.kubernetes.io/worker: \"\" node-role.kubernetes.io/infra: \"\" cluster.ocs.openshift.io/openshift-storage: \"\"",
"template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: kb-s25vf machine.openshift.io/cluster-api-machine-role: worker machine.openshift.io/cluster-api-machine-type: worker machine.openshift.io/cluster-api-machineset: kb-s25vf-infra-us-west-2a spec: taints: - effect: NoSchedule key: node.ocs.openshift.io/storage value: \"true\" metadata: creationTimestamp: null labels: node-role.kubernetes.io/infra: \"\" cluster.ocs.openshift.io/openshift-storage: \"\"",
"label node <node> node-role.kubernetes.io/infra=\"\" label node <node> cluster.ocs.openshift.io/openshift-storage=\"\"",
"adm taint node <node> node.ocs.openshift.io/storage=\"true\":NoSchedule",
"Taints: Key: nodes.openshift.ocs.io/storage Value: true Effect: Noschedule"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/deploying_and_managing_openshift_data_foundation_using_red_hat_openstack_platform/how-to-use-dedicated-worker-nodes-for-openshift-data-foundation_osp |
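To confirm that the intended nodes carry both the labels and the storage taint applied above, the label can be used as a selector and the taint read back from the node description. This is a hedged verification sketch; <node_name> is a placeholder for one of your infra nodes.
oc get nodes -l cluster.ocs.openshift.io/openshift-storage -o name
oc describe node <node_name> | grep -A 3 Taints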
Appendix B. Importing Kickstart Repositories | Appendix B. Importing Kickstart Repositories Kickstart repositories are not provided by the Content ISO image. To use Kickstart repositories in your disconnected Satellite, you must download a binary DVD ISO file for the version of Red Hat Enterprise Linux that you want to use and copy the Kickstart files to Satellite. To import Kickstart repositories for Red Hat Enterprise Linux 7, complete Section B.1, "Importing Kickstart Repositories for Red Hat Enterprise Linux 7" . To import Kickstart repositories for Red Hat Enterprise Linux 8, complete Section B.2, "Importing Kickstart Repositories for Red Hat Enterprise Linux 8" . B.1. Importing Kickstart Repositories for Red Hat Enterprise Linux 7 To import Kickstart repositories for Red Hat Enterprise Linux 7, complete the following steps on Satellite. Procedure Navigate to the Red Hat Customer Portal at access.redhat.com and log in. In the upper left of the window, click Downloads . To the right of Red Hat Enterprise Linux 7 , click Versions 7 and below . From the Version list, select the required version of the Red Hat Enterprise Linux 7, for example 7.7. In the Download Red Hat Enterprise Linux window, locate the binary DVD version of the ISO image, for example, Red Hat Enterprise Linux 7.7 Binary DVD , and click Download Now . When the download completes, copy the ISO image to Satellite Server. On Satellite Server, create a mount point and temporarily mount the ISO image at that location: Create Kickstart directories: Copy the kickstart files from the ISO image: Add the following entries to the listing files: To the /var/www/html/pub/sat-import/content/dist/rhel/server/7/listing file, append the version number with a new line. For example, for the RHEL 7.7 ISO, append 7.7 . To the /var/www/html/pub/sat-import/content/dist/rhel/server/7/7.7/listing file, append the architecture with a new line. For example, x86_64 . To the /var/www/html/pub/sat-import/content/dist/rhel/server/7/7.7/x86_64/listing file, append kickstart with a new line. Copy the .treeinfo files from the ISO image: If you do not plan to use the mounted binary DVD ISO image, unmount and remove the directory: In the Satellite web UI, enable the Kickstart repositories. B.2. Importing Kickstart Repositories for Red Hat Enterprise Linux 8 To import Kickstart repositories for Red Hat Enterprise Linux 8, complete the following steps on Satellite. Procedure Navigate to the Red Hat Customer Portal at access.redhat.com and log in. In the upper left of the window, click Downloads . Click Red Hat Enterprise Linux 8 . In the Download Red Hat Enterprise Linux window, locate the binary DVD version of the ISO image, for example, Red Hat Enterprise Linux 8.1 Binary DVD , and click Download Now . When the download completes, copy the ISO image to Satellite Server. On Satellite Server, create a mount point and temporarily mount the ISO image at that location: Create directories for Red Hat Enterprise Linux 8 AppStream and BaseOS Kickstart repositories: Copy the kickstart files from the ISO image: Note that for BaseOS, you must also copy the contents of the /mnt/ iso /images/ directory. Add the following entries to the listing files: To the /var/www/html/pub/sat-import/content/dist/rhel8/8.1/x86_64/appstream/listing file, append kickstart with a new line. 
To the /var/www/html/pub/sat-import/content/dist/rhel8/8.1/x86_64/baseos/listing file, append kickstart with a new line: To the /var/www/html/pub/sat-import/content/dist/rhel8/listing file, append the version number with a new line. For example, for the RHEL 8.1 binary ISO, append 8.1 . Copy the .treeinfo files from the ISO image: Open the /var/www/html/pub/sat-import/content/dist/rhel8/8.1/x86_64/baseos/kickstart/treeinfo file for editing. In the [general] section, make the following changes: Change packagedir = AppStream/Packages to packagedir = Packages Change repository = AppStream to repository = . Change variant = AppStream to variant = BaseOS Change variants = AppStream,BaseOS to variants = BaseOS In the [tree] section, change variants = AppStream,BaseOS to variants = BaseOS . In the [variant-BaseOS] section, make the following changes: Change packages = BaseOS/Packages to packages = Packages Change repository = BaseOS to repository = . Delete the [media] and [variant-AppStream] sections. Save and close the file. Verify that the /var/www/html/pub/sat-import/content/dist/rhel8/8.1/x86_64/baseos/kickstart/treeinfo file has the following format: Open the /var/www/html/pub/sat-import/content/dist/rhel8/8.1/x86_64/appstream/kickstart/treeinfo file for editing. In the [general] section, make the following changes: Change packagedir = AppStream/Packages to packagedir = Packages Change repository = AppStream to repository = . Change variants = AppStream,BaseOS to variants = AppStream In the [tree] section, change variants = AppStream,BaseOS to variants = AppStream In the [variant-AppStream] section, make the following changes: Change packages = AppStream/Packages to packages = Packages Change repository = AppStream to repository = . Delete the following sections from the file: [checksums] , [images-x86_64] , [images-xen] , [media] , [stage2] , [variant-BaseOS] . Save and close the file. Verify that the /var/www/html/pub/sat-import/content/dist/rhel8/8.1/x86_64/appstream/kickstart/treeinfo file has the following format: If you do not plan to use the mounted binary DVD ISO image, unmount and remove the directory: In the Satellite web UI, enable the Kickstart repositories. | [
"mkdir /mnt/ iso mount -o loop rhel-binary-dvd.iso /mnt/ iso",
"mkdir --parents /var/www/html/pub/sat-import/content/dist/rhel/server/7/7.7/x86_64/kickstart/",
"cp -a /mnt/ iso /* /var/www/html/pub/sat-import/content/dist/rhel/server/7/7.7/x86_64/kickstart/",
"cp /mnt/ iso /.treeinfo /var/www/html/pub/sat-import/content/dist/rhel/server/7/7.7/x86_64/kickstart/treeinfo",
"umount /mnt/ iso rmdir /mnt/ iso",
"mkdir /mnt/ iso mount -o loop rhel-binary-dvd.iso /mnt/ iso",
"mkdir --parents /var/www/html/pub/sat-import/content/dist/rhel8/8.1/x86_64/appstream/kickstart mkdir --parents /var/www/html/pub/sat-import/content/dist/rhel8/8.1/x86_64/baseos/kickstart",
"cp -a /mnt/ iso /AppStream/* /var/www/html/pub/sat-import/content/dist/rhel8/8.1/x86_64/appstream/kickstart cp -a /mnt/ iso /BaseOS/* /mnt/ iso /images/ /var/www/html/pub/sat-import/content/dist/rhel8/8.1/x86_64/baseos/kickstart",
"cp /mnt/ iso /.treeinfo /var/www/html/pub/sat-import/content/dist/rhel8/8.1/x86_64/appstream/kickstart/treeinfo cp /mnt/ iso /.treeinfo /var/www/html/pub/sat-import/content/dist/rhel8/8.1/x86_64/baseos/kickstart/treeinfo",
"[checksums] images/efiboot.img = sha256:9ad9beee4c906cd05d227a1be7a499c8d2f20b3891c79831325844c845262bb6 images/install.img = sha256:e246bf4aedfff3bb54ae9012f959597cdab7387aadb3a504f841bdc2c35fe75e images/pxeboot/initrd.img = sha256:a66e3c158f02840b19c372136a522177a2ab4bd91cb7269fb5bfdaaf7452efef images/pxeboot/vmlinuz = sha256:789028335b64ddad343f61f2abfdc9819ed8e9dfad4df43a2694c0a0ba780d16 [general] ; WARNING.0 = This section provides compatibility with pre-productmd treeinfos. ; WARNING.1 = Read productmd documentation for details about new format. arch = x86_64 family = Red Hat Enterprise Linux name = Red Hat Enterprise Linux 8.1.0 packagedir = Packages platforms = x86_64,xen repository = . timestamp = 1571146127 variant = BaseOS variants = BaseOS version = 8.1.0 [header] type = productmd.treeinfo version = 1.2 [images-x86_64] efiboot.img = images/efiboot.img initrd = images/pxeboot/initrd.img kernel = images/pxeboot/vmlinuz [images-xen] initrd = images/pxeboot/initrd.img kernel = images/pxeboot/vmlinuz [release] name = Red Hat Enterprise Linux short = RHEL version = 8.1.0 [stage2] mainimage = images/install.img [tree] arch = x86_64 build_timestamp = 1571146127 platforms = x86_64,xen variants = BaseOS [variant-BaseOS] id = BaseOS name = BaseOS packages = Packages repository = . type = variant uid = BaseOS",
"[general] ; WARNING.0 = This section provides compatibility with pre-productmd treeinfos. ; WARNING.1 = Read productmd documentation for details about new format. arch = x86_64 family = Red Hat Enterprise Linux name = Red Hat Enterprise Linux 8.1.0 packagedir = Packages platforms = x86_64,xen repository = . timestamp = 1571146127 variant = AppStream variants = AppStream version = 8.1.0 [header] type = productmd.treeinfo version = 1.2 [release] name = Red Hat Enterprise Linux short = RHEL version = 8.1.0 [tree] arch = x86_64 build_timestamp = 1571146127 platforms = x86_64,xen variants = AppStream [variant-AppStream] id = AppStream name = AppStream packages = Packages repository = . type = variant uid = AppStream",
"umount /mnt/ iso rmdir /mnt/ iso"
] | https://docs.redhat.com/en/documentation/red_hat_satellite/6.11/html/managing_content/importing_kickstart_repositories_content-management |
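The manual treeinfo edits for the RHEL 8.1 BaseOS kickstart tree described in the procedure above can also be scripted. The following sketch is an addition for illustration and is not part of the original procedure: it assumes GNU sed and the 8.1 directory layout shown above, covers only the BaseOS file, and still leaves the [media] and [variant-AppStream] sections to be deleted by hand, so verify the result against the expected file format before enabling the repositories.

# Hypothetical helper: apply the BaseOS treeinfo substitutions with GNU sed.
TREEINFO=/var/www/html/pub/sat-import/content/dist/rhel8/8.1/x86_64/baseos/kickstart/treeinfo
sed -i \
    -e 's|^packagedir = AppStream/Packages|packagedir = Packages|' \
    -e 's|^repository = AppStream|repository = .|' \
    -e 's|^variant = AppStream|variant = BaseOS|' \
    -e 's|^variants = AppStream,BaseOS|variants = BaseOS|' \
    -e 's|^packages = BaseOS/Packages|packages = Packages|' \
    -e 's|^repository = BaseOS|repository = .|' \
    "$TREEINFO"
# The [media] and [variant-AppStream] sections still have to be removed manually.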
13.6. Welcome Screen and Language Selection | The first screen of the installation program is the Welcome to Red Hat Enterprise Linux screen. Here you select the language that Anaconda will use for the rest of the installation. This selection will also become the default for the installed system, unless changed later. In the left panel, select your language of choice, for example English . Then you can select a locale specific to your region in the right panel, for example English (United States) . Note One language is pre-selected by default at the top of the list. If network access is configured at this point (for example, if you booted from a network server instead of local media), the pre-selected language will be determined based on automatic location detection using the GeoIP module. Alternatively, type your preferred language into the search box as shown below. Once you have made your selection, click the Continue button to proceed to the Installation Summary screen. Figure 13.3. Language Configuration After you click the Continue button, the unsupported hardware dialog may appear. This happens if you are using hardware that the kernel does not support. | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/installation_guide/sect-language-selection-ppc
4.3. Clustering and High Availability | 4.3. Clustering and High Availability luci support for fence_sanlock The luci tool now supports the sanlock fence agent as a Technology Preview. The agent is available in luci's list of agents. Package: luci-0.26.0-48 Recovering a node via a hardware watchdog device New fence_sanlock agent and checkquorum.wdmd, included in Red Hat Enterprise Linux 6.4 as a Technology Preview, provide new mechanisms to trigger the recovery of a node via a hardware watchdog device. Tutorials on how to enable this Technology Preview will be available at https://fedorahosted.org/cluster/wiki/HomePage Note that SELinux in enforcing mode is currently not supported. Package: cluster-3.0.12.1-59 keepalived The keepalived package has been included as a Technology Preview, starting with Red Hat Enterprise Linux 6.4. The keepalived package provides simple and robust facilities for load balancing and high availability. The load-balancing framework relies on the well-known and widely used Linux Virtual Server kernel module, which provides Layer 4 network load balancing. The keepalived daemon implements a set of health checkers that dynamically maintain the load-balanced server pools according to their state. The keepalived daemon also implements the Virtual Router Redundancy Protocol (VRRP), allowing router or director failover to achieve high availability. Package: keepalived-1.2.7-3 HAProxy HAProxy is a stand-alone, layer-7, high-performance network load balancer for TCP and HTTP-based applications, which can perform various types of scheduling based on the content of the HTTP requests. The haproxy package is included as a Technology Preview, starting with Red Hat Enterprise Linux 6.4. Package: haproxy-1.4.24-2 | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.5_technical_notes/clustering_tp
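For readers evaluating the keepalived Technology Preview described above, the following minimal VRRP configuration is an illustrative sketch only and is not taken from the release notes: the interface name, virtual router ID, and virtual IP address are assumptions, and real deployments also define the health checkers mentioned in the text.

# Minimal /etc/keepalived/keepalived.conf sketch; all values below are assumptions.
cat > /etc/keepalived/keepalived.conf <<'EOF'
vrrp_instance VI_1 {
    state MASTER            # use BACKUP with a lower priority on the peer router
    interface eth0          # assumed NIC name
    virtual_router_id 51    # must match on both routers
    priority 100
    advert_int 1
    virtual_ipaddress {
        192.168.1.100       # assumed floating service address
    }
}
EOF
service keepalived restart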
Object Gateway Guide | Object Gateway Guide Red Hat Ceph Storage 8 Deploying, configuring, and administering a Ceph Object Gateway Red Hat Ceph Storage Documentation Team | null | https://docs.redhat.com/en/documentation/red_hat_ceph_storage/8/html/object_gateway_guide/index |
function::kernel_string_quoted_utf16 | function::kernel_string_quoted_utf16 Name function::kernel_string_quoted_utf16 - Quote given kernel UTF-16 string. Synopsis Arguments addr The kernel address to retrieve the string from Description This function combines quoting as per string_quoted and UTF-16 decoding as per kernel_string_utf16 . | [
"kernel_string_quoted_utf16:string(addr:long)"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-kernel-string-quoted-utf16 |
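As a usage illustration that is not part of the reference entry above, kernel_string_quoted_utf16 can be called from any probe handler that has a kernel address pointing at a UTF-16 string. The probe point and the target variable in this sketch are assumptions chosen only to show the call syntax; substitute a real function and pointer from your own kernel.

# Hypothetical example: example_function and $buf are placeholders, not real symbols.
stap -e 'probe kernel.function("example_function") { printf("%s\n", kernel_string_quoted_utf16($buf)) }'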
Chapter 7. The ceph-volume utility | Chapter 7. The ceph-volume utility As a storage administrator, you can prepare, list, create, activate, deactivate, batch, trigger, zap, and migrate Ceph OSDs using the ceph-volume utility. The ceph-volume utility is a single-purpose command-line tool to deploy logical volumes as OSDs. It uses a plugin-type framework to deploy OSDs with different device technologies. The ceph-volume utility follows a workflow similar to that of the ceph-disk utility for deploying OSDs, with a predictable and robust way of preparing, activating, and starting OSDs. Currently, the ceph-volume utility only supports the lvm plugin, with plans to support other technologies in the future. Important The ceph-disk command is deprecated. 7.1. Prerequisites A running Red Hat Ceph Storage cluster. 7.2. Ceph volume lvm plugin By making use of LVM tags, the lvm subcommand is able to store and later re-discover and query devices associated with OSDs so that they can be activated. This includes support for lvm-based technologies like dm-cache as well. When using ceph-volume , the use of dm-cache is transparent, and treats dm-cache like a logical volume. The performance gains and losses when using dm-cache will depend on the specific workload. Generally, random and sequential reads will see an increase in performance at smaller block sizes, while random and sequential writes will see a decrease in performance at larger block sizes. To use the LVM plugin, add lvm as a subcommand to the ceph-volume command within the cephadm shell: Following are the lvm subcommands: prepare - Format an LVM device and associate it with an OSD. activate - Discover and mount the LVM device associated with an OSD ID and start the Ceph OSD. list - List logical volumes and devices associated with Ceph. batch - Automatically size devices for multi-OSD provisioning with minimal interaction. deactivate - Deactivate OSDs. create - Create a new OSD from an LVM device. trigger - A systemd helper to activate an OSD. zap - Removes all data and filesystems from a logical volume or partition. migrate - Migrate BlueFS data from one LVM device to another. new-wal - Allocate a new WAL volume for the OSD at the specified logical volume. new-db - Allocate a new DB volume for the OSD at the specified logical volume. Note Using the create subcommand combines the prepare and activate subcommands into one subcommand. Additional Resources See the create subcommand section for more details. 7.3. Why does ceph-volume replace ceph-disk ? Previous versions of Red Hat Ceph Storage used the ceph-disk utility to prepare, activate, and create OSDs. Starting with Red Hat Ceph Storage 5, ceph-disk is replaced by the ceph-volume utility, which aims to be a single-purpose command-line tool to deploy logical volumes as OSDs, while maintaining a similar API to ceph-disk when preparing, activating, and creating OSDs. How does ceph-volume work? The ceph-volume utility is a modular tool that currently supports two ways of provisioning hardware devices: legacy ceph-disk devices and LVM (Logical Volume Manager) devices. The ceph-volume lvm command uses LVM tags to store information about devices specific to Ceph and their relationship with OSDs. It uses these tags to later re-discover and query devices associated with OSDs so that it can activate them. It supports technologies based on LVM and dm-cache as well. The ceph-volume utility uses dm-cache transparently and treats it as a logical volume.
You might consider the performance gains and losses when using dm-cache , depending on the specific workload you are handling. Generally, the performance of random and sequential read operations increases at smaller block sizes, while the performance of random and sequential write operations decreases at larger block sizes. Using ceph-volume does not introduce any significant performance penalties. Important The ceph-disk utility is deprecated. Note The ceph-volume simple command can handle legacy ceph-disk devices, if these devices are still in use. How does ceph-disk work? The ceph-disk utility was required to support many different types of init systems, such as upstart or sysvinit , while being able to discover devices. For this reason, ceph-disk concentrates only on GUID Partition Table (GPT) partitions, specifically on GPT GUIDs that label devices in a unique way to answer questions like: Is this device a journal ? Is this device an encrypted data partition? Was the device left partially prepared? To solve these questions, ceph-disk uses UDEV rules to match the GUIDs. What are the disadvantages of using ceph-disk ? Using the UDEV rules to call ceph-disk can lead to a back-and-forth between the ceph-disk systemd unit and the ceph-disk executable. The process is very unreliable and time-consuming and can cause OSDs to not come up at all during the boot process of a node. Moreover, it is hard to debug or even replicate these problems, given the asynchronous behavior of UDEV. Because ceph-disk works with GPT partitions exclusively, it cannot support other technologies, such as Logical Volume Manager (LVM) volumes, or similar device mapper devices. To ensure the GPT partitions work correctly with the device discovery workflow, ceph-disk requires a large number of special flags to be used. In addition, these partitions require devices to be exclusively owned by Ceph. 7.4. Preparing Ceph OSDs using ceph-volume The prepare subcommand prepares an OSD back-end object store and consumes logical volumes (LV) for both the OSD data and journal. It does not modify the logical volumes, except for adding some extra metadata tags using LVM. These tags make volumes easier to discover, and they also identify the volumes as part of the Ceph Storage Cluster and the roles of those volumes in the storage cluster. The BlueStore OSD backend supports the following configurations: A block device, a block.wal device, and a block.db device A block device and a block.wal device A block device and a block.db device A single block device The prepare subcommand accepts a whole device or partition, or a logical volume for block . Prerequisites Root-level access to the OSD nodes. Optionally, create logical volumes. If you provide a path to a physical device, the subcommand turns the device into a logical volume. This approach is simpler, but you cannot configure or change the way the logical volume is created. Procedure Extract the Ceph keyring: Syntax Example Prepare the LVM volumes: Syntax Example Optionally, if you want to use a separate device for RocksDB, specify the --block.db and --block.wal options: Syntax Example Optionally, to encrypt data, use the --dmcrypt flag: Syntax Example Additional Resources See the Activating Ceph OSDs using `ceph-volume` section in the Red Hat Ceph Storage Administration Guide for more details. See the Creating Ceph OSDs using `ceph-volume` section in the Red Hat Ceph Storage Administration Guide for more details. 7.5.
Listing devices using ceph-volume You can use the ceph-volume lvm list subcommand to list logical volumes and devices associated with a Ceph cluster, as long as they contain enough metadata to allow for that discovery. The output is grouped by the OSD ID associated with the devices. For logical volumes, the devices key is populated with the physical devices associated with the logical volume. In some cases, the output of the ceph -s command shows the following error message: In such cases, you can list the devices with ceph device ls-lights command which gives the details about the lights on the devices. Based on the information, you can turn off the lights on the devices. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the Ceph OSD node. Procedure List the devices in the Ceph cluster: Example Optional: List the devices in the storage cluster with the lights: Example Optional: Turn off the lights on the device: Syntax Example 7.6. Activating Ceph OSDs using ceph-volume The activation process enables a systemd unit at boot time, which allows the correct OSD identifier and its UUID to be enabled and mounted. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the Ceph OSD node. Ceph OSDs prepared by the ceph-volume utility. Procedure Get the OSD ID and OSD FSID from an OSD node: Activate the OSD: Syntax Example To activate all OSDs that are prepared for activation, use the --all option: Example Optionally, you can use the trigger subcommand. This command cannot be used directly, and it is used by systemd so that it proxies input to ceph-volume lvm activate . This parses the metadata coming from systemd and startup, detecting the UUID and ID associated with an OSD. Syntax Here the SYSTEMD_DATA is in OSD_ID - OSD_FSID format. Example Additional Resources See the Preparing Ceph OSDs using `ceph-volume` section in the Red Hat Ceph Storage Administration Guide for more details. See the Creating Ceph OSDs using `ceph-volume` section in the Red Hat Ceph Storage Administration Guide for more details. 7.7. Deactivating Ceph OSDs using ceph-volume You can deactivate the Ceph OSDs using the ceph-volume lvm subcommand. This subcommand removes the volume groups and the logical volume. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the Ceph OSD node. The Ceph OSDs are activated using the ceph-volume utility. Procedure Get the OSD ID from the OSD node: Deactivate the OSD: Syntax Example Additional Resources See the Activating Ceph OSDs using `ceph-volume` section in the Red Hat Ceph Storage Administration Guide for more details. See the Preparing Ceph OSDs using `ceph-volume` section in the Red Hat Ceph Storage Administration Guide for more details. See the Creating Ceph OSDs using `ceph-volume` section in the Red Hat Ceph Storage Administration Guide for more details. 7.8. Creating Ceph OSDs using ceph-volume The create subcommand calls the prepare subcommand, and then calls the activate subcommand. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the Ceph OSD nodes. Note If you prefer to have more control over the creation process, you can use the prepare and activate subcommands separately to create the OSD, instead of using create . You can use the two subcommands to gradually introduce new OSDs into a storage cluster, while avoiding having to rebalance large amounts of data. 
Both approaches work the same way, except that using the create subcommand causes the OSD to become up and in immediately after completion. Procedure To create a new OSD: Syntax Example Additional Resources See the Preparing Ceph OSDs using `ceph-volume` section in the Red Hat Ceph Storage Administration Guide for more details. See the Activating Ceph OSDs using `ceph-volume` section in the Red Hat Ceph Storage Administration Guide for more details. 7.9. Migrating BlueFS data You can migrate the BlueStore file system (BlueFS) data, that is, the RocksDB data, from the source volume to the target volume using the migrate LVM subcommand. The source volume, except the main one, is removed on success. LVM volumes are primarily for the target only. The new volumes are attached to the OSD, replacing one of the source drives. Following are the placement rules for the LVM volumes: If the source list has a DB or WAL volume, then the target device replaces it. If the source list has only the slow volume, then explicit allocation using the new-db or new-wal command is needed. The new-db and new-wal commands attach the given logical volume to the given OSD as a DB or a WAL volume respectively. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the Ceph OSD node. Ceph OSDs prepared by the ceph-volume utility. Volume groups and logical volumes are created. Procedure Log in to the cephadm shell: Example Stop the OSD to which you have to add the DB or the WAL device: Example Mount the new devices to the container: Example Attach the given logical volume to the OSD as a DB/WAL device: Note This command fails if the OSD has an attached DB. Syntax Example You can migrate BlueFS data in the following ways: Move BlueFS data from the main device to an LV that is already attached as a DB: Syntax Example Move BlueFS data from a shared main device to an LV that will be attached as a new DB: Syntax Example Move BlueFS data from the DB device to a new LV, and replace the DB device: Syntax Example Move BlueFS data from the main and DB devices to a new LV, and replace the DB device: Syntax Example Move BlueFS data from the main, DB, and WAL devices to a new LV, remove the WAL device, and replace the DB device: Syntax Example Move BlueFS data from the main, DB, and WAL devices to the main device, and remove the WAL and DB devices: Syntax Example 7.10. Expanding BlueFS DB device You can expand the storage of the BlueStore file system (BlueFS) data, that is, the RocksDB data, of ceph-volume created OSDs with the ceph-bluestore-tool utility. Prerequisites A running Red Hat Ceph Storage cluster. Ceph OSDs are prepared by the ceph-volume utility. Volume groups and logical volumes are created. Note Run these steps on the host where the OSD is deployed. Procedure Optional: Inside the cephadm shell, list the devices in the Red Hat Ceph Storage cluster. Example Get the volume group information: Example Stop the Ceph OSD service: Example Resize, shrink, and expand the logical volumes: Example Launch the cephadm shell: Syntax Example The ceph-bluestore-tool needs to access the BlueStore data from within the cephadm shell container, so it must be bind-mounted. Use the -m option to make the BlueStore data available. Check the size of the RocksDB before expansion: Syntax Example Expand the BlueStore device: Syntax Example Verify that the block.db is expanded: Syntax Example Exit the shell and restart the OSD: Example 7.11. Using batch mode with ceph-volume The batch subcommand automates the creation of multiple OSDs when single devices are provided.
The ceph-volume command decides the best method to use to create the OSDs, based on drive type. Ceph OSD optimization depends on the available devices: If all devices are traditional hard drives, batch creates one OSD per device. If all devices are solid state drives, batch creates two OSDs per device. If there is a mix of traditional hard drives and solid state drives, batch uses the traditional hard drives for data, and creates the largest possible journal ( block.db ) on the solid state drive. Note The batch subcommand does not support the creation of a separate logical volume for the write-ahead-log ( block.wal ) device. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the Ceph OSD nodes. Procedure To create OSDs on several drives: Syntax Example Additional Resources See the Creating Ceph OSDs using `ceph-volume` section in the Red Hat Ceph Storage Administration Guide for more details. 7.12. Zapping data using ceph-volume The zap subcommand removes all data and filesystems from a logical volume or partition. You can use the zap subcommand to zap logical volumes, partitions, or raw devices that are used by Ceph OSDs for reuse. Any filesystems present on the given logical volume or partition are removed and all data is purged. Optionally, you can use the --destroy flag for complete removal of a logical volume, partition, or the physical device. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the Ceph OSD node. Procedure Zap the logical volume: Syntax Example Zap the partition: Syntax Example Zap the raw device: Syntax Example Purge multiple devices with the OSD ID: Syntax Example Note All the relative devices are zapped. Purge OSDs with the FSID: Syntax Example Note All the relative devices are zapped. | [
"ceph-volume lvm",
"ceph auth get client. ID -o ceph.client. ID .keyring",
"ceph auth get client.bootstrap-osd -o /var/lib/ceph/bootstrap-osd/ceph.keyring",
"ceph-volume lvm prepare --bluestore --data VOLUME_GROUP / LOGICAL_VOLUME",
"ceph-volume lvm prepare --bluestore --data example_vg/data_lv",
"ceph-volume lvm prepare --bluestore --block.db BLOCK_DB_DEVICE --block.wal BLOCK_WAL_DEVICE --data DATA_DEVICE",
"ceph-volume lvm prepare --bluestore --block.db /dev/sda --block.wal /dev/sdb --data /dev/sdc",
"ceph-volume lvm prepare --bluestore --dmcrypt --data VOLUME_GROUP / LOGICAL_VOLUME",
"ceph-volume lvm prepare --bluestore --dmcrypt --data example_vg/data_lv",
"1 devices have fault light turned on",
"ceph-volume lvm list ====== osd.6 ======= [block] /dev/ceph-83909f70-95e9-4273-880e-5851612cbe53/osd-block-7ce687d9-07e7-4f8f-a34e-d1b0efb89920 block device /dev/ceph-83909f70-95e9-4273-880e-5851612cbe53/osd-block-7ce687d9-07e7-4f8f-a34e-d1b0efb89920 block uuid 4d7gzX-Nzxp-UUG0-bNxQ-Jacr-l0mP-IPD8cX cephx lockbox secret cluster fsid 1ca9f6a8-d036-11ec-8263-fa163ee967ad cluster name ceph crush device class None encrypted 0 osd fsid 7ce687d9-07e7-4f8f-a34e-d1b0efb89920 osd id 6 osdspec affinity all-available-devices type block vdo 0 devices /dev/vdc",
"ceph device ls-lights { \"fault\": [ \"SEAGATE_ST12000NM002G_ZL2KTGCK0000C149\" ], \"ident\": [] }",
"ceph device light off DEVICE_NAME FAULT/INDENT --force",
"ceph device light off SEAGATE_ST12000NM002G_ZL2KTGCK0000C149 fault --force",
"ceph-volume lvm list",
"ceph-volume lvm activate --bluestore OSD_ID OSD_FSID",
"ceph-volume lvm activate --bluestore 10 7ce687d9-07e7-4f8f-a34e-d1b0efb89920",
"ceph-volume lvm activate --all",
"ceph-volume lvm trigger SYSTEMD_DATA",
"ceph-volume lvm trigger 10 7ce687d9-07e7-4f8f-a34e-d1b0efb89920",
"ceph-volume lvm list",
"ceph-volume lvm deactivate OSD_ID",
"ceph-volume lvm deactivate 16",
"ceph-volume lvm create --bluestore --data VOLUME_GROUP / LOGICAL_VOLUME",
"ceph-volume lvm create --bluestore --data example_vg/data_lv",
"cephadm shell",
"ceph orch daemon stop osd.1",
"cephadm shell --mount /var/lib/ceph/72436d46-ca06-11ec-9809-ac1f6b5635ee/osd.1:/var/lib/ceph/osd/ceph-1",
"ceph-volume lvm new-db --osd-id OSD_ID --osd-fsid OSD_FSID --target VOLUME_GROUP_NAME / LOGICAL_VOLUME_NAME",
"ceph-volume lvm new-db --osd-id 1 --osd-fsid 7ce687d9-07e7-4f8f-a34e-d1b0efb89921 --target vgname/new_db ceph-volume lvm new-wal --osd-id 1 --osd-fsid 7ce687d9-07e7-4f8f-a34e-d1b0efb89921 --target vgname/new_wal",
"ceph-volume lvm migrate --osd-id OSD_ID --osd-fsid OSD_UUID --from data --target VOLUME_GROUP_NAME / LOGICAL_VOLUME_NAME",
"ceph-volume lvm migrate --osd-id 1 --osd-fsid 0263644D-0BF1-4D6D-BC34-28BD98AE3BC8 --from data --target vgname/db",
"ceph-volume lvm migrate --osd-id OSD_ID --osd-fsid OSD_UUID --from data --target VOLUME_GROUP_NAME / LOGICAL_VOLUME_NAME",
"ceph-volume lvm migrate --osd-id 1 --osd-fsid 0263644D-0BF1-4D6D-BC34-28BD98AE3BC8 --from data --target vgname/new_db",
"ceph-volume lvm migrate --osd-id OSD_ID --osd-fsid OSD_UUID --from db --target VOLUME_GROUP_NAME / LOGICAL_VOLUME_NAME",
"ceph-volume lvm migrate --osd-id 1 --osd-fsid 0263644D-0BF1-4D6D-BC34-28BD98AE3BC8 --from db --target vgname/new_db",
"ceph-volume lvm migrate --osd-id OSD_ID --osd-fsid OSD_UUID --from data db --target VOLUME_GROUP_NAME / LOGICAL_VOLUME_NAME",
"ceph-volume lvm migrate --osd-id 1 --osd-fsid 0263644D-0BF1-4D6D-BC34-28BD98AE3BC8 --from data db --target vgname/new_db",
"ceph-volume lvm migrate --osd-id OSD_ID --osd-fsid OSD_UUID --from data db wal --target VOLUME_GROUP_NAME / LOGICAL_VOLUME_NAME",
"ceph-volume lvm migrate --osd-id 1 --osd-fsid 0263644D-0BF1-4D6D-BC34-28BD98AE3BC8 --from data db --target vgname/new_db",
"ceph-volume lvm migrate --osd-id OSD_ID --osd-fsid OSD_UUID --from db wal --target VOLUME_GROUP_NAME / LOGICAL_VOLUME_NAME",
"ceph-volume lvm migrate --osd-id 1 --osd-fsid 0263644D-0BF1-4D6D-BC34-28BD98AE3BC8 --from db wal --target vgname/data",
"ceph-volume lvm list ====== osd.3 ======= [db] /dev/db-test/db1 block device /dev/test/lv1 block uuid N5zoix-FePe-uExe-UngY-D9YG-BMs0-1tTDyB cephx lockbox secret cluster fsid 1a6112da-ed05-11ee-bacd-525400565cda cluster name ceph crush device class db device /dev/db-test/db1 db uuid 1TUaDY-3mEt-fReP-cyB2-JyZ1-oUPa-hKPfo6 encrypted 0 osd fsid 94ff742c-7bfd-4fb5-8dc4-843d10ac6731 osd id 3 osdspec affinity None type db vdo 0 devices /dev/vdh [block] /dev/test/lv1 block device /dev/test/lv1 block uuid N5zoix-FePe-uExe-UngY-D9YG-BMs0-1tTDyB cephx lockbox secret cluster fsid 1a6112da-ed05-11ee-bacd-525400565cda cluster name ceph crush device class db device /dev/db-test/db1 db uuid 1TUaDY-3mEt-fReP-cyB2-JyZ1-oUPa-hKPfo6 encrypted 0 osd fsid 94ff742c-7bfd-4fb5-8dc4-843d10ac6731 osd id 3 osdspec affinity None type block vdo 0 devices /dev/vdg",
"vgs VG #PV #LV #SN Attr VSize VFree db-test 1 1 0 wz--n- <200.00g <160.00g test 1 1 0 wz--n- <200.00g <170.00g",
"systemctl stop [email protected]",
"lvresize -l 100%FREE /dev/db-test/db1 Size of logical volume db-test/db1 changed from 40.00 GiB (10240 extents) to <160.00 GiB (40959 extents). Logical volume db-test/db1 successfully resized.",
"cephadm shell -m /var/lib/ceph/ CLUSTER_FSID /osd. OSD_ID :/var/lib/ceph/osd/ceph- OSD_ID :z",
"cephadm shell -m /var/lib/ceph/1a6112da-ed05-11ee-bacd-525400565cda/osd.3:/var/lib/ceph/osd/ceph-3:z",
"ceph-bluestore-tool show-label --path OSD_DIRECTORY_PATH",
"ceph-bluestore-tool show-label --path /var/lib/ceph/osd/ceph-3/ inferring bluefs devices from bluestore path { \"/var/lib/ceph/osd/ceph-3/block\": { \"osd_uuid\": \"94ff742c-7bfd-4fb5-8dc4-843d10ac6731\", \"size\": 32212254720, \"btime\": \"2024-04-03T08:34:12.742848+0000\", \"description\": \"main\", \"bfm_blocks\": \"7864320\", \"bfm_blocks_per_key\": \"128\", \"bfm_bytes_per_block\": \"4096\", \"bfm_size\": \"32212254720\", \"bluefs\": \"1\", \"ceph_fsid\": \"1a6112da-ed05-11ee-bacd-525400565cda\", \"ceph_version_when_created\": \"ceph version 19.0.0-2493-gd82c9aa1 (d82c9aa17f09785fe698d262f9601d87bb79f962) squid (dev)\", \"created_at\": \"2024-04-03T08:34:15.637253Z\", \"elastic_shared_blobs\": \"1\", \"kv_backend\": \"rocksdb\", \"magic\": \"ceph osd volume v026\", \"mkfs_done\": \"yes\", \"osd_key\": \"AQCEFA1m9xuwABAAwKEHkASVbgB1GVt5jYC2Sg==\", \"osdspec_affinity\": \"None\", \"ready\": \"ready\", \"require_osd_release\": \"19\", \"whoami\": \"3\" }, \"/var/lib/ceph/osd/ceph-3/block.db\": { \"osd_uuid\": \"94ff742c-7bfd-4fb5-8dc4-843d10ac6731\", \"size\": 40794497536, \"btime\": \"2024-04-03T08:34:12.748816+0000\", \"description\": \"bluefs db\" } }",
"ceph-bluestore-tool bluefs-bdev-expand --path OSD_DIRECTORY_PATH",
"ceph-bluestore-tool bluefs-bdev-expand --path /var/lib/ceph/osd/ceph-3/ inferring bluefs devices from bluestore path 1 : device size 0x27ffbfe000 : using 0x2300000(35 MiB) 2 : device size 0x780000000 : using 0x52000(328 KiB) Expanding DB/WAL 1 : expanding to 0x171794497536 1 : size label updated to 171794497536",
"ceph-bluestore-tool show-label --path OSD_DIRECTORY_PATH",
"ceph-bluestore-tool show-label --path /var/lib/ceph/osd/ceph-3/ inferring bluefs devices from bluestore path { \"/var/lib/ceph/osd/ceph-3/block\": { \"osd_uuid\": \"94ff742c-7bfd-4fb5-8dc4-843d10ac6731\", \"size\": 32212254720, \"btime\": \"2024-04-03T08:34:12.742848+0000\", \"description\": \"main\", \"bfm_blocks\": \"7864320\", \"bfm_blocks_per_key\": \"128\", \"bfm_bytes_per_block\": \"4096\", \"bfm_size\": \"32212254720\", \"bluefs\": \"1\", \"ceph_fsid\": \"1a6112da-ed05-11ee-bacd-525400565cda\", \"ceph_version_when_created\": \"ceph version 19.0.0-2493-gd82c9aa1 (d82c9aa17f09785fe698d262f9601d87bb79f962) squid (dev)\", \"created_at\": \"2024-04-03T08:34:15.637253Z\", \"elastic_shared_blobs\": \"1\", \"kv_backend\": \"rocksdb\", \"magic\": \"ceph osd volume v026\", \"mkfs_done\": \"yes\", \"osd_key\": \"AQCEFA1m9xuwABAAwKEHkASVbgB1GVt5jYC2Sg==\", \"osdspec_affinity\": \"None\", \"ready\": \"ready\", \"require_osd_release\": \"19\", \"whoami\": \"3\" }, \"/var/lib/ceph/osd/ceph-3/block.db\": { \"osd_uuid\": \"94ff742c-7bfd-4fb5-8dc4-843d10ac6731\", \"size\": 171794497536, \"btime\": \"2024-04-03T08:34:12.748816+0000\", \"description\": \"bluefs db\" } }",
"systemctl start [email protected] osd.3 host01 running (15s) 0s ago 13m 46.9M 4096M 19.0.0-2493-gd82c9aa1 3714003597ec 02150b3b6877",
"ceph-volume lvm batch --bluestore PATH_TO_DEVICE [ PATH_TO_DEVICE ]",
"ceph-volume lvm batch --bluestore /dev/sda /dev/sdb /dev/nvme0n1",
"ceph-volume lvm zap VOLUME_GROUP_NAME / LOGICAL_VOLUME_NAME [--destroy]",
"ceph-volume lvm zap osd-vg/data-lv",
"ceph-volume lvm zap DEVICE_PATH_PARTITION [--destroy]",
"ceph-volume lvm zap /dev/sdc1",
"ceph-volume lvm zap DEVICE_PATH --destroy",
"ceph-volume lvm zap /dev/sdc --destroy",
"ceph-volume lvm zap --destroy --osd-id OSD_ID",
"ceph-volume lvm zap --destroy --osd-id 16",
"ceph-volume lvm zap --destroy --osd-fsid OSD_FSID",
"ceph-volume lvm zap --destroy --osd-fsid 65d7b6b1-e41a-4a3c-b363-83ade63cb32b"
] | https://docs.redhat.com/en/documentation/red_hat_ceph_storage/5/html/administration_guide/the-ceph-volume-utility |
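The following end-to-end sketch ties together the prepare and activate workflow described in this chapter by using the create subcommand, which, as noted in Section 7.8, combines both steps. It is an illustrative addition, not part of the chapter: /dev/sdX, example_vg, and data_lv are assumed names, so replace them with your own device and volume names.

# Illustrative only: build a logical volume and deploy it as a BlueStore OSD.
pvcreate /dev/sdX
vgcreate example_vg /dev/sdX
lvcreate -l 100%FREE -n data_lv example_vg
# Equivalent to running 'prepare' followed by 'activate' (Section 7.8).
ceph-volume lvm create --bluestore --data example_vg/data_lv
# Confirm that the new OSD and its devices are discovered.
ceph-volume lvm list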
Chapter 4. Upgrading Red Hat JBoss Web Server using this Service Pack | Chapter 4. Upgrading Red Hat JBoss Web Server using this Service Pack If you have installed Red Hat JBoss Web Server from RPM packages on Red Hat Enterprise Linux, you can use the following yum command to upgrade to the latest service pack: Note Red Hat distributes the JBoss Web Server 6.0 Service Pack 4 release in RPM packages only. | [
"yum upgrade"
] | https://docs.redhat.com/en/documentation/red_hat_jboss_web_server/6.0/html/red_hat_jboss_web_server_6.0_service_pack_4_release_notes/upgrading_red_hat_jboss_web_server_using_this_service_pack |
Chapter 3. Encryption | Chapter 3. Encryption There are two main types of data that must be protected: data at rest and data in motion. These different types of data are protected in similar ways using similar technology but the implementations can be completely different. No single protective implementation can prevent all possible methods of compromise as the same information may be at rest and in motion at different points in time. 3.1. Data at Rest Data at rest is data that is stored on a hard drive, tape, CD, DVD, disk, or other media. This information's biggest threat comes from being physically stolen. Laptops in airports, CDs going through the mail, and backup tapes that get left in the wrong places are all examples of events where data can be compromised through theft. If the data is encrypted on the media, it lowers the chances of the data being accessed. 3.1.1. Full Disk Encryption Full disk or partition encryption is one of the best ways of protecting your data. Not only is each file protected but also the temporary storage that may contain parts of these files is also protected. Full disk encryption will protect all of your files so you do not have to worry about selecting what you want to protect and possibly missing a file. Red Hat Enterprise Linux 6 natively supports LUKS Encryption. LUKS bulk encrypts your hard drive partitions so that while your computer is off, your data is protected. This will also protect your computer from attackers attempting to use single-user-mode to login to your computer or otherwise gain access. Full disk encryption solutions like LUKS only protect the data when your computer is off. Once the computer is on and LUKS has decrypted the disk, the files on that disk are available to anyone who would normally have access to them. To protect your files when the computer is on, use full disk encryption in combination with another solution such as file based encryption. Also remember to lock your computer whenever you are away from it. A passphrase protected screen saver set to activate after a few minutes of inactivity is a good way to keep intruders out. For more information on LUKS, see Section 3.1.3, "LUKS Disk Encryption" . 3.1.2. File-Based Encryption File-based encryption is used to protect the contents of files on mobile storage devices, such as CDs, flash drives, or external hard drives. Some file-based encryption solutions may leave remnants of the encrypted files that an attacker who has physical access to your computer can recover under some circumstances. To protect the contents of these files from attackers who may have access to your computer, use file-based encryption combined with another solution, such as full disk encryption. 3.1.3. LUKS Disk Encryption Linux Unified Key Setup-on-disk-format (or LUKS) allows you to encrypt partitions on your Linux computer. This is particularly important when it comes to mobile computers and removable media. LUKS allows multiple user keys to decrypt a master key which is used for the bulk encryption of the partition. Overview of LUKS What LUKS does LUKS encrypts entire block devices and is therefore well-suited for protecting the contents of mobile devices such as removable storage media or laptop disk drives. The underlying contents of the encrypted block device are arbitrary. This makes it useful for encrypting swap devices. This can also be useful with certain databases that use specially formatted block devices for data storage. LUKS uses the existing device mapper kernel subsystem. 
LUKS provides passphrase strengthening which protects against dictionary attacks. LUKS devices contain multiple key slots, allowing users to add backup keys/passphrases. What LUKS does not do: LUKS is not well-suited for applications requiring many (more than eight) users to have distinct access keys to the same device. LUKS is not well-suited for applications requiring file-level encryption. 3.1.3.1. LUKS Implementation in Red Hat Enterprise Linux Red Hat Enterprise Linux 6 utilizes LUKS to perform file system encryption. By default, the option to encrypt the file system is unchecked during the installation. If you select the option to encrypt your hard drive, you will be prompted for a passphrase that will be asked every time you boot the computer. This passphrase "unlocks" the bulk encryption key that is used to decrypt your partition. If you choose to modify the default partition table you can choose which partitions you want to encrypt. This is set in the partition table settings. The default cipher used for LUKS (refer to cryptsetup --help ) is aes-cbc-essiv:sha256. Note that the installation program, Anaconda, uses by default the AES cipher in XTS mode, aes-xts-plain64. The default key size for LUKS is 256 bits. The default key size for LUKS with Anaconda (XTS mode) is 512 bits. Warning Changing the default cryptographic attributes can affect your system's performance and expose your system to various security risks. You should not change the default cryptographic attributes of your system without good knowledge of cryptography and understanding to the capabilities of the used cipher combinations. Red Hat strongly recommends using the default ciphers. If you need to use any other cipher than the cipher that is configured as the default, you can initialize your partition with the --cipher and --key-size options. The syntax of the command is the following: where <cipher> - <mode> - <iv> is a string representing the used cipher. The string consists of three parts: a block cipher, block cipher mode, and an initial vector (IV). A block cipher is a deterministic algorithm that operates on data blocks and allows encryption and decryption of bulk data. Block ciphers that are available on Red Hat Enterprise Linux are: AES - Advanced Encryption Standard, a 128-bit symmetric block cipher using encryption keys with lengths of 128, 192, and 256 bits; for more information, see the FIPS PUB 197 . Twofish - A 128-bit block cipher operating with encryption keys of the range from 128 bits to 256 bits. Serpent - A 128-bit block cipher operating with 128-bit, 192-bit and 256-bit encryption keys. cast5 - A 64-bit Feistel cipher supporting encryption keys of the range from 40 to 128 bits; for more information, see the RFC 2144 . cast6 - A 128-bit Feistel cipher using 128-bit, 160-bit, 192-bit, 224-bit, or 256-bit encryption keys; for more information, see the RFC 2612 . Block cipher mode describes a way the block cipher is repeatedly applied on bulk data in order to encrypt or decrypt the data securely. The following modes can be used: CBC - Cipher Block Chaining; for more information, see the NIST SP 800-38A . XTS - XEX Tweakable Block Cipher with Ciphertext Stealing; for more information, see the IEEE 1619, or NIST SP 800-38E . CTR - Counter; for more information, see the NIST SP 800-38A . ECB - Electronic Codebook; for more information, see the NIST SP 800-38A . CFB - Cipher Feedback; for more information, see the NIST SP 800-38A . OFB - Output Feedback; for more information, see the NIST SP 800-38A . 
An initial vector is a block of data used for ciphertext randomization. IV ensures that repeated encryption of the same plain text provides different ciphertext output. IV must not be reused with the same encryption key. For ciphers in CBC mode, IV must be unpredictable , otherwise the system could become vulnerable to certain watermarking attacks (see LUKS/cryptsetup FAQ for more information). Red Hat recommends using the following IV with AES: ESSIV - Encrypted Salt-Sector Initialization Vector - This IV should be used for ciphers in CBC mode. You should use the default hash: sha256. plain64 (or plain) - IV sector offset - This IV should be used for ciphers in XTS mode. You may also specify the length of the used encryption key. The size of the key depends on the used combination of the block cipher and block cipher mode. If you do not specify the key length, LUKS will use the default value for the given combination. For example, if you decide to use a 128-bit key for AES in CBC mode, LUKS will encrypt your partition using the AES-128 implementation, while specifying a 512-bit key for AES in XTS mode means that the AES-256 implementation will be used. Note that XTS mode operates with two keys: the first is used for tweakable encryption and the second for regular encryption. 3.1.3.2. Manually Encrypting Directories Warning Following this procedure will remove all data on the partition that you are encrypting. You WILL lose all your information! Make sure you back up your data to an external source before beginning this procedure! Enter runlevel 1 by typing the following at a shell prompt as root: Unmount your existing /home : If the command in the previous step fails, use fuser to find processes hogging /home and kill them: Verify /home is no longer mounted: Fill your partition with random data: This command proceeds at the sequential write speed of your device and may take some time to complete. It is an important step to ensure no unencrypted data is left on a used device, and to obfuscate the parts of the device that contain encrypted data as opposed to just random data. Initialize your partition: Open the newly encrypted device: Make sure the device is present: Create a file system: Mount the file system: Make sure the file system is visible: Add the following to the /etc/crypttab file: Edit the /etc/fstab file, removing the old entry for /home and adding the following line: Restore default SELinux security contexts: Reboot the machine: The entry in the /etc/crypttab file makes your computer ask for your LUKS passphrase on boot. Log in as root and restore your backup. You now have an encrypted partition where all of your data can safely rest while the computer is off. 3.1.3.3. Adding a New Passphrase to an Existing Device Use the following command to add a new passphrase to an existing device: After being prompted for any one of the existing passphrases for authentication, you will be prompted to enter the new passphrase. 3.1.3.4. Removing a Passphrase from an Existing Device Use the following command to remove a passphrase from an existing device: You will be prompted for the passphrase you want to remove and then for any one of the remaining passphrases for authentication. 3.1.3.5. Creating Encrypted Block Devices in Anaconda You can create encrypted devices during system installation. This allows you to easily configure a system with encrypted partitions.
To enable block device encryption, check the Encrypt System check box when selecting automatic partitioning or the Encrypt check box when creating an individual partition, software RAID array, or logical volume. After you finish partitioning, you will be prompted for an encryption passphrase. This passphrase will be required to access the encrypted devices. If you have pre-existing LUKS devices and provided correct passphrases for them earlier in the install process, the passphrase entry dialog will also contain a check box. Checking this check box indicates that you would like the new passphrase to be added to an available slot in each of the pre-existing encrypted block devices. Note Checking the Encrypt System check box on the Automatic Partitioning screen and then choosing Create custom layout does not cause any block devices to be encrypted automatically. Note You can use a kickstart file to set a separate passphrase for each new encrypted block device. Also, kickstart allows you to specify a different type of encryption if the Anaconda default cipher, aes-xts-plain64, does not suit you. Depending on the device you want to encrypt, you can specify the --cipher= <cipher-string> option along with the autopart , part , partition , logvol , and raid directives. This option has to be used together with the --encrypted option; otherwise, it has no effect. For more information about the <cipher-string> format and possible cipher combinations, see Section 3.1.3.1, "LUKS Implementation in Red Hat Enterprise Linux" . For more information about kickstart configuration, see the Red Hat Enterprise Linux 6 Installation Guide . 3.1.3.6. Additional Resources For additional information on LUKS or encrypting hard drives under Red Hat Enterprise Linux, visit one of the following links: LUKS home page LUKS/cryptsetup FAQ LUKS - Linux Unified Key Setup Wikipedia Article HOWTO: Creating an encrypted Physical Volume (PV) using a second hard drive and pvmove | [
"cryptsetup --verify-passphrase --cipher <cipher> - <mode> - <iv> --key-size <key-size> luksFormat <device>",
"telinit 1",
"umount /home",
"fuser -mvk /home",
"grep home /proc/mounts",
"shred -v --iterations=1 /dev/VG00/LV_home",
"cryptsetup --verbose --verify-passphrase luksFormat /dev/VG00/LV_home",
"cryptsetup luksOpen /dev/VG00/LV_home home",
"ls -l /dev/mapper | grep home",
"mkfs.ext3 /dev/mapper/home",
"mount /dev/mapper/home /home",
"df -h | grep home",
"home /dev/VG00/LV_home none",
"/dev/mapper/home /home ext3 defaults 1 2",
"/sbin/restorecon -v -R /home",
"shutdown -r now",
"cryptsetup luksAddKey <device>",
"cryptsetup luksRemoveKey <device>"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/security_guide/chap-security_guide-encryption |
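As a concrete illustration of the cipher selection syntax from Section 3.1.3.1 and the key-slot behavior from Section 3.1.3.3, consider the following sketch. It is not part of the original chapter: /dev/sdX1 is an assumed partition, and the cipher shown is simply the Anaconda default named in the text.

# Illustrative only: format an assumed partition with AES in XTS mode and a 512-bit key,
# then inspect which key slots are in use after adding or removing passphrases.
cryptsetup --verify-passphrase --cipher aes-xts-plain64 --key-size 512 luksFormat /dev/sdX1
cryptsetup luksDump /dev/sdX1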
10.5.11. Include | 10.5.11. Include Include allows other configuration files to be included at runtime. The path to these configuration files can be absolute or relative to the ServerRoot . Important For the server to use individually packaged modules, such as mod_ssl , mod_perl , and php , the following directive must be included in Section 1: Global Environment of httpd.conf : | [
"Include conf.d/*.conf"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s2-apache-include |
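As a brief, illustrative check related to the Include directive described above (not part of the original section), the following commands assume the conventional /etc/httpd ServerRoot layout; adjust the paths for your installation.

# Verify which Include lines are present, then test and reload the configuration.
grep -n '^Include' /etc/httpd/conf/httpd.conf
apachectl configtest && service httpd reload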
Appendix C. Mod_proxy_cluster connector modules | Appendix C. Mod_proxy_cluster connector modules The mod_proxy_cluster connector is a reduced-configuration, intelligent load-balancing solution based on technology that the JBoss mod_cluster community project originally developed. The mod_proxy_cluster connector enables the Apache HTTP Server to act as an advanced load-balancer for forwarding traffic to back-end applications running on JBoss Web Server or JBoss EAP hosts. The mod_proxy_cluster connector provides all the functionality of mod_jk and additional features such as real-time load-balancing calculations, application life-cycle control, automatic proxy discovery, and multiple protocol support. This appendix describes the modules that the mod_proxy_cluster connector uses. Note You can configure the mod_proxy_cluster connector by using the configurable directives for mod_proxy , such as ProxyIOBufferSize . C.1. Mod_manager.so module and directives The cluster manager module, mod_manager.so , receives and acknowledges messages from nodes, including worker node registrations, worker node load data, and worker node application life cycle events. Configurable directives for mod_manager.so Configurable directives in the <VirtualHost> element are as follows: EnableMCPMReceive Allows the VirtualHost to receive mod_cluster management protocol (MCMP) messages. Add one EnableMCPMReceive directive to the Apache HTTP Server configuration to allow mod_proxy_cluster to operate correctly. EnableMCPMReceive must be added in the VirtualHost configuration at the location where advertise is configured. MaxMCMPMaxMessSize Defines the maximum size of MCMP messages. The default value is calculated from other Max directives. The minimum value is 1024 . AllowDisplay Toggles the additional display on the mod_cluster-manager main page. The default value is off , which causes only version information to display on the mod_cluster-manager main page. AllowCmd Toggles permissions for commands using mod_cluster-manager URL. The default value is on , which allows commands. ReduceDisplay Toggles the reduction of information displayed on the mod_cluster-manager page. Reducing the information allows more nodes to display on the page. The default value is off , which allows all the available information to display. MemManagerFile Defines the location for the files in which mod_manager stores configuration details. mod_manager also uses this location for generated keys for shared memory and lock files. This must be an absolute path name . It is recommended that this path be on a local drive, and not an NFS share. The default value is /logs/ . Maxcontext The maximum number of contexts that mod_proxy_cluster will use. The default value is 100 . Maxnode The maximum number of worker nodes that mod_proxy_cluster will use. The default value is 20 . Maxhost The maximum number of hosts (aliases) that mod_proxy_cluster will use. This is also the maximum number of load balancers. The default value is 20 . Maxsessionid The maximum number of active session identifiers stored. A session is considered inactive when no information is received from that session for five minutes. This is used for demonstration and debugging purposes only. The default value is 0 , which disables this logic. ManagerBalancerName The name of the load balancer to use when the worker node does not provide a load balancer name. The default value is mycluster . PersistSlots When set to on , nodes, aliases, and contexts are persisted in files. 
The default value is off . CheckNonce When set to on , session identifiers are checked to ensure that they are unique and have not occurred before. The default is on . Note Setting this directive to off can leave your server vulnerable to replay attacks. SetHandler mod_cluster-manager Defines a handler to display information about worker nodes in the cluster. This is defined in the Location element: In this situation, $LOCATION was also defined as mod_cluster_manager . Consider the following guidelines when accessing the $LOCATION that is defined in the Location element in your browser: Transferred corresponds to the POST data sent to the worker node. Connected corresponds to the number of requests that had been processed when this status page was requested. Sessions corresponds to the number of active sessions. This field is not present when Maxsessionid is 0 . C.2. Mod_proxy_cluster.so module and directives The Proxy Balancer Module, mod_proxy_cluster.so , handles the routing of requests to cluster nodes. The Proxy Balancer selects the appropriate node to forward the request to based on the application location in the cluster, the current state of each of the cluster nodes, and the Session ID (if a request is part of an established session). Configurable directives for mod_proxy_cluster.so You can also configure the following directives in the <VirtualHost> element to change the load balancing behavior. CreateBalancers Defines how load balancers are created in the Apache HTTP Server virtual hosts. The following values are valid in CreateBalancers : 0 : Create load balancers in all virtual hosts defined in Apache HTTP Server. Remember to configure the load balancers in the ProxyPass directive. 1 : Do not create balancers. When using this value, you must also define the load balancer name in ProxyPass or ProxyPassMatch . 2 : Create only the main server. This is the default value for CreateBalancers . UseAlias Defines whether to check that the defined Alias corresponds to the ServerName . The following values are valid for UseAlias : 0 : Ignore alias information from worker nodes. This is the default value for UseAlias . 1 : Verify that the defined alias corresponds to a worker node's server name. LBstatusRecalTime Defines the interval in seconds between the proxy calculating the status of a worker node. The default interval is 5 seconds. ProxyPassMatch; ProxyPass ProxyPass maps remote servers into the local server namespace. If the local server has an address such as http://local.com/ , the following ProxyPass directive converts a local request for http://local.com/requested/file1 into a proxy request for http://worker.local.com/file1 . ProxyPassMatch uses regular expressions to match local paths to which the proxied URL should apply. For either directive, ! indicates that a specified path is local, and a request for that path should not be routed to a remote server. For example, the following directive specifies that gif files should be served locally. UseNocanon Defines whether to forward the original URL path to the back end without modifications. The default value is Off . When the UseNocanon directive is set to Off , the proxy can forward modified URLs to the back end. However, if the back-end application expects the original URL path that the client requested, the modified URL path can lead to unexpected issues. When you set the UseNocanon directive to On , the proxy can forward the original URL path to the back end without any modifications.
In this situation, the proxy behavior depends on whether you also define a context and a ProxyPass directive for the requested URL in the mod_proxy_cluster.conf file. A context is also known as a virtual host definition . Consider the following guidelines when you set the UseNocanon directive to On : If you define a context for the requested URL but you do not define a ProxyPass directive for this URL, the proxy uses the UseNocanon directive. If you define both a context and a ProxyPass directive for the requested URL, and the ProxyPass directive includes the nocanon flag, the proxy uses the nocanon flag and ignores the UseNocanon directive. If you define both a context and a ProxyPass directive for the requested URL, and the ProxyPass directive excludes the nocanon flag, the proxy ignores the UseNocanon directive. Note If you do not define a context for the requested URL, mod_proxy_cluster returns a 404 error. ResponseStatusCodeOnNoContext Defines the response status code that the server sends to the client when a ProxyPass or ProxyPassMatch directive does not have a matching context. The default value is 404 , which means that the server sends a Not Found error response by default. In JBCS 2.4.51 or earlier, when a ProxyPass or ProxyPassMatch directive did not have a matching context, the server sent a 503 Service Unavailable response by default. If you want to preserve the default behavior that was available in earlier releases, set the ResponseStatusCodeOnNoContext directive to 503 instead. Note If you specify a value other than a standard HTTP response code, the server access log shows the specified value but the server sends a 500 Internal Server Error response to the client. C.3. Mod_advertise.so module and directives The Proxy Advertisement Module, mod_advertise.so , broadcasts the existence of the proxy server via UDP multicast messages. The server advertisement messages contain the IP address and port number where the proxy is listening for responses from nodes that wish to join the load-balancing cluster. The mod_advertise module must be defined along with the mod_manager module in the VirtualHost element. In the following example, the identifier for the mod_advertise module is advertise_module : Configurable directives for mod_advertise.so The mod_advertise module is configurable by using the following directives: ServerAdvertise Defines how the advertising mechanism is used. The default value is Off . When set to Off , the proxy does not advertise its location. When set to On , the advertising mechanism is used to tell worker nodes to send status information to this proxy. You can also specify a host name and port with the following syntax: ServerAdvertise On http:// HOSTNAME : PORT / . This is only required when using a name-based virtual host, or when a virtual host is not defined. AdvertiseGroup Defines the multicast address to advertise on. The syntax is AdvertiseGroup ADDRESS : PORT , where ADDRESS must correspond to AdvertiseGroupAddress , and PORT must correspond to AdvertisePort in your worker nodes. If your worker node is JBoss EAP-based, and the -u switch is used at startup, the default AdvertiseGroupAddress is the value passed via the -u switch. The default value is 224.0.1.105:23364 . If a port is not specified, the port defaults to 23364 . AdvertiseFrequency The interval (in seconds) between multicast messages advertising the IP address and port. The default value is 10 . 
AdvertiseSecurityKey Defines a string that is used to identify mod_proxy_cluster in Apache HTTP Server. By default, this directive is not set and no information is sent. AdvertiseManagerUrl Defines the URL that the worker node should use to send information to the proxy server. By default this directive is not set and no information is sent. AdvertiseBindAddress Defines the address and port over which to send multicast messages. The syntax is AdvertiseBindAddress ADDRESS : PORT . This allows an address to be specified on machines with multiple IP addresses. The default value is 0.0.0.0:23364 . C.4. Mod_cluster_slotmem.so module The mod_cluster_slotmem.so module is a shared memory provider for creating and accessing a shared memory segment in which the data sets are organized in "slots". The mod_cluster_slotmem module does not require any configuration directives. C.5. Additional resources (or steps) Worker node configuration reference for mod_proxy_cluster | [
"LoadModule manager_module modules/mod_manager.so",
"<Location USDLOCATION> SetHandler mod_cluster-manager Require ip 127.0.0.1 </Location>",
"LoadModule proxy_cluster_module modules/mod_proxy_cluster.so",
"ProxyPass /requested/ http://worker.local.com/",
"ProxyPassMatch ^(/.*\\.gif)USD !",
"LoadModule advertise_module modules/mod_advertise.so"
] | https://docs.redhat.com/en/documentation/red_hat_jboss_core_services/2.4.57/html/apache_http_server_connectors_and_load_balancing_guide/mod_proxy_cluster_modules |
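The directives described in this appendix are typically combined in a single management virtual host. The following consolidated sketch is an illustrative addition rather than an excerpt from the appendix: the listen address, port, management path, and target file are assumptions, the LoadModule lines shown earlier must already be present, and only directives documented above are used.

# Illustrative management VirtualHost; 192.168.1.10:6666 and CONF_FILE are placeholders.
CONF_FILE=/path/to/httpd/conf.d/mod_proxy_cluster.conf
cat > "$CONF_FILE" <<'EOF'
Listen 192.168.1.10:6666
<VirtualHost 192.168.1.10:6666>
    EnableMCPMReceive
    ManagerBalancerName mycluster
    ServerAdvertise On
    AdvertiseFrequency 10
    <Location /mod_cluster_manager>
        SetHandler mod_cluster-manager
        Require ip 127.0.0.1
    </Location>
</VirtualHost>
EOF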
Chapter 4. Modifying a machine set | Chapter 4. Modifying a machine set You can modify a machine set, such as adding labels, changing the instance type, or changing block storage. On Red Hat Virtualization (RHV), you can also change a machine set to provision new nodes on a different storage domain. Note If you need to scale a machine set without making other changes, see Manually scaling a machine set . 4.1. Modifying a machine set To make changes to a machine set, edit the MachineSet YAML. Then, remove all machines associated with the machine set by deleting each machine or scaling down the machine set to 0 replicas. Then, scale the replicas back to the desired number. Changes you make to a machine set do not affect existing machines. If you need to scale a machine set without making other changes, you do not need to delete the machines. Note By default, the OpenShift Container Platform router pods are deployed on workers. Because the router is required to access some cluster resources, including the web console, do not scale the worker machine set to 0 unless you first relocate the router pods. Prerequisites Install an OpenShift Container Platform cluster and the oc command line. Log in to oc as a user with cluster-admin permission. Procedure Edit the machine set: $ oc edit machineset <machineset> -n openshift-machine-api Scale down the machine set to 0 : $ oc scale --replicas=0 machineset <machineset> -n openshift-machine-api Or: $ oc edit machineset <machineset> -n openshift-machine-api Tip You can alternatively apply the following YAML to scale the machine set: apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: <machineset> namespace: openshift-machine-api spec: replicas: 0 Wait for the machines to be removed. Scale up the machine set as needed: $ oc scale --replicas=2 machineset <machineset> -n openshift-machine-api Or: $ oc edit machineset <machineset> -n openshift-machine-api Tip You can alternatively apply the following YAML to scale the machine set: apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: <machineset> namespace: openshift-machine-api spec: replicas: 2 Wait for the machines to start. The new machines contain changes you made to the machine set. 4.2. Migrating nodes to a different storage domain on RHV You can migrate the OpenShift Container Platform control plane and compute nodes to a different storage domain in a Red Hat Virtualization (RHV) cluster. 4.2.1. Migrating compute nodes to a different storage domain in RHV Prerequisites You are logged in to the Manager. You have the name of the target storage domain. Procedure Identify the virtual machine template: $ oc get -o jsonpath='{.items[0].spec.template.spec.providerSpec.value.template_name}{"\n"}' machineset -A Create a new virtual machine in the Manager, based on the template you identified. Leave all other settings unchanged. For details, see Creating a Virtual Machine Based on a Template in the Red Hat Virtualization Virtual Machine Management Guide . Tip You do not need to start the new virtual machine. Create a new template from the new virtual machine. Specify the target storage domain under Target . For details, see Creating a Template in the Red Hat Virtualization Virtual Machine Management Guide . Add a new machine set to the OpenShift Container Platform cluster with the new template. Get the details of the current machine set: $ oc get machineset -o yaml Use these details to create a machine set. For more information see Creating a machine set .
Enter the new virtual machine template name in the template_name field. Use the same template name you used in the New template dialog in the Manager. Note the names of both the old and new machine sets. You need to refer to them in subsequent steps. Migrate the workloads. Scale up the new machine set. For details on manually scaling machine sets, see Scaling a machine set manually . OpenShift Container Platform moves the pods to an available worker when the old machine is removed. Scale down the old machine set. Remove the old machine set: $ oc delete machineset <machineset-name> Additional resources Creating a machine set . Scaling a machine set manually Controlling pod placement using the scheduler 4.2.2. Migrating control plane nodes to a different storage domain on RHV OpenShift Container Platform does not manage control plane nodes, so they are easier to migrate than compute nodes. You can migrate them like any other virtual machine on Red Hat Virtualization (RHV). Perform this procedure for each node separately. Prerequisites You are logged in to the Manager. You have identified the control plane nodes. They are labeled master in the Manager. Procedure Select the virtual machine labeled master . Shut down the virtual machine. Click the Disks tab. Click the virtual machine's disk. Click More Actions and select Move . Select the target storage domain and wait for the migration process to complete. Start the virtual machine. Verify that the OpenShift Container Platform cluster is stable: $ oc get nodes The output should display the node with the status Ready . Repeat this procedure for each control plane node. | [
"oc edit machineset <machineset> -n openshift-machine-api",
"oc scale --replicas=0 machineset <machineset> -n openshift-machine-api",
"oc edit machineset <machineset> -n openshift-machine-api",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: <machineset> namespace: openshift-machine-api spec: replicas: 0",
"oc scale --replicas=2 machineset <machineset> -n openshift-machine-api",
"oc edit machineset <machineset> -n openshift-machine-api",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: <machineset> namespace: openshift-machine-api spec: replicas: 2",
"oc get -o jsonpath='{.items[0].spec.template.spec.providerSpec.value.template_name}{\"\\n\"}' machineset -A",
"oc get machineset -o yaml",
"oc delete machineset <machineset-name>",
"oc get nodes"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html/machine_management/modifying-machineset |
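A minimal shell sketch of the edit, scale-down, and scale-up cycle described in the machine set chapter above. This is not part of the original procedure: the machine set name, the target replica count, and the machine.openshift.io/cluster-api-machineset label used to watch for machine deletion are assumptions to verify against your own cluster before use.
MACHINESET=<machineset>     # assumed name, replace with your machine set
REPLICAS=2                  # assumed desired replica count
oc edit machineset "$MACHINESET" -n openshift-machine-api
oc scale --replicas=0 machineset "$MACHINESET" -n openshift-machine-api
# Poll until all machines owned by this machine set are removed.
while oc get machines -n openshift-machine-api -l machine.openshift.io/cluster-api-machineset="$MACHINESET" --no-headers 2>/dev/null | grep -q . ; do
  sleep 30
done
oc scale --replicas="$REPLICAS" machineset "$MACHINESET" -n openshift-machine-api
Because oc edit is interactive, run the sketch from a terminal session; the loop only restores the replica count once the old machines are gone.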
Chapter 16. Using the Red Hat Marketplace | Chapter 16. Using the Red Hat Marketplace The Red Hat Marketplace is an open cloud marketplace that makes it easy to discover and access certified software for container-based environments that run on public clouds and on-premises. 16.1. Red Hat Marketplace features Cluster administrators can use the Red Hat Marketplace to manage software on OpenShift Container Platform, give developers self-service access to deploy application instances, and correlate application usage against a quota. 16.1.1. Connect OpenShift Container Platform clusters to the Marketplace Cluster administrators can install a common set of applications on OpenShift Container Platform clusters that connect to the Marketplace. They can also use the Marketplace to track cluster usage against subscriptions or quotas. Users that they add by using the Marketplace have their product usage tracked and billed to their organization. During the cluster connection process , a Marketplace Operator is installed that updates the image registry secret, manages the catalog, and reports application usage. 16.1.2. Install applications Cluster administrators can install Marketplace applications from within OperatorHub in OpenShift Container Platform, or from the Marketplace web application . You can access installed applications from the web console by clicking Operators > Installed Operators . 16.1.3. Deploy applications from different perspectives You can deploy Marketplace applications from the web console's Administrator and Developer perspectives. The Developer perspective Developers can access newly installed capabilities by using the Developer perspective. For example, after a database Operator is installed, a developer can create an instance from the catalog within their project. Database usage is aggregated and reported to the cluster administrator. This perspective does not include Operator installation and application usage tracking. The Administrator perspective Cluster administrators can access Operator installation and application usage information from the Administrator perspective. They can also launch application instances by browsing custom resource definitions (CRDs) in the Installed Operators list. | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/building_applications/red-hat-marketplace |
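As a rough command-line counterpart to the console workflow described above (install from OperatorHub, then check Operators > Installed Operators), the following sketch lists catalog entries and the Operators already installed. These are generic OLM commands rather than anything mandated by the Marketplace documentation, so treat them as a hedged illustration only.
oc get packagemanifests -n openshift-marketplace     # catalog content visible to OperatorHub
oc get operators                                     # Operators installed cluster-wide
oc get csv -A                                        # per-namespace ClusterServiceVersions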
Chapter 82. undercloud | Chapter 82. undercloud This chapter describes the commands under the undercloud command. 82.1. undercloud backup Backup the undercloud Usage: Table 82.1. Command arguments Value Summary --init [INIT] Initialize environment for backup, using rear or nfs as args which will check for package install and configured ReaR or NFS server. Defaults to: rear. i.e. --init rear. WARNING: This flag will be deprecated and replaced by --setup-rear and --setup-nfs . --setup-nfs Setup the nfs server on the backup node which will install required packages and configuration on the host BackupNode in the ansible inventory. --setup-rear Setup rear on the undercloud host which will install and configure ReaR. --cron Sets up a new cron job that by default will execute a weekly backup at Sundays midnight, but that can be customized by using the tripleo_backup_and_restore_cron extra-var. --db-only Perform a db backup of the undercloud host. the db backup file will be stored in /home/stack with the name openstack-backup-mysql-<timestamp>.sql. --inventory INVENTORY Tripleo inventory file generated with tripleo-ansible- inventory command. Defaults to: /home/stack/tripleo- inventory.yaml. --add-path ADD_PATH Add additional files to backup. defaults to: /home/stack/ i.e. --add-path /this/is/a/folder/ --add- path /this/is/a/texfile.txt --exclude-path EXCLUDE_PATH Exclude path when performing the undercloud backup, this option can be specified multiple times. Defaults to: none i.e. --exclude-path /this/is/a/folder/ --exclude-path /this/is/a/texfile.txt --extra-vars EXTRA_VARS Set additional variables as dict or as an absolute path of a JSON or YAML file type. i.e. --extra-vars {"key": "val", "key2": "val2"} i.e. --extra-vars /path/to/my_vars.yaml i.e. --extra-vars /path/to/my_vars.json. For more information about the variables that can be passed, visit: https://opendev.org/openstack/tripleo-ansible/src/bran ch/master/tripleo_ansible/roles/backup_and_restore/def aults/main.yml. 82.2. undercloud install Install and setup the undercloud Usage: Table 82.2. Command arguments Value Summary --force-stack-update Do a virtual update of the ephemeral heat stack. new or failed deployments always have the stack_action=CREATE. This option enforces stack_action=UPDATE. --no-validations Do not perform undercloud configuration validations --inflight-validations Activate in-flight validations during the deploy. in- flight validations provide a robust way to ensure deployed services are running right after their activation. Defaults to False. --dry-run Print the install command instead of running it -y, --yes Skip yes/no prompt (assume yes). 82.3. undercloud minion install Install and setup the undercloud minion Usage: Table 82.3. Command arguments Value Summary --force-stack-update Do a virtual update of the ephemeral heat stack. new or failed deployments always have the stack_action=CREATE. This option enforces stack_action=UPDATE. --no-validations Do not perform minion configuration validations --dry-run Print the install command instead of running it -y, --yes Skip yes/no prompt (assume yes). 82.4. undercloud minion upgrade Upgrade undercloud minion Usage: Table 82.4. Command arguments Value Summary --force-stack-update Do a virtual update of the ephemeral heat stack. new or failed deployments always have the stack_action=CREATE. This option enforces stack_action=UPDATE. 
--no-validations Do not perform minion configuration validations --dry-run Print the install command instead of running it -y, --yes Skip yes/no prompt (assume yes). 82.5. undercloud upgrade Upgrade undercloud Usage: Table 82.5. Command arguments Value Summary --force-stack-update Do a virtual update of the ephemeral heat stack. new or failed deployments always have the stack_action=CREATE. This option enforces stack_action=UPDATE. --no-validations Do not perform undercloud configuration validations --inflight-validations Activate in-flight validations during the deploy. in-flight validations provide a robust way to ensure deployed services are running right after their activation. Defaults to False. --dry-run Print the install command instead of running it -y, --yes Skip yes/no prompt (assume yes). --skip-package-updates Flag to skip the package update when performing upgrades and updates | [
"openstack undercloud backup [--init [INIT]] [--setup-nfs] [--setup-rear] [--cron] [--db-only] [--inventory INVENTORY] [--add-path ADD_PATH] [--exclude-path EXCLUDE_PATH] [--extra-vars EXTRA_VARS]",
"openstack undercloud install [--force-stack-update] [--no-validations] [--inflight-validations] [--dry-run] [-y]",
"openstack undercloud minion install [--force-stack-update] [--no-validations] [--dry-run] [-y]",
"openstack undercloud minion upgrade [--force-stack-update] [--no-validations] [--dry-run] [-y]",
"openstack undercloud upgrade [--force-stack-update] [--no-validations] [--inflight-validations] [--dry-run] [-y] [--skip-package-updates]"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/command_line_interface_reference/undercloud |
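A hedged example of how the documented openstack undercloud backup flags might be combined in practice. The ordering (prepare the NFS backup node, prepare ReaR on the undercloud, run a backup with an inventory, then schedule the default weekly cron job) and the inventory path are assumptions based on the option descriptions above, not a prescribed sequence.
openstack undercloud backup --setup-nfs
openstack undercloud backup --setup-rear
openstack undercloud backup --inventory /home/stack/tripleo-inventory.yaml
openstack undercloud backup --cron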
Chapter 13. Setting Breakpoints | Chapter 13. Setting Breakpoints Overview To set breakpoints, your project's routing context .xml file must be open in the route editor's Design tab. The Camel debugger supports two types of breakpoints: Unconditional breakpoints - triggered whenever one is encountered during a debugging session Conditional breakpoints - triggered only when the breakpoint's specified condition is met during a debugging session Note You cannot set breakpoints on consumer endpoints or on when or otherwise nodes. Setting unconditional breakpoints With your routing context displayed on the canvas in the Design tab: Select a node whose state you want to examine during the debugging session. Click its icon to set an unconditional breakpoint. Repeat these steps for each node on which you want to set an unconditional breakpoint. Setting conditional breakpoints With your routing context displayed on the canvas in the Design tab: Select a node whose state you want to examine during the debugging session. Click its icon to set a conditional breakpoint and to open the Edit the condition and language of your breakpoint... dialog: Click the Language drop-down menu and select the expression language to use to create the condition that will trigger the breakpoint. Fuse Tooling supports twenty-four expression languages. Some of these languages provide variables for creating conditional expressions, while others do not. Click Variables to display a list of the selected language's supported variables. If a list appears, select in sequence one or more of the variables to create the condition for triggering the breakpoint. The variables you select appear in the Condition text box. If appears, enter the expression directly into the Condition text box. Repeat steps [condBpFirst] through [condBpLast] for each node on which you want to set a conditional breakpoint. Disabling breakpoints You can temporarily disable a breakpoint, leaving it in place, then enable it again later. The button skips over disabled breakpoints during debugging sessions. To disable a breakpoint, select the node on the canvas and click its icon. The breakpoint turns gray, indicating that it has been disabled. To enable a disabled breakpoint, select the node on the canvas and click its icon. Depending on whether the disabled breakpoint is conditional or unconditional, it turns yellow or red, respectively, to indicate that it has been re-enabled. Note You can also disable and re-enable breakpoints during debugging sessions. For details, see Chapter 18, Disabling Breakpoints in a Running Context . Deleting breakpoints You can delete individual breakpoints or all breakpoints. To delete individual breakpoints - in a route container, select the node whose breakpoint you want to delete, and click its icon. To delete all breakpoints in a particular route - right-click the target route's container, and select Delete all breakpoints . To delete all breakpoints in all routes - right-click the canvas, and select Delete all breakpoints . Related topics Chapter 14, Running the Camel Debugger | null | https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/tooling_user_guide/setBreakpoints |
8. Resolved Issues | 8. Resolved Issues 8.1. All Architectures Bugzilla #452919 Previously, if the Red Hat Network applet was used to re-register the client to a different Red Hat Satellite Server , the applet would continue to show updates that had been available on the server, even if they were not available on the current server. The /etc/sysconfig/rhn/rhn-applet would not change to reflect the details of the new server. The version of the applet provided with this update associates a cache of updates with a server url, and therefore ensures that the updates displayed to the user are actually available. This version can also detect when its configuration file has changed. If such a change is detected, the applet will automatically reload the configuration variables and create new server connections. Bugzilla #454690 On some SGI Altix systems that feature the IOC4 multi-function device, you may encounter problems when using attached IDE devices (such as CD-ROM drives). This is caused by a bug in the sgiioc4 IDE driver, which prevents some devices from being detected properly on system boot. You can work around this bug by manually loading the driver, which in turn allows attached IDE devices to be detected properly. To do so, run the following command as root: /sbin/modprobe sgiioc4 Bugzilla #454690 sysreport.legacy used $HOME as its root directory. In case this environment variable did not exist or the directory it referred to was not writable, sysreport.legacy could not generate its report and would exit with the message Cannot make temp dir . Sysreport.legacy now uses a randomly created directory as its root directory and therefore can generate a report even on a system without a usable $HOME . Bugzilla #476767 The automount daemon used a fixed-size buffer of 128 bytes to receive information from the SIOCGIFCONF ioctl about local interfaces when testing for the proximity of a host corresponding to a given mount. Since the details of each interface are 40 bytes long, the daemon could receive information on no more than three local interfaces. If the host corresponding to the mount had an address that was local but did not correspond to one of the three interfaces the proximity would be classified incorrectly. The automount daemon now dynamically allocates a buffer, ensuring that it is large enough to contain information on all interfaces on the system providing the ability to correctly detect proximity of a host given for an NFS mount. Bugzilla #465237 For automount map entries that refer to multiple hosts in the mount location (replicated mounts), the automount daemon probes a list of remote hosts for their proximity and NFS version. If hosts fail to respond, they are removed from the list. If no remote hosts reply at all, the list may become empty. Previously, the daemon did not check if the list was empty following the initial probe which would lead to a segmentation fault (by dereferencing a NULL pointer). This check has been added. Bugzilla #444942 The ttfonts-zh_CN package formerly included the Zhong Yi Song TrueType font. The
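If you rely on the sgiioc4 workaround noted above, one possible way to make it persist across reboots on RHEL 4 is to load the module from rc.local. This is an illustration only and is not taken from the release notes; verify it on your system before depending on it.
/sbin/modprobe sgiioc4
echo "/sbin/modprobe sgiioc4" >> /etc/rc.d/rc.local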
Preface | Preface Open Java Development Kit (OpenJDK) is a free and open source implementation of the Java Platform, Standard Edition (Java SE). The Red Hat build of OpenJDK is available in four versions: 8u, 11u, 17u, and 21u. Packages for the Red Hat build of OpenJDK are available on Red Hat Enterprise Linux and Microsoft Windows platforms and shipped as a JDK and a JRE in the Red Hat Ecosystem Catalog. | null | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/8/html/release_notes_for_red_hat_build_of_openjdk_8.0.442/pr01 |
Chapter 3. Creating a Maven project for a Red Hat build of Kogito microservice | Chapter 3. Creating a Maven project for a Red Hat build of Kogito microservice Before you can begin developing Red Hat build of Kogito microservices, you need to create a Maven project where you can build your assets and any other related resources for your application. Procedure In a command terminal, navigate to a local folder where you want to store the new project. Enter the following command to generate a project within a defined folder: On Red Hat build of Quarkus On Spring Boot This command generates a sample-kogito Maven project and imports the extension for all required dependencies and configurations to prepare your application for business automation. If you want to enable PMML execution for your project, add the following dependency to the pom.xml file in the Maven project that contains your Red Hat build of Kogito microservices: Dependency to enable PMML execution <dependency> <groupId>org.kie.kogito</groupId> <artifactId>kogito-pmml</artifactId> </dependency> <dependency> <groupId>org.jpmml</groupId> <artifactId>pmml-model</artifactId> </dependency> On Red Hat build of Quarkus, if you plan to run your application on OpenShift, you must also import the smallrye-health extension for the liveness and readiness probes , as shown in the following example: SmallRye Health extension for Red Hat build of Quarkus applications on OpenShift This command generates the following dependency in the pom.xml file of your Red Hat Process Automation Manager project on Red Hat build of Quarkus: SmallRye Health dependency for Red Hat build of Quarkus applications on OpenShift <dependency> <groupId>io.quarkus</groupId> <artifactId>quarkus-smallrye-health</artifactId> </dependency> Open or import the project in your VS Code IDE to view the contents. 3.1. Creating a custom Spring Boot project for Red Hat build of Kogito microservices You can create custom Maven projects using the Spring Boot archetype for your Red Hat build of Kogito microservices. The Spring Boot archetype enables you to add Spring Boot starters or add-ons to your project. A Spring Boot starter is an all-in-one descriptor for your project and requires business automation engines provided by Red Hat build of Kogito. The Spring Boot starters include decisions, rules, and predictions. When your project contains all the assets and you want a quick way to get started with Red Hat build of Kogito, you can use the kogito-spring-boot-starter starter. For a more granular approach, you can use a specific starter, such as kogito-decisions-spring-boot-starter for decisions, or a combination of starters. Red Hat build of Kogito supports the following Spring Boot starters: Decisions Spring Boot starter Starter for providing DMN support to your Spring Boot project. The following is an example of adding a decisions Spring boot starter to your project: <dependencies> <dependency> <groupId>org.kie.kogito</groupId> <artifactId>kogito-decisions-spring-boot-starter</artifactId> </dependency> </dependencies> Predictions Spring Boot starter Starter for providing PMML support to your Spring Boot project. The following is an example of adding a predictions Spring boot starter to your project: <dependencies> <dependency> <groupId>org.kie.kogito</groupId> <artifactId>kogito-predictions-spring-boot-starter</artifactId> </dependency> </dependencies> Rules Spring Boot starter Starter for providing DRL support to your Spring Boot project. 
The following is an example of adding a rules Spring boot starter to your project: <dependencies> <dependency> <groupId>org.kie.kogito</groupId> <artifactId>kogito-rules-spring-boot-starter</artifactId> </dependency> </dependencies> Procedure In a command terminal, navigate to a local folder where you want to store the new project. Enter one of the following commands to generate a project using the starters or addons property: To generate a project using the starters property, enter the following command: The new project includes the dependencies required to run the decision microservices. You can combine multiple Spring Boot starters using a comma-separated list, such as starters=decisions,rules . To generate a project containing Prometheus monitoring using the addons property, enter the following command: Note When you pass an add-on to the property, the add-on name does not require the kogito-addons-springboot prefix. Also, you can combine the add-ons and starters properties to customize the project. Open or import the project in your IDE to view the contents. | [
"mvn io.quarkus:quarkus-maven-plugin:create -DprojectGroupId=org.acme -DprojectArtifactId=sample-kogito -DprojectVersion=1.0.0-SNAPSHOT -Dextensions=kogito-quarkus",
"mvn archetype:generate -DarchetypeGroupId=org.kie.kogito -DarchetypeArtifactId=kogito-spring-boot-archetype -DgroupId=org.acme -DartifactId=sample-kogito -DarchetypeVersion=1.11.0.Final -Dversion=1.0-SNAPSHOT",
"<dependency> <groupId>org.kie.kogito</groupId> <artifactId>kogito-pmml</artifactId> </dependency> <dependency> <groupId>org.jpmml</groupId> <artifactId>pmml-model</artifactId> </dependency>",
"mvn quarkus:add-extension -Dextensions=\"smallrye-health\"",
"<dependency> <groupId>io.quarkus</groupId> <artifactId>quarkus-smallrye-health</artifactId> </dependency>",
"<dependencies> <dependency> <groupId>org.kie.kogito</groupId> <artifactId>kogito-decisions-spring-boot-starter</artifactId> </dependency> </dependencies>",
"<dependencies> <dependency> <groupId>org.kie.kogito</groupId> <artifactId>kogito-predictions-spring-boot-starter</artifactId> </dependency> </dependencies>",
"<dependencies> <dependency> <groupId>org.kie.kogito</groupId> <artifactId>kogito-rules-spring-boot-starter</artifactId> </dependency> </dependencies>",
"mvn archetype:generate -DarchetypeGroupId=org.kie.kogito -DarchetypeArtifactId=kogito-springboot-archetype -DgroupId=org.acme -DartifactId=sample-kogito -DarchetypeVersion=1.11.0.Final -Dversion=1.0-SNAPSHOT -Dstarters=decisions",
"mvn archetype:generate -DarchetypeGroupId=org.kie.kogito -DarchetypeArtifactId=kogito-springboot-archetype -DgroupId=org.acme -DartifactId=sample-kogito -DarchetypeVersion=1.11.0.Final -Dversion=1.0-SNAPSHOT -Dstarters=descisions -Daddons=monitoring-prometheus,persistence-infinispan"
] | https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/getting_started_with_red_hat_build_of_kogito_in_red_hat_process_automation_manager/proc-kogito-creating-maven-project_getting-started-kogito-microservices |
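The commands in this chapter can be chained into a single sketch for the Red Hat build of Quarkus path. The project coordinates are the same placeholders used above; running mvn clean compile quarkus:dev for live-reload development is standard Quarkus usage rather than something this chapter prescribes.
mvn io.quarkus:quarkus-maven-plugin:create -DprojectGroupId=org.acme -DprojectArtifactId=sample-kogito -DprojectVersion=1.0.0-SNAPSHOT -Dextensions=kogito-quarkus
cd sample-kogito
mvn quarkus:add-extension -Dextensions="smallrye-health"
mvn clean compile quarkus:dev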
Chapter 3. About OpenShift Kubernetes Engine | Chapter 3. About OpenShift Kubernetes Engine As of 27 April 2020, Red Hat has decided to rename Red Hat OpenShift Container Engine to Red Hat OpenShift Kubernetes Engine to better communicate what value the product offering delivers. Red Hat OpenShift Kubernetes Engine is a product offering from Red Hat that lets you use an enterprise class Kubernetes platform as a production platform for launching containers. You download and install OpenShift Kubernetes Engine the same way as OpenShift Container Platform as they are the same binary distribution, but OpenShift Kubernetes Engine offers a subset of the features that OpenShift Container Platform offers. 3.1. Similarities and differences You can see the similarities and differences between OpenShift Kubernetes Engine and OpenShift Container Platform in the following table: Table 3.1. Product comparison for OpenShift Kubernetes Engine and OpenShift Container Platform OpenShift Kubernetes Engine OpenShift Container Platform Fully Automated Installers Yes Yes Over the Air Smart Upgrades Yes Yes Enterprise Secured Kubernetes Yes Yes Kubectl and oc automated command line Yes Yes Operator Lifecycle Manager (OLM) Yes Yes Administrator Web console Yes Yes OpenShift Virtualization Yes Yes User Workload Monitoring Yes Cluster Monitoring Yes Yes Cost Management SaaS Service Yes Yes Platform Logging Yes Developer Web Console Yes Developer Application Catalog Yes Source to Image and Builder Automation (Tekton) Yes OpenShift Service Mesh (Maistra, Kiali, and Jaeger) Yes OpenShift distributed tracing (Jaeger) Yes OpenShift Serverless (Knative) Yes OpenShift Pipelines (Jenkins and Tekton) Yes Embedded Component of IBM Cloud Pak and RHT MW Bundles Yes OpenShift sandboxed containers Yes 3.1.1. Core Kubernetes and container orchestration OpenShift Kubernetes Engine offers full access to an enterprise-ready Kubernetes environment that is easy to install and offers an extensive compatibility test matrix with many of the software elements that you might use in your data center. OpenShift Kubernetes Engine offers the same service level agreements, bug fixes, and common vulnerabilities and errors protection as OpenShift Container Platform. OpenShift Kubernetes Engine includes a Red Hat Enterprise Linux (RHEL) Virtual Datacenter and Red Hat Enterprise Linux CoreOS (RHCOS) entitlement that allows you to use an integrated Linux operating system with container runtime from the same technology provider. The OpenShift Kubernetes Engine subscription is compatible with the Red Hat OpenShift support for Windows Containers subscription. 3.1.2. Enterprise-ready configurations OpenShift Kubernetes Engine uses the same security options and default settings as the OpenShift Container Platform. Default security context constraints, pod security policies, best practice network and storage settings, service account configuration, SELinux integration, HAproxy edge routing configuration, and all other standard protections that OpenShift Container Platform offers are available in OpenShift Kubernetes Engine. OpenShift Kubernetes Engine offers full access to the integrated monitoring solution that OpenShift Container Platform uses, which is based on Prometheus and offers deep coverage and alerting for common Kubernetes issues. OpenShift Kubernetes Engine uses the same installation and upgrade automation as OpenShift Container Platform. 3.1.3. 
Standard infrastructure services With an OpenShift Kubernetes Engine subscription, you receive support for all storage plugins that OpenShift Container Platform supports. In terms of networking, OpenShift Kubernetes Engine offers full and supported access to the Kubernetes Container Network Interface (CNI) and therefore allows you to use any third-party SDN that supports OpenShift Container Platform. It also allows you to use the included Open vSwitch software defined network to its fullest extent. OpenShift Kubernetes Engine allows you to take full advantage of the OVN Kubernetes overlay, Multus, and Multus plugins that are supported on OpenShift Container Platform. OpenShift Kubernetes Engine allows customers to use a Kubernetes Network Policy to create microsegmentation between deployed application services on the cluster. You can also use the Route API objects that are found in OpenShift Container Platform, including its sophisticated integration with the HAproxy edge routing layer as an out of the box Kubernetes Ingress Controller. 3.1.4. Core user experience OpenShift Kubernetes Engine users have full access to Kubernetes Operators, pod deployment strategies, Helm, and OpenShift Container Platform templates. OpenShift Kubernetes Engine users can use both the oc and kubectl command line interfaces. OpenShift Kubernetes Engine also offers an administrator web-based console that shows all aspects of the deployed container services and offers a container-as-a service experience. OpenShift Kubernetes Engine grants access to the Operator Life Cycle Manager that helps you control access to content on the cluster and life cycle operator-enabled services that you use. With an OpenShift Kubernetes Engine subscription, you receive access to the Kubernetes namespace, the OpenShift Project API object, and cluster-level Prometheus monitoring metrics and events. 3.1.5. Maintained and curated content With an OpenShift Kubernetes Engine subscription, you receive access to the OpenShift Container Platform content from the Red Hat Ecosystem Catalog and Red Hat Connect ISV marketplace. You can access all maintained and curated content that the OpenShift Container Platform eco-system offers. 3.1.6. OpenShift Data Foundation compatible OpenShift Kubernetes Engine is compatible and supported with your purchase of OpenShift Data Foundation. 3.1.7. Red Hat Middleware compatible OpenShift Kubernetes Engine is compatible and supported with individual Red Hat Middleware product solutions. Red Hat Middleware Bundles that include OpenShift embedded in them only contain OpenShift Container Platform. 3.1.8. OpenShift Serverless OpenShift Kubernetes Engine does not include OpenShift Serverless support. Use OpenShift Container Platform for this support. 3.1.9. Quay Integration compatible OpenShift Kubernetes Engine is compatible and supported with a Red Hat Quay purchase. 3.1.10. OpenShift Virtualization OpenShift Kubernetes Engine includes support for the Red Hat product offerings derived from the kubevirt.io open source project. 3.1.11. Advanced cluster management OpenShift Kubernetes Engine is compatible with your additional purchase of Red Hat Advanced Cluster Management (RHACM) for Kubernetes. An OpenShift Kubernetes Engine subscription does not offer a cluster-wide log aggregation solution or support Elasticsearch, Fluentd, or Kibana-based logging solutions. 
Red Hat OpenShift Service Mesh capabilities derived from the open-source istio.io and kiali.io projects that offer OpenTracing observability for containerized services on OpenShift Container Platform are not supported in OpenShift Kubernetes Engine. 3.1.12. Advanced networking The standard networking solutions in OpenShift Container Platform are supported with an OpenShift Kubernetes Engine subscription. The OpenShift Container Platform Kubernetes CNI plugin for automation of multi-tenant network segmentation between OpenShift Container Platform projects is entitled for use with OpenShift Kubernetes Engine. OpenShift Kubernetes Engine offers all the granular control of the source IP addresses that are used by application services on the cluster. Those egress IP address controls are entitled for use with OpenShift Kubernetes Engine. OpenShift Container Platform offers ingress routing to on cluster services that use non-standard ports when no public cloud provider is in use via the VIP pods found in OpenShift Container Platform. That ingress solution is supported in OpenShift Kubernetes Engine. OpenShift Kubernetes Engine users are supported for the Kubernetes ingress control object, which offers integrations with public cloud providers. Red Hat Service Mesh, which is derived from the istio.io open source project, is not supported in OpenShift Kubernetes Engine. Also, the Kourier Ingress Controller found in OpenShift Serverless is not supported on OpenShift Kubernetes Engine. 3.1.13. OpenShift sandboxed containers OpenShift Kubernetes Engine does not include OpenShift sandboxed containers. Use OpenShift Container Platform for this support. 3.1.14. Developer experience With OpenShift Kubernetes Engine, the following capabilities are not supported: The OpenShift Container Platform developer experience utilities and tools, such as Red Hat OpenShift Dev Spaces. The OpenShift Container Platform pipeline feature that integrates a streamlined, Kubernetes-enabled Jenkins and Tekton experience in the user's project space. The OpenShift Container Platform source-to-image feature, which allows you to easily deploy source code, dockerfiles, or container images across the cluster. Build strategies, builder pods, or Tekton for end user container deployments. The odo developer command line. The developer persona in the OpenShift Container Platform web console. 3.1.15. Feature summary The following table is a summary of the feature availability in OpenShift Kubernetes Engine and OpenShift Container Platform. Where applicable, it includes the name of the Operator that enables a feature. Table 3.2. 
Features in OpenShift Kubernetes Engine and OpenShift Container Platform Feature OpenShift Kubernetes Engine OpenShift Container Platform Operator name Fully Automated Installers (IPI) Included Included N/A Customizable Installers (UPI) Included Included N/A Disconnected Installation Included Included N/A Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) entitlement Included Included N/A Existing RHEL manual attach to cluster (BYO) Included Included N/A CRIO Runtime Included Included N/A Over the Air Smart Upgrades and Operating System (RHCOS) Management Included Included N/A Enterprise Secured Kubernetes Included Included N/A Kubectl and oc automated command line Included Included N/A Auth Integrations, RBAC, SCC, Multi-Tenancy Admission Controller Included Included N/A Operator Lifecycle Manager (OLM) Included Included N/A Administrator web console Included Included N/A OpenShift Virtualization Included Included OpenShift Virtualization Operator Compliance Operator provided by Red Hat Included Included Compliance Operator File Integrity Operator Included Included File Integrity Operator Gatekeeper Operator Not Included - Requires separate subscription Not Included - Requires separate subscription Gatekeeper Operator Klusterlet Not Included - Requires separate subscription Not Included - Requires separate subscription N/A Kube Descheduler Operator provided by Red Hat Included Included Kube Descheduler Operator Local Storage provided by Red Hat Included Included Local Storage Operator Node Feature Discovery provided by Red Hat Included Included Node Feature Discovery Operator Performance Profile controller Included Included N/A PTP Operator provided by Red Hat Included Included PTP Operator Service Telemetry Operator provided by Red Hat Not Included Included Service Telemetry Operator SR-IOV Network Operator Included Included SR-IOV Network Operator Vertical Pod Autoscaler Included Included Vertical Pod Autoscaler Cluster Monitoring (Prometheus) Included Included Cluster Monitoring Device Manager (for example, GPU) Included Included N/A Log Forwarding Not Included Included Red Hat OpenShift Logging Operator Telemeter and Insights Connected Experience Included Included N/A Feature OpenShift Kubernetes Engine OpenShift Container Platform Operator name OpenShift Cloud Manager SaaS Service Included Included N/A OVS and OVN SDN Included Included N/A MetalLB Included Included MetalLB Operator HAProxy Ingress Controller Included Included N/A Red Hat OpenStack Platform (RHOSP) Kuryr Integration Included Included N/A Ingress Cluster-wide Firewall Included Included N/A Egress Pod and Namespace Granular Control Included Included N/A Ingress Non-Standard Ports Included Included N/A Multus and Available Multus Plugins Included Included N/A Network Policies Included Included N/A IPv6 Single and Dual Stack Included Included N/A CNI Plugin ISV Compatibility Included Included N/A CSI Plugin ISV Compatibility Included Included N/A RHT and IBM middleware a la carte purchases (not included in OpenShift Container Platform or OpenShift Kubernetes Engine) Included Included N/A ISV or Partner Operator and Container Compatibility (not included in OpenShift Container Platform or OpenShift Kubernetes Engine) Included Included N/A Embedded OperatorHub Included Included N/A Embedded Marketplace Included Included N/A Quay Compatibility (not included) Included Included N/A RHEL Software Collections and RHT SSO Common Service (included) Included Included N/A Embedded Registry Included Included 
N/A Helm Included Included N/A User Workload Monitoring Not Included Included N/A Cost Management SaaS Service Included Included Cost Management Metrics Operator Platform Logging Not Included Included Red Hat OpenShift Logging Operator OpenShift Elasticsearch Operator provided by Red Hat Not Included Cannot be run standalone N/A Developer Web Console Not Included Included N/A Developer Application Catalog Not Included Included N/A Source to Image and Builder Automation (Tekton) Not Included Included N/A OpenShift Service Mesh Not Included Included OpenShift Service Mesh Operator Service Binding Operator Not Included Included Service Binding Operator Feature OpenShift Kubernetes Engine OpenShift Container Platform Operator name Red Hat OpenShift Serverless Not Included Included OpenShift Serverless Operator Web Terminal provided by Red Hat Not Included Included Web Terminal Operator Red Hat OpenShift Pipelines Operator Not Included Included OpenShift Pipelines Operator Embedded Component of IBM Cloud Pak and RHT MW Bundles Not Included Included N/A Red Hat OpenShift GitOps Not Included Included OpenShift GitOps Red Hat OpenShift Dev Spaces Not Included Included Red Hat OpenShift Dev Spaces Red Hat OpenShift Local Not Included Included N/A Quay Bridge Operator provided by Red Hat Not Included Included Quay Bridge Operator Quay Container Security provided by Red Hat Not Included Included Quay Operator Red Hat OpenShift distributed tracing platform Not Included Included Red Hat OpenShift distributed tracing platform Operator Red Hat OpenShift Kiali Not Included Included Kiali Operator Metering provided by Red Hat (deprecated) Not Included Included N/A Migration Toolkit for Containers Operator Not Included Included Migration Toolkit for Containers Operator Cost management for OpenShift Not included Included N/A JBoss Web Server provided by Red Hat Not included Included JWS Operator Red Hat Build of Quarkus Not included Included N/A Kourier Ingress Controller Not included Included N/A RHT Middleware Bundles Sub Compatibility (not included in OpenShift Container Platform) Not included Included N/A IBM Cloud Pak Sub Compatibility (not included in OpenShift Container Platform) Not included Included N/A OpenShift Do ( odo ) Not included Included N/A Source to Image and Tekton Builders Not included Included N/A OpenShift Serverless FaaS Not included Included N/A IDE Integrations Not included Included N/A OpenShift sandboxed containers Not included Not included {sandboxed-containers-operator} Windows Machine Config Operator Community Windows Machine Config Operator included - no subscription required Red Hat Windows Machine Config Operator included - Requires separate subscription Windows Machine Config Operator Red Hat Quay Not Included - Requires separate subscription Not Included - Requires separate subscription Quay Operator Red Hat Advanced Cluster Management Not Included - Requires separate subscription Not Included - Requires separate subscription Advanced Cluster Management for Kubernetes Red Hat Advanced Cluster Security Not Included - Requires separate subscription Not Included - Requires separate subscription N/A OpenShift Data Foundation Not Included - Requires separate subscription Not Included - Requires separate subscription OpenShift Data Foundation Feature OpenShift Kubernetes Engine OpenShift Container Platform Operator name Ansible Automation Platform Resource Operator Not Included - Requires separate subscription Not Included - Requires separate subscription Ansible Automation 
Platform Resource Operator Business Automation provided by Red Hat Not Included - Requires separate subscription Not Included - Requires separate subscription Business Automation Operator Data Grid provided by Red Hat Not Included - Requires separate subscription Not Included - Requires separate subscription Data Grid Operator Red Hat Integration provided by Red Hat Not Included - Requires separate subscription Not Included - Requires separate subscription Red Hat Integration Operator Red Hat Integration - 3Scale provided by Red Hat Not Included - Requires separate subscription Not Included - Requires separate subscription 3scale Red Hat Integration - 3Scale APICast gateway provided by Red Hat Not Included - Requires separate subscription Not Included - Requires separate subscription 3scale APIcast Red Hat Integration - AMQ Broker Not Included - Requires separate subscription Not Included - Requires separate subscription AMQ Broker Red Hat Integration - AMQ Broker LTS Not Included - Requires separate subscription Not Included - Requires separate subscription Red Hat Integration - AMQ Interconnect Not Included - Requires separate subscription Not Included - Requires separate subscription AMQ Interconnect Red Hat Integration - AMQ Online Not Included - Requires separate subscription Not Included - Requires separate subscription Red Hat Integration - AMQ Streams Not Included - Requires separate subscription Not Included - Requires separate subscription AMQ Streams Red Hat Integration - Camel K Not Included - Requires separate subscription Not Included - Requires separate subscription Camel K Red Hat Integration - Fuse Console Not Included - Requires separate subscription Not Included - Requires separate subscription Fuse Console Red Hat Integration - Fuse Online Not Included - Requires separate subscription Not Included - Requires separate subscription Fuse Online Red Hat Integration - Service Registry Operator Not Included - Requires separate subscription Not Included - Requires separate subscription Service Registry API Designer provided by Red Hat Not Included - Requires separate subscription Not Included - Requires separate subscription API Designer JBoss EAP provided by Red Hat Not Included - Requires separate subscription Not Included - Requires separate subscription JBoss EAP Smart Gateway Operator Not Included - Requires separate subscription Not Included - Requires separate subscription Smart Gateway Operator Kubernetes NMState Operator Included Included N/A 3.2. Subscription limitations OpenShift Kubernetes Engine is a subscription offering that provides OpenShift Container Platform with a limited set of supported features at a lower list price. OpenShift Kubernetes Engine and OpenShift Container Platform are the same product and, therefore, all software and features are delivered in both. There is only one download, OpenShift Container Platform. OpenShift Kubernetes Engine uses the OpenShift Container Platform documentation and support services and bug errata for this reason. | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/about/oke-about |
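Because OpenShift Kubernetes Engine ships the same oc and kubectl CLIs, cluster monitoring, and Operator Lifecycle Manager as OpenShift Container Platform, the usual day-two checks apply to both. The following is a generic sketch added here for illustration, not a command set defined by this chapter.
oc get clusterversion                   # cluster version and update status
oc get co                               # ClusterOperators managed by the Cluster Version Operator
kubectl get nodes                       # kubectl works interchangeably with oc
oc -n openshift-monitoring get pods     # bundled Prometheus-based cluster monitoring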
5.6. Starting haproxy | 5.6. Starting haproxy To start the HAProxy service, enter the following command: To make the HAProxy service persist through reboots, enter the following command: | [
"systemctl start haproxy.service",
"systemctl enable haproxy.service"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/load_balancer_administration/s1-haproxy-setup-starting |
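As an optional follow-up to the two commands above, you can start and enable HAProxy in one step and then confirm that the service is active. The verification commands are standard systemd usage and are an addition to the original section.
systemctl enable --now haproxy.service
systemctl is-active haproxy.service
systemctl status haproxy.service --no-pager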
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_data_grid/8.4/html/building_and_deploying_data_grid_clusters_with_helm/making-open-source-more-inclusive_datagrid |
Chapter 1. Introduction to Camel K | Chapter 1. Introduction to Camel K This chapter introduces the concepts, features, and cloud-native architecture provided by Red Hat Integration - Camel K: Section 1.1, "Camel K overview" Section 1.2, "Camel K features" Section 1.2.3, "Kamelets" Section 1.3, "Camel K development tooling" Section 1.4, "Camel K distributions" 1.1. Camel K overview Red Hat Integration - Camel K is a lightweight integration framework built from Apache Camel K that runs natively in the cloud on OpenShift. Camel K is specifically designed for serverless and microservice architectures. You can use Camel K to instantly run your integration code written in Camel Domain Specific Language (DSL) directly on OpenShift. Camel K is a subproject of the Apache Camel open source community: https://github.com/apache/camel-k . Camel K is implemented in the Go programming language and uses the Kubernetes Operator SDK to automatically deploy integrations in the cloud. For example, this includes automatically creating services and routes on OpenShift. This provides much faster turnaround times when deploying and redeploying integrations in the cloud, such as a few seconds or less instead of minutes. The Camel K runtime provides significant performance optimizations. The Quarkus cloud-native Java framework is enabled by default to provide faster start up times, and lower memory and CPU footprints. When running Camel K in developer mode, you can make live updates to your integration DSL and view results instantly in the cloud on OpenShift, without waiting for your integration to redeploy. Using Camel K with OpenShift Serverless and Knative Serving, containers are created only as needed and are autoscaled under load up and down to zero. This reduces cost by removing the overhead of server provisioning and maintenance and enables you to focus on application development instead. Using Camel K with OpenShift Serverless and Knative Eventing, you can manage how components in your system communicate in an event-driven architecture for serverless applications. This provides flexibility and creates efficiencies through decoupled relationships between event producers and consumers using a publish-subscribe or event-streaming model. Additional resources Apache Camel K website Getting started with OpenShift Serverless 1.2. Camel K features The Camel K includes the following main platforms and features: 1.2.1. Platform and component versions OpenShift Container Platform 4.13, 4.14 OpenShift Serverless 1.35.0 Red Hat Build of Quarkus 2.13.8.Final-redhat-00006 Red Hat Camel Extensions for Quarkus 2.13.3.redhat-00008 Apache Camel K 1.10.5.redhat-00002 Apache Camel 3.18.6.redhat-00007 OpenJDK 11 1.2.2. Camel K features Knative Serving for autoscaling and scale-to-zero Knative Eventing for event-driven architectures Performance optimizations using Quarkus runtime by default Camel integrations written in Java or YAML DSL Development tooling with Visual Studio Code Monitoring of integrations using Prometheus in OpenShift Quickstart tutorials Kamelet Catalog of connectors to external systems such as AWS, Jira, and Salesforce The following diagram shows a simplified view of the Camel K cloud-native architecture: Additional resources Apache Camel architecture 1.2.3. Kamelets Kamelets hide the complexity of connecting to external systems behind a simple interface, which contains all the information needed to instantiate them, even for users who are not familiar with Camel. 
Kamelets are implemented as custom resources that you can install on an OpenShift cluster and use in Camel K integrations. Kamelets are route templates that use Camel components designed to connect to external systems without requiring deep understanding of the component. Kamelets abstract the details of connecting to external systems. You can also combine Kamelets to create complex Camel integrations, just like using standard Camel components. Additional resources Integrating Applications with Kamelets 1.3. Camel K development tooling The Camel K provides development tooling extensions for Visual Studio (VS) Code, Red Hat CodeReady WorkSpaces, and Eclipse Che. The Camel-based tooling extensions include features such as automatic completion of Camel DSL code, Camel K modeline configuration, and Camel K traits. The following VS Code development tooling extensions are available: VS Code Extension Pack for Apache Camel by Red Hat Tooling for Apache Camel K extension Language Support for Apache Camel extension Debug Adapter for Apache Camel K Additional extensions for OpenShift, Java and more For details on how to set up these VS Code extensions for Camel K, see Setting up your Camel K development environment . Important The following plugin VS Code Language support for Camel - a part of the Camel extension pack provides support for content assist when editing Camel routes and application.properties . To install a supported Camel K tooling extension for VS code to create, run and operate Camel K integrations on OpenShift, see VS Code Tooling for Apache Camel K by Red Hat extension To install a supported Camel debug tool extension for VS code to debug Camel integrations written in Java, YAML or XML locally, see Debug Adapter for Apache Camel by Red Hat For details about configurations and components to use the developer tool with specific product versions, see Camel K Supported Configurations and Camel K Component Details Note: The Camel K VS Code extensions are community features. Eclipse Che also provides these features using the vscode-camelk plug-in. For more information about scope of development support, see Development Support Scope of Coverage Additional resources VS Code tooling for Apache Camel K example Eclipse Che tooling for Apache Camel K 1.4. Camel K distributions Table 1.1. Red Hat Integration - Camel K distributions Distribution Description Location Operator image Container image for the Red Hat Integration - Camel K Operator: integration/camel-k-rhel8-operator OpenShift web console under Operators OperatorHub registry.redhat.io Maven repository Maven artifacts for Red Hat Integration - Camel K Red Hat provides Maven repositories that host the content we ship with our products. These repositories are available to download from the software downloads page. For Red Hat Integration - Camel K the following repositories are required: rhi-common rhi-camel-quarkus rhi-camel-k Installation of Red Hat Integration - Camel K in a disconnected environment (offline mode) is not supported. 
Software downloads of Red Hat build of Apache Camel Source code Source code for Red Hat Integration - Camel K Software downloads of Red Hat build of Apache Camel Quickstarts Quick start tutorials: Basic Java integration Event streaming integration JDBC integration JMS integration Kafka integration Knative integration SaaS integration Serverless API integration Transformations integration https://github.com/openshift-integration Note You must have a subscription for Red Hat build of Apache Camel K and be logged into the Red Hat Customer Portal to access the Red Hat Integration - Camel K distributions. | null | https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_k/1.10.9/html/getting_started_with_camel_k/introduction-to-camel-k |
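A hypothetical quick start that pairs with the overview above: a minimal Camel YAML DSL integration run in developer mode with the kamel CLI. It assumes the Camel K Operator is installed on the cluster, that kamel is on your PATH, and that the file name hello.yaml is acceptable; adjust as needed.
cat > hello.yaml <<'EOF'
- from:
    uri: "timer:tick?period=3000"
    steps:
      - setBody:
          constant: "Hello from Camel K"
      - to: "log:info"
EOF
kamel run hello.yaml --dev
Developer mode streams the integration logs to your terminal and redeploys automatically when you edit hello.yaml, which matches the live-update behavior described in the overview.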
Chapter 3. Deploying the Load-balancing service in an existing environment | Chapter 3. Deploying the Load-balancing service in an existing environment Deploying the Load-balancing service (octavia) to an existing Red Hat OpenStack Services on OpenShift (RHOSO) environment consists of performing the required steps for network configuration and security and then deploying the Load-balancing service in the RHOSO control plane. Overview You must perform the steps in the following procedures to deploy the Load-balancing service (octavia): Add the Load-balancing service interface to the configuration policy . Add the network attachment definition for the load-balancing management network . Create a CA passphrase for certificate generation and signing . Deploy the Load-balancing service . Important The steps in these procedures provide sample values that you add to the required CRs. The actual values that you provide will depend on your particular hardware configuration and local networking policies. 3.1. Adding the Load-balancing service interface to the configuration policy You begin the process of deploying the Load-balancing service (octavia) in a pre-existing Red Hat OpenStack Services on OpenShift (RHOSO) environment by adding the required interface to the node network configuration policy ( nncp ). Prerequisites You have the oc command line tool installed on your workstation. You are logged on to a workstation that has access to the RHOSO control plane as a user with cluster-admin privileges. You have a unique VLAN ID for the load-balancing management network, lb-mgmt-net . Procedure Add the VLAN interface as a port to the octbr bridge. Performing this step enables pods connected to the octavia network attachment to communicate with pods running on other worker nodes. Because the interface is a VLAN, it isolates the load-balancing management network from other networks that might share the same base interface. Example In this example, the base interface name used is enp6s0 and the VLAN ID used is 24 . Replace these values with ones that are appropriate for your environment. For more information, see Preparing networks for Red Hat OpenStack Services on OpenShift in Deploying Red Hat OpenStack Services on OpenShift . Validation Confirm that the interface was successfully added, by running the following command: $ oc get nncp -n openstack Sample output When successful, you should see output similar to the following: NAME STATUS REASON worker-0 Available SuccessfullyConfigured worker-1 Available SuccessfullyConfigured worker-2 Available SuccessfullyConfigured steps Proceed to Section 3.2, "Adding the network attachment definition of the load-balancing management network" . 3.2. Adding the network attachment definition of the load-balancing management network Adding the network attachment definition for the load-balancing management network is a required step for deploying the Load-balancing service (octavia) in a Red Hat OpenStack Services on OpenShift (RHOSO) environment. The octavia network attachment is required to connect pods that manage load balancer virtual machines (amphorae) and the Open vSwitch pods that are managed by the OVN operator. RHOSO uses the podified Open vSwitch instance to implement the route between the provider network and the management project (tenant) network. This attachment must be a bridgeable interface in the Open vSwitch pod, and must permit communication between other pods on the same node.
The bridge attachment type, in conjunction with the VLAN interface added to the bridge in the NodeNetworkConfigurationPolicy , creates the necessary layer 2 link that enables connectivity across the nodes. Prerequisites You have the oc command line tool installed on your workstation. You are logged on to a workstation that has access to the RHOSO control plane as a user with cluster-admin privileges. Procedure Add the network attachment definition. Example In the following example, a network attachment definition is added to a definition file named octavia-net-attach-def.yaml . Important The IP addresses and CIDRs defined in the network attachment definition are used only on a private network and VLAN. The values shown in this example are only for demonstration purposes. Use values that are appropriate for your environment. cat >> octavia-net-attach-def.yaml << EOF_CAT apiVersion: k8s.cni.cncf.io/v1 kind: NetworkAttachmentDefinition metadata: labels: osp/net: octavia name: octavia namespace: openstack spec: config: | { "cniVersion": "0.3.1", "name": "octavia", "type": "bridge", "bridge": "octbr", "ipam": { "type": "whereabouts", "range": "192.0.2.0/16", "range_start": "192.0.2.30", "range_end": "192.0.2.70", "routes": [ { "dst": "192.1.2.0/16", "gw" : "192.0.2.150" } ] } } EOF_CAT oc apply -n openstack -f octavia-net-attach-def.yaml Verification Confirm that the network attachment for the load-balancing management network is successful. oc get net-attach-def -n openstack Sample output When successful, you should see octavia displayed in the network name list: steps Proceed to Section 3.3, "Creating a CA passphrase for certificate generation and signing" . 3.3. Creating a CA passphrase for certificate generation and signing In Red Hat OpenStack Services on OpenShift (RHOSO) environments, you create a Secret custom resource (CR) which is used to encrypt the generated private key of the Server CA. RHOSO uses dual CAs to make communication between the Load-balancing service (octavia) amphora and its controller more secure. Prerequisites You have the oc command line tool installed on your workstation. You are logged on to a workstation that has access to the RHOSO control plane as a user with cluster-admin privileges. Procedure Generate a Base64-encoded password. Retain the encoded output to use in a later step. Example In this example, the password, my_password is encoded using the Base64 encoding scheme: $ echo -n my_password | base64 Create a Secret CR file on your workstation, for example, octavia-ca-passphrase.yaml . Add the following configuration to octavia-ca-passphrase.yaml : apiVersion: v1 data: server-ca-passphrase: <Base64_password> kind: Secret metadata: name: octavia-ca-passphrase namespace: openstack type: Opaque Replace the <Base64_password> with the Base64-encoded password that you created earlier. Create the Secret CR in the cluster: $ oc create -f octavia-ca-passphrase.yaml Verification Confirm that the Secret CR exists: $ oc describe secret octavia-ca-passphrase -n openstack steps Proceed to Section 3.4, "Deploying the Load-balancing service" . 3.4. Deploying the Load-balancing service To deploy the Red Hat OpenStack Services on OpenShift (RHOSO) Load-balancing service (octavia), you must configure the OVN controller to create a NIC mapping for the provider network as well as add it to the networkAttachments property for each Load-balancing service that controls load balancers (amphorae). Prerequisites You have the oc command line tool installed on your workstation.
You are logged on to a workstation that has access to the RHOSO control plane as a user with cluster-admin privileges. Procedure Update the OpenStackControlPlane custom resource with the required values for the Load-balancing service. Important In the following example, controlplane is the name of the OpenStackControlPlane custom resource (CR). Use the correct name for your OpenStackControlPlane CR. The value for nicMappings must be octavia: octbr . Note the one-character difference between the OVN networkAttachment property and the octavia networkAttachments property. Verification Confirm that the Load-balancing service (octavia) pods are running: USD oc get pods | grep octavia Sample output You should see output similar to the following. The number of entries and their suffixes will vary depending on the details of your environment: octavia-api-5cf9bc78f7-4lmds 2/2 Running 0 42h octavia-healthmanager-5g94j 1/1 Running 0 21h octavia-housekeeping-5gtw8 1/1 Running 0 21h octavia-image-upload-78b4b6c47c-xzdtl 1/1 Running 0 35h octavia-worker-pq55m 1/1 Running 0 21h Access the remote shell for the OpenStackClient pod from your workstation: Confirm that the networks octavia-provider-net and lb-mgmt-net are present: Sample output The network, octavia-provider-net , is the external provider network, and is limited to the RHOSO control plane. The lb-mgmt-net network connects the Load-balancing service to amphora instances. Exit the openstackclient pod: USD exit | [
"get -n openstack --no-headers nncp | cut -f 1 -d ' ' | while read ; do oc patch -n openstack nncp USDREPLY --type=merge --patch ' spec: desiredState: interfaces: - description: Octavia vlan host interface name: enp6s0.24 state: up type: vlan vlan: base-iface: enp6s0 id: 24 - bridge: options: stp: enabled: false port: - name: enp6s0.24 description: Configuring bridge octbr mtu: 1500 name: octbr state: up type: linux-bridge ' done",
"oc get nncp -n openstack",
"NAME STATUS REASON worker-0 Available SuccessfullyConfigured worker-1 Available SuccessfullyConfigured worker-2 Available SuccessfullyConfigured",
"cat >> octavia-net-attach-def.yaml << EOF_CAT apiVersion: k8s.cni.cncf.io/v1 kind: NetworkAttachmentDefinition metadata: labels: osp/net: octavia name: octavia namespace: openstack spec: config: | { \"cniVersion\": \"0.3.1\", \"name\": \"octavia\", \"type\": \"bridge\", \"bridge\": \"octbr\", \"ipam\": { \"type\": \"whereabouts\", \"range\": \"192.0.2.0/16\", \"range_start\": \"192.0.2.30\", \"range_end\": \"192.0.2.70\", \"routes\": [ { \"dst\": \"192.1.2.0/16\", \"gw\" : \"192.0.2.150\" } ] } } EOF_CAT apply -n openstack -f octavia-net-attach-def.yaml",
"get net-attach-def -n openstack",
"NAME AGE ctlplane 25m datacentre 25m internalapi 25m octavia 25m storage 25m tenant 25m",
"echo -n my_password | base64",
"apiVersion: v1 data: server-ca-passphrase: <Base64_password> kind: Secret metadata: name: octavia-ca-passphrase namespace: openstack type: Opaque",
"oc create -f octavia-ca-passphrase.yaml",
"oc describe secret octavia-ca-passphrase -n openstack",
"patch -n openstack openstackcontrolplane controlplane --type=merge --patch ' spec: ovn: template: ovnController: networkAttachment: tenant nicMappings: octavia: octbr octavia: enabled: true template: octaviaHousekeeping: networkAttachments: - octavia octaviaHealthManager: networkAttachments: - octavia octaviaWorker: networkAttachments: - octavia '",
"oc get pods | grep octavia",
"octavia-api-5cf9bc78f7-4lmds 2/2 Running 0 42h octavia-healthmanager-5g94j 1/1 Running 0 21h octavia-housekeeping-5gtw8 1/1 Running 0 21h octavia-image-upload-78b4b6c47c-xzdtl 1/1 Running 0 35h octavia-worker-pq55m 1/1 Running 0 21h",
"oc rsh -n openstack openstackclient",
"openstack network list -f yaml",
"- ID: 2e4fc309-546b-4ac8-9eae-aa8d70a27a9b Name: octavia-provider-net Subnets: - eea45073-6e56-47fd-9153-12f7f49bc115 - ID: 77881d3f-04b0-46cb-931f-d54003cce9f0 Name: lb-mgmt-net Subnets: - e4ab96af-8077-4971-baa4-e0d40a16f55a",
"exit"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_services_on_openshift/18.0/html/configuring_load_balancing_as_a_service/deploy-lb-service_rhoso-lbaas |
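As a follow-on to the Load-balancing service configuration above, the following is a minimal verification sketch. It assumes that the octavia network attachment definition and the octavia-ca-passphrase Secret were created in the openstack namespace as described, and that the load-balancer client plugin is available in the openstackclient pod; adjust names to match your environment.

# Inspect the rendered NetworkAttachmentDefinition and confirm the bridge and IPAM settings:
oc get net-attach-def octavia -n openstack -o yaml

# Confirm that the stored CA passphrase decodes to the value that you encoded earlier:
oc get secret octavia-ca-passphrase -n openstack -o jsonpath='{.data.server-ca-passphrase}' | base64 -d

# From the openstackclient pod, confirm that the amphora provider is registered:
oc rsh -n openstack openstackclient openstack loadbalancer provider list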
Chapter 3. Prerequisites | Chapter 3. Prerequisites Before you can install and register Red Hat Satellite and Capsule, you must set up accounts with Amazon Web Services (AWS) and create and start Red Hat Enterprise Linux instances on AWS. 3.1. Amazon Web Service Assumptions To use this guide, you must have a working knowledge of the following aspects of Amazon Web Services: Creating and accessing Red Hat Enterprise Linux images in AWS Editing network access in AWS Security Creating EC2 instances and how to create EBS volumes Launching instances Importing and exporting virtual machines in AWS Using AWS Direct Connect To install Satellite in an AWS environment, you must ensure that your AWS setup meets the System Requirements in Installing Satellite Server in a Connected Network Environment . To install Capsule in an AWS environment, you must ensure that your AWS setup meets the System Requirements in Installing Capsule Server . For more information about Amazon Web Services and terminology, see Amazon Elastic Compute Cloud Documentation . For more information about Amazon Web Services Direct Connect, see What is AWS Direct Connect? 3.2. Red Hat Cloud prerequisites To use this guide, you must complete the following steps: Register with Red Hat Cloud Access. Migrate any Red Hat subscriptions that you want to use. Create an AWS instance and deploy a Red Hat Enterprise Linux virtual machine to the instance. Ensure that your subscriptions are eligible for transfer to Red Hat Cloud. For more information, see Red Hat Cloud Access Program Details . For more information about deploying Red Hat Enterprise Linux in AWS, see How to Locate Red Hat Cloud Access Gold Images on AWS EC2 . 3.3. Red Hat Satellite-specific prerequisites Ensure that the Amazon EC2 instance type meets or exceeds the System Requirements in Installing Satellite Server in a Connected Network Environment . For the best performance, use an AWS storage optimized instance . Use Storage Requirements in Installing Satellite Server in a Connected Network Environment to understand and assign the correct storage to your AWS EBS volumes. Store the synced content on an EBS volume that is separate from the boot volume. Mount the synced content EBS volume separately in the operating system. Optional: store other data on a separate EBS volume. If you want Satellite Server and Capsule Server to communicate using external DNS hostnames, open the required ports for communication in the AWS Security Group that is associated with the instance. 3.4. Preparing for the Red Hat Satellite Installation In your AWS environment, complete the following steps: Launch an EC2 instance of a Red Hat Enterprise Linux AMI. Connect to the newly created instance. If you use a Red Hat Gold Image, remove the RHUI client and set the enabled parameter in the product-id.conf file to 1 . | [
"yum remove -y rh-amazon-rhui-client* yum clean all cat << EOF > /etc/yum/pluginconf.d/product-id.conf > [main] > enabled=1 > EOF"
] | https://docs.redhat.com/en/documentation/red_hat_satellite/6.11/html/deploying_red_hat_satellite_on_amazon_web_services/prerequisites |
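As a companion to the EC2 launch step above, the following sketch launches a Red Hat Enterprise Linux instance with the AWS CLI. The AMI ID, key pair, security group, subnet, and instance type are placeholders; substitute values that meet the Satellite system and storage requirements referenced in this chapter.

# Launch a RHEL instance with an additional EBS volume for synced content (all IDs below are placeholders):
aws ec2 run-instances \
  --image-id ami-0123456789abcdef0 \
  --instance-type m5.2xlarge \
  --key-name satellite-key \
  --security-group-ids sg-0123456789abcdef0 \
  --subnet-id subnet-0123456789abcdef0 \
  --block-device-mappings '[{"DeviceName":"/dev/sdb","Ebs":{"VolumeSize":500,"VolumeType":"gp3"}}]'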
function::euid | function::euid Name function::euid - Return the effective uid of a target process Synopsis Arguments None Description Returns the effective user ID of the target process. | [
"euid:long()"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-euid |
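A minimal usage sketch for the euid tapset function follows. It assumes that SystemTap and the matching kernel debuginfo are installed and that the syscall tapset is available; press Ctrl+C to stop the probe.

# Print the executable name, PID, and effective UID for each openat() system call:
stap -e 'probe syscall.openat { printf("%s pid=%d euid=%d\n", execname(), pid(), euid()) }'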
Chapter 3. Post-deployment IPv6 operations | Chapter 3. Post-deployment IPv6 operations After you deploy the overcloud with IPv6 networking, you must perform some additional configuration. Prerequisites A successful undercloud installation. For more information, see Installing director on the undercloud . A successful overcloud deployment. For more information, see Creating a basic overcloud with CLI tools . 3.1. Creating an IPv6 project network on the overcloud The overcloud requires an IPv6-based Project network for instances. Source the overcloudrc file and create an initial Project network in neutron . Prerequisites A successful undercloud installation. For more information, see Installing director on the undercloud . A successful overcloud deployment. For more information, see Creating a basic overcloud with CLI tools . Your network supports IPv6-native VLANs as well as IPv4-native VLANs. Procedure Source the overcloud credentials file: Create a network and subnet: This creates a basic neutron network called default . Verification steps Verify that the network was created successfully: 3.2. Creating an IPv6 public network on the overcloud After you configure the node interfaces to use the External network, you must create this network on the overcloud to enable network access. Prerequisites A successful undercloud installation. For more information, see Installing director on the undercloud . A successful overcloud deployment. For more information, see Creating a basic overcloud with CLI tools . Your network supports IPv6-native VLANs as well as IPv4-native VLANs. Procedure Create an external network and subnet: This command creates a network called public that provides an allocation pool of over 65000 IPv6 addresses for our instances. Create a router to route instance traffic to the External network. | [
"source ~/overcloudrc",
"openstack network create default --external --provider-physical-network datacentre --provider-network-type vlan --provider-segment 101 openstack subnet create default --subnet-range 2001:db8:fd00:6000::/64 --ipv6-address-mode slaac --ipv6-ra-mode slaac --ip-version 6 --network default",
"openstack network list openstack subnet list",
"openstack network create public --external --provider-physical-network datacentre --provider-network-type vlan --provider-segment 100 openstack subnet create public --network public --subnet-range 2001:db8:0:2::/64 --ip-version 6 --gateway 2001:db8::1 --allocation-pool start=2001:db8:0:2::2,end=2001:db8:0:2::ffff --ipv6-address-mode slaac --ipv6-ra-mode slaac",
"openstack router create public-router openstack router set public-router --external-gateway public"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/ipv6_networking_for_the_overcloud/assembly_post-deployment-ipv6-operations |
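After creating the router, the following verification sketch can help confirm the IPv6 configuration; the exact output depends on your environment.

# Confirm that the router exists and that its external gateway is on the public network:
openstack router show public-router -f yaml

# Confirm that the public subnet was created with SLAAC address and router advertisement modes:
openstack subnet show public -c ipv6_address_mode -c ipv6_ra_mode -f yaml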
Chapter 2. Node [v1] | Chapter 2. Node [v1] Description Node is a worker node in Kubernetes. Each node will have a unique identifier in the cache (i.e. in etcd). Type object 2.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object NodeSpec describes the attributes that a node is created with. status object NodeStatus is information about the current status of a node. 2.1.1. .spec Description NodeSpec describes the attributes that a node is created with. Type object Property Type Description configSource object NodeConfigSource specifies a source of node configuration. Exactly one subfield (excluding metadata) must be non-nil. This API is deprecated since 1.22 externalID string Deprecated. Not all kubelets will set this field. Remove field after 1.13. see: https://issues.k8s.io/61966 podCIDR string PodCIDR represents the pod IP range assigned to the node. podCIDRs array (string) podCIDRs represents the IP ranges assigned to the node for usage by Pods on that node. If this field is specified, the 0th entry must match the podCIDR field. It may contain at most 1 value for each of IPv4 and IPv6. providerID string ID of the node assigned by the cloud provider in the format: <ProviderName>://<ProviderSpecificNodeID> taints array If specified, the node's taints. taints[] object The node this Taint is attached to has the "effect" on any pod that does not tolerate the Taint. unschedulable boolean Unschedulable controls node schedulability of new pods. By default, node is schedulable. More info: https://kubernetes.io/docs/concepts/nodes/node/#manual-node-administration 2.1.2. .spec.configSource Description NodeConfigSource specifies a source of node configuration. Exactly one subfield (excluding metadata) must be non-nil. This API is deprecated since 1.22 Type object Property Type Description configMap object ConfigMapNodeConfigSource contains the information to reference a ConfigMap as a config source for the Node. This API is deprecated since 1.22: https://git.k8s.io/enhancements/keps/sig-node/281-dynamic-kubelet-configuration 2.1.3. .spec.configSource.configMap Description ConfigMapNodeConfigSource contains the information to reference a ConfigMap as a config source for the Node. This API is deprecated since 1.22: https://git.k8s.io/enhancements/keps/sig-node/281-dynamic-kubelet-configuration Type object Required namespace name kubeletConfigKey Property Type Description kubeletConfigKey string KubeletConfigKey declares which key of the referenced ConfigMap corresponds to the KubeletConfiguration structure This field is required in all cases. name string Name is the metadata.name of the referenced ConfigMap. This field is required in all cases. namespace string Namespace is the metadata.namespace of the referenced ConfigMap. 
This field is required in all cases. resourceVersion string ResourceVersion is the metadata.ResourceVersion of the referenced ConfigMap. This field is forbidden in Node.Spec, and required in Node.Status. uid string UID is the metadata.UID of the referenced ConfigMap. This field is forbidden in Node.Spec, and required in Node.Status. 2.1.4. .spec.taints Description If specified, the node's taints. Type array 2.1.5. .spec.taints[] Description The node this Taint is attached to has the "effect" on any pod that does not tolerate the Taint. Type object Required key effect Property Type Description effect string Required. The effect of the taint on pods that do not tolerate the taint. Valid effects are NoSchedule, PreferNoSchedule and NoExecute. Possible enum values: - "NoExecute" Evict any already-running pods that do not tolerate the taint. Currently enforced by NodeController. - "NoSchedule" Do not allow new pods to schedule onto the node unless they tolerate the taint, but allow all pods submitted to Kubelet without going through the scheduler to start, and allow all already-running pods to continue running. Enforced by the scheduler. - "PreferNoSchedule" Like TaintEffectNoSchedule, but the scheduler tries not to schedule new pods onto the node, rather than prohibiting new pods from scheduling onto the node entirely. Enforced by the scheduler. key string Required. The taint key to be applied to a node. timeAdded Time TimeAdded represents the time at which the taint was added. It is only written for NoExecute taints. value string The taint value corresponding to the taint key. 2.1.6. .status Description NodeStatus is information about the current status of a node. Type object Property Type Description addresses array List of addresses reachable to the node. Queried from cloud provider, if available. More info: https://kubernetes.io/docs/concepts/nodes/node/#addresses Note: This field is declared as mergeable, but the merge key is not sufficiently unique, which can cause data corruption when it is merged. Callers should instead use a full-replacement patch. See https://pr.k8s.io/79391 for an example. addresses[] object NodeAddress contains information for the node's address. allocatable object (Quantity) Allocatable represents the resources of a node that are available for scheduling. Defaults to Capacity. capacity object (Quantity) Capacity represents the total resources of a node. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#capacity conditions array Conditions is an array of current observed node conditions. More info: https://kubernetes.io/docs/concepts/nodes/node/#condition conditions[] object NodeCondition contains condition information for a node. config object NodeConfigStatus describes the status of the config assigned by Node.Spec.ConfigSource. daemonEndpoints object NodeDaemonEndpoints lists ports opened by daemons running on the Node. images array List of container images on this node images[] object Describe a container image nodeInfo object NodeSystemInfo is a set of ids/uuids to uniquely identify the node. phase string NodePhase is the recently observed lifecycle phase of the node. More info: https://kubernetes.io/docs/concepts/nodes/node/#phase The field is never populated, and now is deprecated. Possible enum values: - "Pending" means the node has been created/added by the system, but not configured. - "Running" means the node has been configured and has Kubernetes components running. - "Terminated" means the node has been removed from the cluster. 
volumesAttached array List of volumes that are attached to the node. volumesAttached[] object AttachedVolume describes a volume attached to a node volumesInUse array (string) List of attachable volumes in use (mounted) by the node. 2.1.7. .status.addresses Description List of addresses reachable to the node. Queried from cloud provider, if available. More info: https://kubernetes.io/docs/concepts/nodes/node/#addresses Note: This field is declared as mergeable, but the merge key is not sufficiently unique, which can cause data corruption when it is merged. Callers should instead use a full-replacement patch. See https://pr.k8s.io/79391 for an example. Type array 2.1.8. .status.addresses[] Description NodeAddress contains information for the node's address. Type object Required type address Property Type Description address string The node address. type string Node address type, one of Hostname, ExternalIP or InternalIP. 2.1.9. .status.conditions Description Conditions is an array of current observed node conditions. More info: https://kubernetes.io/docs/concepts/nodes/node/#condition Type array 2.1.10. .status.conditions[] Description NodeCondition contains condition information for a node. Type object Required type status Property Type Description lastHeartbeatTime Time Last time we got an update on a given condition. lastTransitionTime Time Last time the condition transit from one status to another. message string Human readable message indicating details about last transition. reason string (brief) reason for the condition's last transition. status string Status of the condition, one of True, False, Unknown. type string Type of node condition. 2.1.11. .status.config Description NodeConfigStatus describes the status of the config assigned by Node.Spec.ConfigSource. Type object Property Type Description active object NodeConfigSource specifies a source of node configuration. Exactly one subfield (excluding metadata) must be non-nil. This API is deprecated since 1.22 assigned object NodeConfigSource specifies a source of node configuration. Exactly one subfield (excluding metadata) must be non-nil. This API is deprecated since 1.22 error string Error describes any problems reconciling the Spec.ConfigSource to the Active config. Errors may occur, for example, attempting to checkpoint Spec.ConfigSource to the local Assigned record, attempting to checkpoint the payload associated with Spec.ConfigSource, attempting to load or validate the Assigned config, etc. Errors may occur at different points while syncing config. Earlier errors (e.g. download or checkpointing errors) will not result in a rollback to LastKnownGood, and may resolve across Kubelet retries. Later errors (e.g. loading or validating a checkpointed config) will result in a rollback to LastKnownGood. In the latter case, it is usually possible to resolve the error by fixing the config assigned in Spec.ConfigSource. You can find additional information for debugging by searching the error message in the Kubelet log. Error is a human-readable description of the error state; machines can check whether or not Error is empty, but should not rely on the stability of the Error text across Kubelet versions. lastKnownGood object NodeConfigSource specifies a source of node configuration. Exactly one subfield (excluding metadata) must be non-nil. This API is deprecated since 1.22 2.1.12. .status.config.active Description NodeConfigSource specifies a source of node configuration. Exactly one subfield (excluding metadata) must be non-nil. 
This API is deprecated since 1.22 Type object Property Type Description configMap object ConfigMapNodeConfigSource contains the information to reference a ConfigMap as a config source for the Node. This API is deprecated since 1.22: https://git.k8s.io/enhancements/keps/sig-node/281-dynamic-kubelet-configuration 2.1.13. .status.config.active.configMap Description ConfigMapNodeConfigSource contains the information to reference a ConfigMap as a config source for the Node. This API is deprecated since 1.22: https://git.k8s.io/enhancements/keps/sig-node/281-dynamic-kubelet-configuration Type object Required namespace name kubeletConfigKey Property Type Description kubeletConfigKey string KubeletConfigKey declares which key of the referenced ConfigMap corresponds to the KubeletConfiguration structure This field is required in all cases. name string Name is the metadata.name of the referenced ConfigMap. This field is required in all cases. namespace string Namespace is the metadata.namespace of the referenced ConfigMap. This field is required in all cases. resourceVersion string ResourceVersion is the metadata.ResourceVersion of the referenced ConfigMap. This field is forbidden in Node.Spec, and required in Node.Status. uid string UID is the metadata.UID of the referenced ConfigMap. This field is forbidden in Node.Spec, and required in Node.Status. 2.1.14. .status.config.assigned Description NodeConfigSource specifies a source of node configuration. Exactly one subfield (excluding metadata) must be non-nil. This API is deprecated since 1.22 Type object Property Type Description configMap object ConfigMapNodeConfigSource contains the information to reference a ConfigMap as a config source for the Node. This API is deprecated since 1.22: https://git.k8s.io/enhancements/keps/sig-node/281-dynamic-kubelet-configuration 2.1.15. .status.config.assigned.configMap Description ConfigMapNodeConfigSource contains the information to reference a ConfigMap as a config source for the Node. This API is deprecated since 1.22: https://git.k8s.io/enhancements/keps/sig-node/281-dynamic-kubelet-configuration Type object Required namespace name kubeletConfigKey Property Type Description kubeletConfigKey string KubeletConfigKey declares which key of the referenced ConfigMap corresponds to the KubeletConfiguration structure This field is required in all cases. name string Name is the metadata.name of the referenced ConfigMap. This field is required in all cases. namespace string Namespace is the metadata.namespace of the referenced ConfigMap. This field is required in all cases. resourceVersion string ResourceVersion is the metadata.ResourceVersion of the referenced ConfigMap. This field is forbidden in Node.Spec, and required in Node.Status. uid string UID is the metadata.UID of the referenced ConfigMap. This field is forbidden in Node.Spec, and required in Node.Status. 2.1.16. .status.config.lastKnownGood Description NodeConfigSource specifies a source of node configuration. Exactly one subfield (excluding metadata) must be non-nil. This API is deprecated since 1.22 Type object Property Type Description configMap object ConfigMapNodeConfigSource contains the information to reference a ConfigMap as a config source for the Node. This API is deprecated since 1.22: https://git.k8s.io/enhancements/keps/sig-node/281-dynamic-kubelet-configuration 2.1.17. .status.config.lastKnownGood.configMap Description ConfigMapNodeConfigSource contains the information to reference a ConfigMap as a config source for the Node. 
This API is deprecated since 1.22: https://git.k8s.io/enhancements/keps/sig-node/281-dynamic-kubelet-configuration Type object Required namespace name kubeletConfigKey Property Type Description kubeletConfigKey string KubeletConfigKey declares which key of the referenced ConfigMap corresponds to the KubeletConfiguration structure This field is required in all cases. name string Name is the metadata.name of the referenced ConfigMap. This field is required in all cases. namespace string Namespace is the metadata.namespace of the referenced ConfigMap. This field is required in all cases. resourceVersion string ResourceVersion is the metadata.ResourceVersion of the referenced ConfigMap. This field is forbidden in Node.Spec, and required in Node.Status. uid string UID is the metadata.UID of the referenced ConfigMap. This field is forbidden in Node.Spec, and required in Node.Status. 2.1.18. .status.daemonEndpoints Description NodeDaemonEndpoints lists ports opened by daemons running on the Node. Type object Property Type Description kubeletEndpoint object DaemonEndpoint contains information about a single Daemon endpoint. 2.1.19. .status.daemonEndpoints.kubeletEndpoint Description DaemonEndpoint contains information about a single Daemon endpoint. Type object Required Port Property Type Description Port integer Port number of the given endpoint. 2.1.20. .status.images Description List of container images on this node Type array 2.1.21. .status.images[] Description Describe a container image Type object Property Type Description names array (string) Names by which this image is known. e.g. ["kubernetes.example/hyperkube:v1.0.7", "cloud-vendor.registry.example/cloud-vendor/hyperkube:v1.0.7"] sizeBytes integer The size of the image in bytes. 2.1.22. .status.nodeInfo Description NodeSystemInfo is a set of ids/uuids to uniquely identify the node. Type object Required machineID systemUUID bootID kernelVersion osImage containerRuntimeVersion kubeletVersion kubeProxyVersion operatingSystem architecture Property Type Description architecture string The Architecture reported by the node bootID string Boot ID reported by the node. containerRuntimeVersion string ContainerRuntime Version reported by the node through runtime remote API (e.g. containerd://1.4.2). kernelVersion string Kernel Version reported by the node from 'uname -r' (e.g. 3.16.0-0.bpo.4-amd64). kubeProxyVersion string KubeProxy Version reported by the node. kubeletVersion string Kubelet Version reported by the node. machineID string MachineID reported by the node. For unique machine identification in the cluster this field is preferred. Learn more from man(5) machine-id: http://man7.org/linux/man-pages/man5/machine-id.5.html operatingSystem string The Operating System reported by the node osImage string OS Image reported by the node from /etc/os-release (e.g. Debian GNU/Linux 7 (wheezy)). systemUUID string SystemUUID reported by the node. For unique machine identification MachineID is preferred. This field is specific to Red Hat hosts https://access.redhat.com/documentation/en-us/red_hat_subscription_management/1/html/rhsm/uuid 2.1.23. .status.volumesAttached Description List of volumes that are attached to the node. Type array 2.1.24. .status.volumesAttached[] Description AttachedVolume describes a volume attached to a node Type object Required name devicePath Property Type Description devicePath string DevicePath represents the device path where the volume should be available name string Name of the attached volume 2.2. 
API endpoints The following API endpoints are available: /api/v1/nodes DELETE : delete collection of Node GET : list or watch objects of kind Node POST : create a Node /api/v1/watch/nodes GET : watch individual changes to a list of Node. deprecated: use the 'watch' parameter with a list operation instead. /api/v1/nodes/{name} DELETE : delete a Node GET : read the specified Node PATCH : partially update the specified Node PUT : replace the specified Node /api/v1/watch/nodes/{name} GET : watch changes to an object of kind Node. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. /api/v1/nodes/{name}/status GET : read status of the specified Node PATCH : partially update status of the specified Node PUT : replace status of the specified Node 2.2.1. /api/v1/nodes Table 2.1. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of Node Table 2.2. Query parameters Parameter Type Description continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. 
Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. Table 2.3. Body parameters Parameter Type Description body DeleteOptions schema Table 2.4. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind Node Table 2.5. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. 
Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. 
Table 2.6. HTTP responses HTTP code Reponse body 200 - OK NodeList schema 401 - Unauthorized Empty HTTP method POST Description create a Node Table 2.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.8. Body parameters Parameter Type Description body Node schema Table 2.9. HTTP responses HTTP code Reponse body 200 - OK Node schema 201 - Created Node schema 202 - Accepted Node schema 401 - Unauthorized Empty 2.2.2. /api/v1/watch/nodes Table 2.10. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. 
Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch individual changes to a list of Node. deprecated: use the 'watch' parameter with a list operation instead. Table 2.11. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 2.2.3. /api/v1/nodes/{name} Table 2.12. Global path parameters Parameter Type Description name string name of the Node Table 2.13. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. 
HTTP method DELETE Description delete a Node Table 2.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 2.15. Body parameters Parameter Type Description body DeleteOptions schema Table 2.16. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified Node Table 2.17. HTTP responses HTTP code Reponse body 200 - OK Node schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified Node Table 2.18. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. 
This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 2.19. Body parameters Parameter Type Description body Patch schema Table 2.20. HTTP responses HTTP code Reponse body 200 - OK Node schema 201 - Created Node schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified Node Table 2.21. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.22. Body parameters Parameter Type Description body Node schema Table 2.23. HTTP responses HTTP code Reponse body 200 - OK Node schema 201 - Created Node schema 401 - Unauthorized Empty 2.2.4. /api/v1/watch/nodes/{name} Table 2.24. Global path parameters Parameter Type Description name string name of the Node Table 2.25. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. 
Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. 
watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch changes to an object of kind Node. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 2.26. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 2.2.5. /api/v1/nodes/{name}/status Table 2.27. Global path parameters Parameter Type Description name string name of the Node Table 2.28. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read status of the specified Node Table 2.29. HTTP responses HTTP code Reponse body 200 - OK Node schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified Node Table 2.30. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 2.31. Body parameters Parameter Type Description body Patch schema Table 2.32. HTTP responses HTTP code Reponse body 200 - OK Node schema 201 - Created Node schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified Node Table 2.33. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or equal to 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.34. Body parameters Parameter Type Description body Node schema Table 2.35. HTTP responses HTTP code Response body 200 - OK Node schema 201 - Created Node schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/node_apis/node-v1 |
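The limit and continue parameters described above can be exercised with any HTTP client. The following Java sketch is illustrative only: it assumes the API server is reachable without extra TLS or authentication setup, for example through a local oc proxy or kubectl proxy listening on 127.0.0.1:8001, and it pulls the continue token out of the response body with a regular expression rather than a JSON parser or a Kubernetes client library.

import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class NodeListPager {

    // Assumption: the API is proxied locally, so no bearer token or TLS setup is shown here.
    private static final String BASE = "http://127.0.0.1:8001";

    // Crude extraction of metadata.continue from the JSON body; a real client would parse JSON.
    private static final Pattern CONTINUE = Pattern.compile("\"continue\"\\s*:\\s*\"([^\"]+)\"");

    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        String continueToken = null;
        int page = 0;
        do {
            String url = BASE + "/api/v1/nodes?limit=50"
                    + (continueToken == null ? ""
                       : "&continue=" + URLEncoder.encode(continueToken, StandardCharsets.UTF_8));
            HttpRequest request = HttpRequest.newBuilder(URI.create(url)).GET().build();
            HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
            System.out.printf("page %d -> HTTP %d, %d bytes%n",
                    ++page, response.statusCode(), response.body().length());
            if (response.statusCode() != 200) {
                // For example 410 ResourceExpired: the continue token is no longer valid and
                // the list must be restarted without it, as described above.
                break;
            }
            Matcher m = CONTINUE.matcher(response.body());
            continueToken = m.find() ? m.group(1) : null;
        } while (continueToken != null && !continueToken.isEmpty());
    }
}

Each page is requested with the same query parameters plus the continue token from the previous response, which matches the consistent-snapshot contract described for the continue and limit parameters above.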
Chapter 2. Eclipse Temurin features | Chapter 2. Eclipse Temurin features Eclipse Temurin does not contain structural changes from the upstream distribution of OpenJDK. For the list of changes and security fixes included in the latest OpenJDK 17 release of Eclipse Temurin, see OpenJDK 17.0.4 Released . New features and enhancements Review the following release notes to understand the new features and feature enhancements included with the Eclipse Temurin 17.0.4 release: HTTPS channel binding support for Java Generic Security Services (GSS) or Kerberos The OpenJDK 17.0.4 release supports TLS channel binding tokens when Negotiate selects Kerberos authentication over HTTPS by using javax.net.HttpsURLConnection . Channel binding tokens help protect against man-in-the-middle (MITM) attacks, in which an attacker interposes itself between a client and a server. A channel binding token is an enhanced form of security that can mitigate certain kinds of socially engineered MITM attacks by binding the secure channel itself, such as the TLS server certificate, to the higher-level authentication credentials, such as a username and a password. If the server detects that a MITM has interposed itself between it and a client, the server shuts down the connection. The jdk.https.negotiate.cbt system property controls this feature. See Misc HTTP URL stream protocol handler properties (Oracle documentation) and JDK-8285240 (JDK Bug System) . Incorrect handling of quoted arguments in ProcessBuilder Before the OpenJDK 17.0.4 release, arguments to ProcessBuilder on Microsoft Windows that contained opening double quotation marks ("), a backslash (\), and closing double quotation marks (") caused the command to fail. For example, the command prompt on Microsoft Windows would not correctly process the argument "C:\\Program Files\" , because the argument ends with a backslash followed by the closing double quotation marks. The OpenJDK 17.0.4 release resolves this issue by restoring the earlier, required behavior for arguments to ProcessBuilder that contain double quotation marks: ProcessBuilder no longer applies any special treatment to an argument that includes a backslash (\) before the closing double quotation marks. See JDK-8283137 (JDK Bug System) . Default JDK compressor closes when IOException is encountered The OpenJDK 17.0.4 release modifies the DeflaterOutputStream.close() and GZIPOutputStream.finish() methods so that they close the associated default JDK compressor before propagating a Throwable up the stack. The release also modifies the ZipOutputStream.closeEntry() method so that it closes the associated default JDK compressor before propagating an IOException that is not of type ZipException up the stack. See JDK-8278386 (JDK Bug System) . New system property to disable Microsoft Windows Alternate Data Stream support in java.io.File The Microsoft Windows implementation of java.io.File provides access to NTFS Alternate Data Streams (ADS) by default. These streams follow the format filename:streamname . The OpenJDK 17.0.4 release adds the system property jdk.io.File.enableADS ; setting it to false disables ADS support in java.io.File . Important Disabling ADS support in java.io.File results in stricter path checking that prevents the use of special device files, such as NUL: . See JDK-8285660 (JDK Bug System) . 
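The ProcessBuilder change is easiest to see with an argument whose quoted form ends in a backslash immediately before the closing quotation mark, such as a Windows directory path. The following sketch is a hypothetical illustration, not part of the release notes: the cmd.exe /c dir command and the path are arbitrary examples, and the program is only meaningful on Microsoft Windows.

import java.io.IOException;
import java.util.List;

public class QuotedArgumentDemo {

    public static void main(String[] args) throws IOException, InterruptedException {
        // A directory path that, once quoted, ends with a backslash followed by
        // the closing double quotation mark: "C:\Program Files\"
        String path = "C:\\Program Files\\";

        // On Microsoft Windows, ProcessBuilder re-quotes this argument when it builds
        // the command line for cmd.exe; 17.0.4 restores the earlier pass-through behavior
        // for such arguments.
        ProcessBuilder builder = new ProcessBuilder(List.of("cmd.exe", "/c", "dir", path));
        builder.inheritIO();

        int exitCode = builder.start().waitFor();
        System.out.println("dir exited with code " + exitCode);
    }
}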
Revised on 2024-05-03 15:35:22 UTC | null | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/17/html/release_notes_for_eclipse_temurin_17.0.4/openjdk-temurin-features-17-0-4_openjdk |
Chapter 3. Authentication [config.openshift.io/v1] | Chapter 3. Authentication [config.openshift.io/v1] Description Authentication specifies cluster-wide settings for authentication (like OAuth and webhook token authenticators). The canonical name of an instance is cluster . Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required spec 3.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object spec holds user settable values for configuration status object status holds observed values from the cluster. They may not be overridden. 3.1.1. .spec Description spec holds user settable values for configuration Type object Property Type Description oauthMetadata object oauthMetadata contains the discovery endpoint data for OAuth 2.0 Authorization Server Metadata for an external OAuth server. This discovery document can be viewed from its served location: oc get --raw '/.well-known/oauth-authorization-server' For further details, see the IETF Draft: https://tools.ietf.org/html/draft-ietf-oauth-discovery-04#section-2 If oauthMetadata.name is non-empty, this value has precedence over any metadata reference stored in status. The key "oauthMetadata" is used to locate the data. If specified and the config map or expected key is not found, no metadata is served. If the specified metadata is not valid, no metadata is served. The namespace for this config map is openshift-config. serviceAccountIssuer string serviceAccountIssuer is the identifier of the bound service account token issuer. The default is https://kubernetes.default.svc WARNING: Updating this field will not result in immediate invalidation of all bound tokens with the issuer value. Instead, the tokens issued by service account issuer will continue to be trusted for a time period chosen by the platform (currently set to 24h). This time period is subject to change over time. This allows internal components to transition to use new service account issuer without service distruption. type string type identifies the cluster managed, user facing authentication mode in use. Specifically, it manages the component that responds to login attempts. The default is IntegratedOAuth. webhookTokenAuthenticator object webhookTokenAuthenticator configures a remote token reviewer. These remote authentication webhooks can be used to verify bearer tokens via the tokenreviews.authentication.k8s.io REST API. This is required to honor bearer tokens that are provisioned by an external authentication service. webhookTokenAuthenticators array webhookTokenAuthenticators is DEPRECATED, setting it has no effect. 
webhookTokenAuthenticators[] object deprecatedWebhookTokenAuthenticator holds the necessary configuration options for a remote token authenticator. It's the same as WebhookTokenAuthenticator but it's missing the 'required' validation on KubeConfig field. 3.1.2. .spec.oauthMetadata Description oauthMetadata contains the discovery endpoint data for OAuth 2.0 Authorization Server Metadata for an external OAuth server. This discovery document can be viewed from its served location: oc get --raw '/.well-known/oauth-authorization-server' For further details, see the IETF Draft: https://tools.ietf.org/html/draft-ietf-oauth-discovery-04#section-2 If oauthMetadata.name is non-empty, this value has precedence over any metadata reference stored in status. The key "oauthMetadata" is used to locate the data. If specified and the config map or expected key is not found, no metadata is served. If the specified metadata is not valid, no metadata is served. The namespace for this config map is openshift-config. Type object Required name Property Type Description name string name is the metadata.name of the referenced config map 3.1.3. .spec.webhookTokenAuthenticator Description webhookTokenAuthenticator configures a remote token reviewer. These remote authentication webhooks can be used to verify bearer tokens via the tokenreviews.authentication.k8s.io REST API. This is required to honor bearer tokens that are provisioned by an external authentication service. Type object Required kubeConfig Property Type Description kubeConfig object kubeConfig references a secret that contains kube config file data which describes how to access the remote webhook service. The namespace for the referenced secret is openshift-config. For further details, see: https://kubernetes.io/docs/reference/access-authn-authz/authentication/#webhook-token-authentication The key "kubeConfig" is used to locate the data. If the secret or expected key is not found, the webhook is not honored. If the specified kube config data is not valid, the webhook is not honored. 3.1.4. .spec.webhookTokenAuthenticator.kubeConfig Description kubeConfig references a secret that contains kube config file data which describes how to access the remote webhook service. The namespace for the referenced secret is openshift-config. For further details, see: https://kubernetes.io/docs/reference/access-authn-authz/authentication/#webhook-token-authentication The key "kubeConfig" is used to locate the data. If the secret or expected key is not found, the webhook is not honored. If the specified kube config data is not valid, the webhook is not honored. Type object Required name Property Type Description name string name is the metadata.name of the referenced secret 3.1.5. .spec.webhookTokenAuthenticators Description webhookTokenAuthenticators is DEPRECATED, setting it has no effect. Type array 3.1.6. .spec.webhookTokenAuthenticators[] Description deprecatedWebhookTokenAuthenticator holds the necessary configuration options for a remote token authenticator. It's the same as WebhookTokenAuthenticator but it's missing the 'required' validation on KubeConfig field. Type object Property Type Description kubeConfig object kubeConfig contains kube config file data which describes how to access the remote webhook service. For further details, see: https://kubernetes.io/docs/reference/access-authn-authz/authentication/#webhook-token-authentication The key "kubeConfig" is used to locate the data. If the secret or expected key is not found, the webhook is not honored. 
If the specified kube config data is not valid, the webhook is not honored. The namespace for this secret is determined by the point of use. 3.1.7. .spec.webhookTokenAuthenticators[].kubeConfig Description kubeConfig contains kube config file data which describes how to access the remote webhook service. For further details, see: https://kubernetes.io/docs/reference/access-authn-authz/authentication/#webhook-token-authentication The key "kubeConfig" is used to locate the data. If the secret or expected key is not found, the webhook is not honored. If the specified kube config data is not valid, the webhook is not honored. The namespace for this secret is determined by the point of use. Type object Required name Property Type Description name string name is the metadata.name of the referenced secret 3.1.8. .status Description status holds observed values from the cluster. They may not be overridden. Type object Property Type Description integratedOAuthMetadata object integratedOAuthMetadata contains the discovery endpoint data for OAuth 2.0 Authorization Server Metadata for the in-cluster integrated OAuth server. This discovery document can be viewed from its served location: oc get --raw '/.well-known/oauth-authorization-server' For further details, see the IETF Draft: https://tools.ietf.org/html/draft-ietf-oauth-discovery-04#section-2 This contains the observed value based on cluster state. An explicitly set value in spec.oauthMetadata has precedence over this field. This field has no meaning if authentication spec.type is not set to IntegratedOAuth. The key "oauthMetadata" is used to locate the data. If the config map or expected key is not found, no metadata is served. If the specified metadata is not valid, no metadata is served. The namespace for this config map is openshift-config-managed. 3.1.9. .status.integratedOAuthMetadata Description integratedOAuthMetadata contains the discovery endpoint data for OAuth 2.0 Authorization Server Metadata for the in-cluster integrated OAuth server. This discovery document can be viewed from its served location: oc get --raw '/.well-known/oauth-authorization-server' For further details, see the IETF Draft: https://tools.ietf.org/html/draft-ietf-oauth-discovery-04#section-2 This contains the observed value based on cluster state. An explicitly set value in spec.oauthMetadata has precedence over this field. This field has no meaning if authentication spec.type is not set to IntegratedOAuth. The key "oauthMetadata" is used to locate the data. If the config map or expected key is not found, no metadata is served. If the specified metadata is not valid, no metadata is served. The namespace for this config map is openshift-config-managed. Type object Required name Property Type Description name string name is the metadata.name of the referenced config map 3.2. API endpoints The following API endpoints are available: /apis/config.openshift.io/v1/authentications DELETE : delete collection of Authentication GET : list objects of kind Authentication POST : create an Authentication /apis/config.openshift.io/v1/authentications/{name} DELETE : delete an Authentication GET : read the specified Authentication PATCH : partially update the specified Authentication PUT : replace the specified Authentication /apis/config.openshift.io/v1/authentications/{name}/status GET : read status of the specified Authentication PATCH : partially update status of the specified Authentication PUT : replace status of the specified Authentication 3.2.1. 
/apis/config.openshift.io/v1/authentications Table 3.1. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of Authentication Table 3.2. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. 
resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 3.3. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind Authentication Table 3.4. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. 
Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. 
The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 3.5. HTTP responses HTTP code Reponse body 200 - OK AuthenticationList schema 401 - Unauthorized Empty HTTP method POST Description create an Authentication Table 3.6. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.7. Body parameters Parameter Type Description body Authentication schema Table 3.8. HTTP responses HTTP code Reponse body 200 - OK Authentication schema 201 - Created Authentication schema 202 - Accepted Authentication schema 401 - Unauthorized Empty 3.2.2. /apis/config.openshift.io/v1/authentications/{name} Table 3.9. Global path parameters Parameter Type Description name string name of the Authentication Table 3.10. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete an Authentication Table 3.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 3.12. Body parameters Parameter Type Description body DeleteOptions schema Table 3.13. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified Authentication Table 3.14. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 3.15. HTTP responses HTTP code Reponse body 200 - OK Authentication schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified Authentication Table 3.16. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. 
The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 3.17. Body parameters Parameter Type Description body Patch schema Table 3.18. HTTP responses HTTP code Reponse body 200 - OK Authentication schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified Authentication Table 3.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.20. Body parameters Parameter Type Description body Authentication schema Table 3.21. HTTP responses HTTP code Reponse body 200 - OK Authentication schema 201 - Created Authentication schema 401 - Unauthorized Empty 3.2.3. /apis/config.openshift.io/v1/authentications/{name}/status Table 3.22. Global path parameters Parameter Type Description name string name of the Authentication Table 3.23. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read status of the specified Authentication Table 3.24. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 3.25. HTTP responses HTTP code Reponse body 200 - OK Authentication schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified Authentication Table 3.26. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. 
The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 3.27. Body parameters Parameter Type Description body Patch schema Table 3.28. HTTP responses HTTP code Reponse body 200 - OK Authentication schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified Authentication Table 3.29. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.30. Body parameters Parameter Type Description body Authentication schema Table 3.31. 
HTTP responses HTTP code Response body 200 - OK Authentication schema 201 - Created Authentication schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/config_apis/authentication-config-openshift-io-v1 |
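As a concrete illustration of the PATCH semantics listed above (dryRun, fieldValidation, and a merge patch body), the following Java sketch sends a JSON merge patch to the cluster Authentication resource. It is illustrative only: it assumes the API server is reachable through a local oc proxy on 127.0.0.1:8001, the serviceAccountIssuer value is a placeholder rather than a recommended setting, and because dryRun=All is set the server validates the request but persists nothing.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class AuthenticationDryRunPatch {

    public static void main(String[] args) throws Exception {
        // Assumption: a local "oc proxy" handles authentication and TLS for us.
        String url = "http://127.0.0.1:8001/apis/config.openshift.io/v1/authentications/cluster"
                + "?dryRun=All&fieldValidation=Strict";

        // Placeholder issuer, used only to show the shape of a merge patch against .spec.
        String mergePatch = "{\"spec\":{\"serviceAccountIssuer\":\"https://issuer.example.com\"}}";

        HttpRequest request = HttpRequest.newBuilder(URI.create(url))
                .header("Content-Type", "application/merge-patch+json")
                .method("PATCH", HttpRequest.BodyPublishers.ofString(mergePatch))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        // With dryRun=All a 200 response only confirms that the patch would have been accepted;
        // fieldValidation=Strict turns unknown or duplicate fields into a BadRequest error.
        System.out.println("HTTP " + response.statusCode());
        System.out.println(response.body());
    }
}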
Chapter 3. Bug fixes | Chapter 3. Bug fixes In this release of Red Hat Trusted Profile Analyzer (RHTPA), we fixed the following bugs. In addition to these fixes, we list the descriptions of previously known issues found in earlier versions that we fixed. Fixed an inconsistency when a CVE has many CVSS scores Before this update, vulnerabilities with many Common Vulnerability Scoring System (CVSS) scores were displayed inconsistently when applying a filter. This happened because the first CVSS score ordered the initial list of vulnerabilities, but the second score reordered the same list when a filter was applied, producing an inconsistent list of vulnerabilities. With this release, we fixed this ordering inconsistency by always applying the highest score when ordering the list of vulnerabilities, even when applying a filter. This keeps the vulnerabilities list consistent. Changed the strategy type for deploying the spog-api and the collectorist-api in OpenShift Before this update, the default strategy type for deploying the spog-api and the collectorist-api in OpenShift was a rolling strategy. Using the rolling strategy when deploying these two APIs mounted a volume with a ReadWriteOnce policy. This caused the pods to fail when redeploying the RHTPA application, because the rolling strategy does not scale down the old pods, and the volume is still in use by them. With this release, we changed the default strategy from rolling to recreate for the spog-api and the collectorist-api pods. Vulnerability count mismatch Before this update, there was a vulnerability count mismatch between the Common Vulnerabilities and Exposures (CVE) panel and the Software Bill of Materials (SBOM) dashboard. With this release, we fixed the vulnerability count mismatch between the CVE panel and the SBOM dashboard. Duplicate SBOMs displayed in the RHTPA console We fixed a bug in retrieving data from the Graph for Understanding Artifact Composition (GUAC) engine by implementing proper identification for packages that use a hash within software bill of materials (SBOM) documents. This fix eliminates duplicate entries that refer to the same SBOM. Errors with cyclical dependencies within SBOM documents Some software bill of materials (SBOM) documents contain cyclical dependencies for packages, which caused errors in the expected data. We fixed a bug in the Graph for Understanding Artifact Composition (GUAC) engine so that the graph is properly traversed from a package to the product it belongs to. With this update, the package details page reports the correct product association. SBOM data does not load properly when uploading a large SBOM Before this update, when uploading a large software bill of materials (SBOM) document, for example an SBOM that includes 50,000 packages, the RHTPA dashboard did not load properly. With this release, we fixed an issue with Keycloak's access token expiring before the SBOM could finish uploading its data. Uploading large SBOM documents now works as expected, and the data displays properly in the RHTPA dashboard. | null | https://docs.redhat.com/en/documentation/red_hat_trusted_profile_analyzer/1.3/html/release_notes/bug-fixes |
Chapter 98. Netty HTTP | Chapter 98. Netty HTTP Since Camel 2.14 Both producer and consumer are supported The Netty HTTP component is an extension to the Netty component that facilitates HTTP transport with Netty. NOTE Stream Netty is stream based, which means the input it receives is submitted to Camel as a stream. That means you will only be able to read the content of the stream once . If you find a situation where the message body appears to be empty or you need to access the data multiple times (eg: doing multicasting, or redelivery error handling) you should use Stream caching or convert the message body to a String which is safe to be re-read multiple times. Notice Netty HTTP reads the entire stream into memory using io.netty.handler.codec.http.HttpObjectAggregator to build the entire full HTTP message. But the resulting message is still a stream based message which is readable once. 98.1. Dependencies When using netty-http with Red Hat build of Camel Spring Boot make sure to use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-netty-http-starter</artifactId> </dependency> 98.2. URI format The URI scheme for a netty-http endpoint is as follows: netty-http:http://host:port/path[?options] NOTE Query parameters vs endpoint options You may be wondering how Camel recognizes URI query parameters and endpoint options. For example, you might create an endpoint URI as follows: netty-http:http://example.com?myParam=myValue&compression=true . In this example myParam is the HTTP parameter, while compression is the Camel endpoint option. The strategy used by Camel in such situations is to resolve available endpoint options and remove them from the URI. It means that for the discussed example, the HTTP request sent by the Netty HTTP producer to the endpoint will look as follows: http://example.com?myParam=myValue , because the compression endpoint option will be resolved and removed from the target URL. Also keep in mind that you cannot specify endpoint options using dynamic headers (like CamelHttpQuery ). Endpoint options can be specified only at the endpoint URI definition level (like to or from DSL elements). Important This component inherits all the options from Netty . Notice that some options from Netty are not applicable when using this Netty HTTP component, such as options related to UDP transport. 98.3. Configuring Options Camel components are configured on two separate levels: component level endpoint level 98.3.1. Configuring component options The component level is the highest level which holds general and common configurations that are inherited by the endpoints. For example a component may have security settings, credentials for authentication, URLs for network connection and so forth. Some components only have a few options, and others may have many. Because components typically have preconfigured defaults that are commonly used, then you may often only need to configure a few options on a component, or none at all. Configuring components can be done with the Component DSL , in a configuration file (application.properties|yaml), or directly with Java code. 98.3.2. Configuring endpoint options Where you find yourself configuring the most is on endpoints, as endpoints often have many options, which allows you to configure what you need the endpoint to do. The options are also categorized into whether the endpoint is used as consumer (from) or as a producer (to), or used for both. A short example of both follows. 
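The sketch below configures one netty-http consumer endpoint and one producer endpoint purely through their URIs. It is a minimal illustration: the hosts, ports, paths and the timer trigger are arbitrary placeholders, and only the options shown (muteException, connectTimeout, sync) are taken from the option tables in this chapter. Per the resolution rule described above, those options are stripped from the URL that is actually sent, while myParam=myValue is forwarded as an ordinary HTTP query parameter.

import org.apache.camel.builder.RouteBuilder;

public class NettyHttpRoutes extends RouteBuilder {

    @Override
    public void configure() {
        // Consumer (from): netty-http acts as an HTTP server on port 8080.
        // muteException=true is a Camel endpoint option, not an HTTP query parameter.
        from("netty-http:http://0.0.0.0:8080/orders?muteException=true")
            .log("received ${header.CamelHttpMethod} ${header.CamelHttpPath}")
            .setBody(constant("OK"));

        // Producer (to): calls a remote service. connectTimeout and sync are resolved as
        // endpoint options and removed from the outgoing URL; myParam stays in the request.
        from("timer:poll?period=30000")
            .to("netty-http:http://example.com/api?myParam=myValue&connectTimeout=5000&sync=true")
            .log("response code: ${header.CamelHttpResponseCode}");
    }
}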
Configuring endpoints is most often done directly in the endpoint URI as path and query parameters. You can also use the Endpoint DSL and DataFormat DSL as a type safe way of configuring endpoints and data formats in Java. A good practice when configuring options is to use Property Placeholders , which allows to not hardcode urls, port numbers, sensitive information, and other settings. In other words placeholders allows to externalize the configuration from your code, and gives more flexibility and reuse. The following two sections lists all the options, firstly for the component followed by the endpoint. 98.4. Component Options The Netty HTTP component supports 80 options, which are listed below. Name Description Default Type configuration (common) To use the NettyConfiguration as configuration when creating endpoints. NettyConfiguration disconnect (common) Whether or not to disconnect(close) from Netty Channel right after use. Can be used for both consumer and producer. false boolean keepAlive (common) Setting to ensure socket is not closed due to inactivity. true boolean reuseAddress (common) Setting to facilitate socket multiplexing. true boolean reuseChannel (common) This option allows producers and consumers (in client mode) to reuse the same Netty Channel for the lifecycle of processing the Exchange. This is useful if you need to call a server multiple times in a Camel route and want to use the same network connection. When using this, the channel is not returned to the connection pool until the Exchange is done; or disconnected if the disconnect option is set to true. The reused Channel is stored on the Exchange as an exchange property with the key NettyConstants#NETTY_CHANNEL which allows you to obtain the channel during routing and use it as well. false boolean sync (common) Setting to set endpoint as one-way or request-response. true boolean tcpNoDelay (common) Setting to improve TCP protocol performance. true boolean bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean broadcast (consumer) Setting to choose Multicast over UDP. false boolean clientMode (consumer) If the clientMode is true, netty consumer will connect the address as a TCP client. false boolean muteException (consumer) If enabled and an Exchange failed processing on the consumer side the response's body won't contain the exception's stack trace. false boolean reconnect (consumer) Used only in clientMode in consumer, the consumer will attempt to reconnect on disconnection if this is enabled. true boolean reconnectInterval (consumer) Used if reconnect and clientMode is enabled. The interval in milli seconds to attempt reconnection. 10000 int backlog (consumer (advanced)) Allows to configure a backlog for netty consumer (server). Note the backlog is just a best effort depending on the OS. Setting this option to a value such as 200, 500 or 1000, tells the TCP stack how long the accept queue can be If this option is not configured, then the backlog depends on OS setting. int bossCount (consumer (advanced)) When netty works on nio mode, it uses default bossCount parameter from Netty, which is 1. 
User can use this option to override the default bossCount from Netty. 1 int bossGroup (consumer (advanced)) Set the BossGroup which could be used for handling the new connection of the server side across the NettyEndpoint. EventLoopGroup disconnectOnNoReply (consumer (advanced)) If sync is enabled then this option dictates NettyConsumer if it should disconnect where there is no reply to send back. true boolean executorService (consumer (advanced)) To use the given EventExecutorGroup. EventExecutorGroup maximumPoolSize (consumer (advanced)) Sets a maximum thread pool size for the netty consumer ordered thread pool. The default size is 2 x cpu_core plus 1. Setting this value to eg 10 will then use 10 threads unless 2 x cpu_core plus 1 is a higher value, which then will override and be used. For example if there are 8 cores, then the consumer thread pool will be 17. This thread pool is used to route messages received from Netty by Camel. We use a separate thread pool to ensure ordering of messages and also in case some messages will block, then nettys worker threads (event loop) wont be affected. int nettyServerBootstrapFactory (consumer (advanced)) To use a custom NettyServerBootstrapFactory. NettyServerBootstrapFactory networkInterface (consumer (advanced)) When using UDP then this option can be used to specify a network interface by its name, such as eth0 to join a multicast group. String noReplyLogLevel (consumer (advanced)) If sync is enabled this option dictates NettyConsumer which logging level to use when logging a there is no reply to send back. Enum values: TRACE DEBUG INFO WARN ERROR OFF WARN LoggingLevel serverClosedChannelExceptionCaughtLogLevel (consumer (advanced)) If the server (NettyConsumer) catches an java.nio.channels.ClosedChannelException then its logged using this logging level. This is used to avoid logging the closed channel exceptions, as clients can disconnect abruptly and then cause a flood of closed exceptions in the Netty server. Enum values: TRACE DEBUG INFO WARN ERROR OFF DEBUG LoggingLevel serverExceptionCaughtLogLevel (consumer (advanced)) If the server (NettyConsumer) catches an exception then its logged using this logging level. Enum values: TRACE DEBUG INFO WARN ERROR OFF WARN LoggingLevel serverInitializerFactory (consumer (advanced)) To use a custom ServerInitializerFactory. ServerInitializerFactory usingExecutorService (consumer (advanced)) Whether to use ordered thread pool, to ensure events are processed orderly on the same channel. true boolean connectTimeout (producer) Time to wait for a socket connection to be available. Value is in milliseconds. 10000 int lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean requestTimeout (producer) Allows to use a timeout for the Netty producer when calling a remote server. By default no timeout is in use. The value is in milli seconds, so eg 30000 is 30 seconds. The requestTimeout is using Netty's ReadTimeoutHandler to trigger the timeout. 
long clientInitializerFactory (producer (advanced)) To use a custom ClientInitializerFactory. ClientInitializerFactory correlationManager (producer (advanced)) To use a custom correlation manager to manage how request and reply messages are mapped when using request/reply with the netty producer. This should only be used if you have a way to map requests together with replies such as if there is correlation ids in both the request and reply messages. This can be used if you want to multiplex concurrent messages on the same channel (aka connection) in netty. When doing this you must have a way to correlate the request and reply messages so you can store the right reply on the inflight Camel Exchange before its continued routed. We recommend extending the TimeoutCorrelationManagerSupport when you build custom correlation managers. This provides support for timeout and other complexities you otherwise would need to implement as well. See also the producerPoolEnabled option for more details. NettyCamelStateCorrelationManager lazyChannelCreation (producer (advanced)) Channels can be lazily created to avoid exceptions, if the remote server is not up and running when the Camel producer is started. true boolean producerPoolBlockWhenExhausted (producer (advanced)) Sets the value for the blockWhenExhausted configuration attribute. It determines whether to block when the borrowObject() method is invoked when the pool is exhausted (the maximum number of active objects has been reached). true boolean producerPoolEnabled (producer (advanced)) Whether producer pool is enabled or not. Important: If you turn this off then a single shared connection is used for the producer, also if you are doing request/reply. That means there is a potential issue with interleaved responses if replies comes back out-of-order. Therefore you need to have a correlation id in both the request and reply messages so you can properly correlate the replies to the Camel callback that is responsible for continue processing the message in Camel. To do this you need to implement NettyCamelStateCorrelationManager as correlation manager and configure it via the correlationManager option. See also the correlationManager option for more details. true boolean producerPoolMaxIdle (producer (advanced)) Sets the cap on the number of idle instances in the pool. 100 int producerPoolMaxTotal (producer (advanced)) Sets the cap on the number of objects that can be allocated by the pool (checked out to clients, or idle awaiting checkout) at a given time. Use a negative value for no limit. -1 int producerPoolMaxWait (producer (advanced)) Sets the maximum duration (value in millis) the borrowObject() method should block before throwing an exception when the pool is exhausted and producerPoolBlockWhenExhausted is true. When less than 0, the borrowObject() method may block indefinitely. -1 long producerPoolMinEvictableIdle (producer (advanced)) Sets the minimum amount of time (value in millis) an object may sit idle in the pool before it is eligible for eviction by the idle object evictor. 300000 long producerPoolMinIdle (producer (advanced)) Sets the minimum number of instances allowed in the producer pool before the evictor thread (if active) spawns new objects. int udpConnectionlessSending (producer (advanced)) This option supports connection less udp sending which is a real fire and forget. A connected udp send receive the PortUnreachableException if no one is listen on the receiving port. 
false boolean useByteBuf (producer (advanced)) If the useByteBuf is true, netty producer will turn the message body into ByteBuf before sending it out. false boolean hostnameVerification ( security) To enable/disable hostname verification on SSLEngine. false boolean allowSerializedHeaders (advanced) Only used for TCP when transferExchange is true. When set to true, serializable objects in headers and properties will be added to the exchange. Otherwise Camel will exclude any non-serializable objects and log it at WARN level. false boolean autowiredEnabled (advanced) Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true boolean channelGroup (advanced) To use a explicit ChannelGroup. ChannelGroup headerFilterStrategy (advanced) To use a custom org.apache.camel.spi.HeaderFilterStrategy to filter headers. HeaderFilterStrategy nativeTransport (advanced) Whether to use native transport instead of NIO. Native transport takes advantage of the host operating system and is only supported on some platforms. You need to add the netty JAR for the host operating system you are using. See more details at: . false boolean nettyHttpBinding (advanced) To use a custom org.apache.camel.component.netty.http.NettyHttpBinding for binding to/from Netty and Camel Message API. NettyHttpBinding options (advanced) Allows to configure additional netty options using option. as prefix. For example option.child.keepAlive=false to set the netty option child.keepAlive=false. See the Netty documentation for possible options that can be used. Map receiveBufferSize (advanced) The TCP/UDP buffer sizes to be used during inbound communication. Size is bytes. 65536 int receiveBufferSizePredictor (advanced) Configures the buffer size predictor. See details at Jetty documentation and this mail thread. int sendBufferSize (advanced) The TCP/UDP buffer sizes to be used during outbound communication. Size is bytes. 65536 int transferExchange (advanced) Only used for TCP. You can transfer the exchange over the wire instead of just the body. The following fields are transferred: In body, Out body, fault body, In headers, Out headers, fault headers, exchange properties, exchange exception. This requires that the objects are serializable. Camel will exclude any non-serializable objects and log it at WARN level. false boolean udpByteArrayCodec (advanced) For UDP only. If enabled the using byte array codec instead of Java serialization protocol. false boolean unixDomainSocketPath (advanced) Path to unix domain socket to use instead of inet socket. Host and port parameters will not be used, however required. It is ok to set dummy values for them. Must be used with nativeTransport=true and clientMode=false. String workerCount (advanced) When netty works on nio mode, it uses default workerCount parameter from Netty (which is cpu_core_threads x 2). User can use this option to override the default workerCount from Netty. int workerGroup (advanced) To use a explicit EventLoopGroup as the boss thread pool. For example to share a thread pool with multiple consumers or producers. By default each consumer or producer has their own worker pool with 2 x cpu count core threads. 
EventLoopGroup allowDefaultCodec (codec) The netty component installs a default codec if both, encoder/decoder is null and textline is false. Setting allowDefaultCodec to false prevents the netty component from installing a default codec as the first element in the filter chain. true boolean autoAppendDelimiter (codec) Whether or not to auto append missing end delimiter when sending using the textline codec. true boolean decoderMaxLineLength (codec) The max line length to use for the textline codec. 1024 int decoders (codec) A list of decoders to be used. You can use a String which have values separated by comma, and have the values be looked up in the Registry. Just remember to prefix the value with # so Camel knows it should lookup. String delimiter (codec) The delimiter to use for the textline codec. Possible values are LINE and NULL. Enum values: LINE NULL LINE TextLineDelimiter encoders (codec) A list of encoders to be used. You can use a String which have values separated by comma, and have the values be looked up in the Registry. Just remember to prefix the value with # so Camel knows it should lookup. String encoding (codec) The encoding (a charset name) to use for the textline codec. If not provided, Camel will use the JVM default Charset. String textline (codec) Only used for TCP. If no codec is specified, you can use this flag to indicate a text line based codec; if not specified or the value is false, then Object Serialization is assumed over TCP - however only Strings are allowed to be serialized by default. false boolean enabledProtocols (security) Which protocols to enable when using SSL. TLSv1.2,TLSv1.3 String keyStoreFile (security) Client side certificate keystore to be used for encryption. File keyStoreFormat (security) Keystore format to be used for payload encryption. Defaults to JKS if not set. String keyStoreResource (security) Client side certificate keystore to be used for encryption. Is loaded by default from classpath, but you can prefix with classpath:, file:, or http: to load the resource from different systems. String needClientAuth (security) Configures whether the server needs client authentication when using SSL. false boolean passphrase (security) Password setting to use in order to encrypt/decrypt payloads sent using SSH. String securityConfiguration (security) Refers to a org.apache.camel.component.netty.http.NettyHttpSecurityConfiguration for configuring secure web resources. NettyHttpSecurityConfiguration securityProvider (security) Security provider to be used for payload encryption. Defaults to SunX509 if not set. String ssl (security) Setting to specify whether SSL encryption is applied to this endpoint. false boolean sslClientCertHeaders (security) When enabled and in SSL mode, then the Netty consumer will enrich the Camel Message with headers having information about the client certificate such as subject name, issuer name, serial number, and the valid date range. false boolean sslContextParameters (security) To configure security using SSLContextParameters. SSLContextParameters sslHandler (security) Reference to a class that could be used to return an SSL Handler. SslHandler trustStoreFile (security) Server side certificate keystore to be used for encryption. File trustStoreResource (security) Server side certificate keystore to be used for encryption. Is loaded by default from classpath, but you can prefix with classpath:, file:, or http: to load the resource from different systems. 
String useGlobalSslContextParameters (security) Enable usage of global SSL context parameters. false boolean 98.5. Endpoint Options The Netty HTTP endpoint is configured using URI syntax: with the following path and query parameters: 98.5.1. Path Parameters (4 parameters) Name Description Default Type protocol (common) Required The protocol to use which is either http, https or proxy - a consumer only option. Enum values: http https String host (common) Required The local hostname such as localhost, or 0.0.0.0 when being a consumer. The remote HTTP server hostname when using producer. String port (common) The host port number. int path (common) Resource path. String 98.5.2. Query Parameters (85 parameters) Name Description Default Type bridgeEndpoint (common) If the option is true, the producer will ignore the NettyHttpConstants.HTTP_URI header, and use the endpoint's URI for request. You may also set the throwExceptionOnFailure to be false to let the producer send all the fault response back. The consumer working in the bridge mode will skip the gzip compression and WWW URL form encoding (by adding the Exchange.SKIP_GZIP_ENCODING and Exchange.SKIP_WWW_FORM_URLENCODED headers to the consumed exchange). false boolean disconnect (common) Whether or not to disconnect(close) from Netty Channel right after use. Can be used for both consumer and producer. false boolean keepAlive (common) Setting to ensure socket is not closed due to inactivity. true boolean reuseAddress (common) Setting to facilitate socket multiplexing. true boolean reuseChannel (common) This option allows producers and consumers (in client mode) to reuse the same Netty Channel for the lifecycle of processing the Exchange. This is useful if you need to call a server multiple times in a Camel route and want to use the same network connection. When using this, the channel is not returned to the connection pool until the Exchange is done; or disconnected if the disconnect option is set to true. The reused Channel is stored on the Exchange as an exchange property with the key NettyConstants#NETTY_CHANNEL which allows you to obtain the channel during routing and use it as well. false boolean sync (common) Setting to set endpoint as one-way or request-response. true boolean tcpNoDelay (common) Setting to improve TCP protocol performance. true boolean matchOnUriPrefix (consumer) Whether or not Camel should try to find a target consumer by matching the URI prefix if no exact match is found. false boolean muteException (consumer) If enabled and an Exchange failed processing on the consumer side the response's body won't contain the exception's stack trace. false boolean send503whenSuspended (consumer) Whether to send back HTTP status code 503 when the consumer has been suspended. If the option is false then the Netty Acceptor is unbound when the consumer is suspended, so clients cannot connect anymore. true boolean backlog (consumer (advanced)) Allows to configure a backlog for netty consumer (server). Note the backlog is just a best effort depending on the OS. Setting this option to a value such as 200, 500 or 1000, tells the TCP stack how long the accept queue can be If this option is not configured, then the backlog depends on OS setting. int bossCount (consumer (advanced)) When netty works on nio mode, it uses default bossCount parameter from Netty, which is 1. User can use this option to override the default bossCount from Netty. 
1 int bossGroup (consumer (advanced)) Set the BossGroup which could be used for handling the new connection of the server side across the NettyEndpoint. EventLoopGroup bridgeErrorHandler (consumer (advanced)) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean chunkedMaxContentLength (consumer (advanced)) Value in bytes the max content length per chunked frame received on the Netty HTTP server. 1048576 int compression (consumer (advanced)) Allow using gzip/deflate for compression on the Netty HTTP server if the client supports it from the HTTP headers. false boolean disconnectOnNoReply (consumer (advanced)) If sync is enabled then this option dictates NettyConsumer if it should disconnect where there is no reply to send back. true boolean exceptionHandler (consumer (advanced)) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer (advanced)) Sets the exchange pattern when the consumer creates an exchange. Enum values: InOnly InOut ExchangePattern httpMethodRestrict (consumer (advanced)) To disable HTTP methods on the Netty HTTP consumer. You can specify multiple separated by comma. String logWarnOnBadRequest (consumer (advanced)) Whether Netty HTTP server should log a WARN if decoding the HTTP request failed and a HTTP Status 400 (bad request) is returned. true boolean mapHeaders (consumer (advanced)) If this option is enabled, then during binding from Netty to Camel Message then the headers will be mapped as well (eg added as header to the Camel Message as well). You can turn off this option to disable this. The headers can still be accessed from the org.apache.camel.component.netty.http.NettyHttpMessage message with the method getHttpRequest() that returns the Netty HTTP request io.netty.handler.codec.http.HttpRequest instance. true boolean maxChunkSize (consumer (advanced)) The maximum length of the content or each chunk. If the content length (or the length of each chunk) exceeds this value, the content or chunk will be split into multiple io.netty.handler.codec.http.HttpContents whose length is maxChunkSize at maximum. See io.netty.handler.codec.http.HttpObjectDecoder. 8192 int maxHeaderSize (consumer (advanced)) The maximum length of all headers. If the sum of the length of each header exceeds this value, a io.netty.handler.codec.TooLongFrameException will be raised. 8192 int maxInitialLineLength (consumer (advanced)) The maximum length of the initial line (e.g. \\{code GET / HTTP/1.0} or \\{code HTTP/1.0 200 OK}) If the length of the initial line exceeds this value, a TooLongFrameException will be raised. See io.netty.handler.codec.http.HttpObjectDecoder. 4096 int nettyServerBootstrapFactory (consumer (advanced)) To use a custom NettyServerBootstrapFactory. NettyServerBootstrapFactory nettySharedHttpServer (consumer (advanced)) To use a shared Netty HTTP server. See Netty HTTP Server Example for more details. 
NettySharedHttpServer noReplyLogLevel (consumer (advanced)) If sync is enabled this option dictates NettyConsumer which logging level to use when logging a there is no reply to send back. Enum values: TRACE DEBUG INFO WARN ERROR OFF WARN LoggingLevel serverClosedChannelExceptionCaughtLogLevel (consumer (advanced)) If the server (NettyConsumer) catches an java.nio.channels.ClosedChannelException then its logged using this logging level. This is used to avoid logging the closed channel exceptions, as clients can disconnect abruptly and then cause a flood of closed exceptions in the Netty server. Enum values: TRACE DEBUG INFO WARN ERROR OFF DEBUG LoggingLevel serverExceptionCaughtLogLevel (consumer (advanced)) If the server (NettyConsumer) catches an exception then its logged using this logging level. Enum values: TRACE DEBUG INFO WARN ERROR OFF WARN LoggingLevel serverInitializerFactory (consumer (advanced)) To use a custom ServerInitializerFactory. ServerInitializerFactory traceEnabled (consumer (advanced)) Specifies whether to enable HTTP TRACE for this Netty HTTP consumer. By default TRACE is turned off. false boolean urlDecodeHeaders (consumer (advanced)) If this option is enabled, then during binding from Netty to Camel Message then the header values will be URL decoded (eg %20 will be a space character. Notice this option is used by the default org.apache.camel.component.netty.http.NettyHttpBinding and therefore if you implement a custom org.apache.camel.component.netty.http.NettyHttpBinding then you would need to decode the headers accordingly to this option. false boolean usingExecutorService (consumer (advanced)) Whether to use ordered thread pool, to ensure events are processed orderly on the same channel. true boolean connectTimeout (producer) Time to wait for a socket connection to be available. Value is in milliseconds. 10000 int cookieHandler (producer) Configure a cookie handler to maintain a HTTP session. CookieHandler requestTimeout (producer) Allows to use a timeout for the Netty producer when calling a remote server. By default no timeout is in use. The value is in milli seconds, so eg 30000 is 30 seconds. The requestTimeout is using Netty's ReadTimeoutHandler to trigger the timeout. long throwExceptionOnFailure (producer) Option to disable throwing the HttpOperationFailedException in case of failed responses from the remote server. This allows you to get all responses regardless of the HTTP status code. true boolean clientInitializerFactory (producer (advanced)) To use a custom ClientInitializerFactory. ClientInitializerFactory lazyChannelCreation (producer (advanced)) Channels can be lazily created to avoid exceptions, if the remote server is not up and running when the Camel producer is started. true boolean lazyStartProducer (producer (advanced)) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean okStatusCodeRange (producer (advanced)) The status codes which are considered a success response. The values are inclusive. 
Multiple ranges can be defined, separated by comma, e.g. 200-204,209,301-304. Each range must be a single number or from-to with the dash included. The default range is 200-299. 200-299 String producerPoolBlockWhenExhausted (producer (advanced)) Sets the value for the blockWhenExhausted configuration attribute. It determines whether to block when the borrowObject() method is invoked when the pool is exhausted (the maximum number of active objects has been reached). true boolean producerPoolEnabled (producer (advanced)) Whether producer pool is enabled or not. Important: If you turn this off then a single shared connection is used for the producer, also if you are doing request/reply. That means there is a potential issue with interleaved responses if replies comes back out-of-order. Therefore you need to have a correlation id in both the request and reply messages so you can properly correlate the replies to the Camel callback that is responsible for continue processing the message in Camel. To do this you need to implement NettyCamelStateCorrelationManager as correlation manager and configure it via the correlationManager option. See also the correlationManager option for more details. true boolean producerPoolMaxIdle (producer (advanced)) Sets the cap on the number of idle instances in the pool. 100 int producerPoolMaxTotal (producer (advanced)) Sets the cap on the number of objects that can be allocated by the pool (checked out to clients, or idle awaiting checkout) at a given time. Use a negative value for no limit. -1 int producerPoolMaxWait (producer (advanced)) Sets the maximum duration (value in millis) the borrowObject() method should block before throwing an exception when the pool is exhausted and producerPoolBlockWhenExhausted is true. When less than 0, the borrowObject() method may block indefinitely. -1 long producerPoolMinEvictableIdle (producer (advanced)) Sets the minimum amount of time (value in millis) an object may sit idle in the pool before it is eligible for eviction by the idle object evictor. 300000 long producerPoolMinIdle (producer (advanced)) Sets the minimum number of instances allowed in the producer pool before the evictor thread (if active) spawns new objects. int useRelativePath (producer (advanced)) Sets whether to use a relative path in HTTP requests. true boolean hostnameVerification ( security) To enable/disable hostname verification on SSLEngine. false boolean allowSerializedHeaders (advanced) Only used for TCP when transferExchange is true. When set to true, serializable objects in headers and properties will be added to the exchange. Otherwise Camel will exclude any non-serializable objects and log it at WARN level. false boolean channelGroup (advanced) To use a explicit ChannelGroup. ChannelGroup configuration (advanced) To use a custom configured NettyHttpConfiguration for configuring this endpoint. NettyHttpConfiguration disableStreamCache (advanced) Determines whether or not the raw input stream from Netty HttpRequest#getContent() or HttpResponset#getContent() is cached or not (Camel will read the stream into a in light-weight memory based Stream caching) cache. By default Camel will cache the Netty input stream to support reading it multiple times to ensure it Camel can retrieve all data from the stream. However you can set this option to true when you for example need to access the raw stream, such as streaming it directly to a file or other persistent store. 
Mind that if you enable this option, then you cannot read the Netty stream multiple times out of the box, and you would need manually to reset the reader index on the Netty raw stream. Also Netty will auto-close the Netty stream when the Netty HTTP server/HTTP client is done processing, which means that if the asynchronous routing engine is in use then any asynchronous thread that may continue routing the org.apache.camel.Exchange may not be able to read the Netty stream, because Netty has closed it. false boolean headerFilterStrategy (advanced) To use a custom org.apache.camel.spi.HeaderFilterStrategy to filter headers. HeaderFilterStrategy nativeTransport (advanced) Whether to use native transport instead of NIO. Native transport takes advantage of the host operating system and is only supported on some platforms. You need to add the netty JAR for the host operating system you are using. See more details at: . false boolean nettyHttpBinding (advanced) To use a custom org.apache.camel.component.netty.http.NettyHttpBinding for binding to/from Netty and Camel Message API. NettyHttpBinding options (advanced) Allows to configure additional netty options using option. as prefix. For example option.child.keepAlive=false to set the netty option child.keepAlive=false. See the Netty documentation for possible options that can be used. Map receiveBufferSize (advanced) The TCP/UDP buffer sizes to be used during inbound communication. Size is bytes. 65536 int receiveBufferSizePredictor (advanced) Configures the buffer size predictor. See details at Jetty documentation and this mail thread. int sendBufferSize (advanced) The TCP/UDP buffer sizes to be used during outbound communication. Size is bytes. 65536 int synchronous (advanced) Sets whether synchronous processing should be strictly used. false boolean transferException (advanced) If enabled and an Exchange failed processing on the consumer side, and if the caused Exception was send back serialized in the response as a application/x-java-serialized-object content type. On the producer side the exception will be deserialized and thrown as is, instead of the HttpOperationFailedException. The caused exception is required to be serialized. This is by default turned off. If you enable this then be aware that Java will deserialize the incoming data from the request to Java and that can be a potential security risk. false boolean transferExchange (advanced) Only used for TCP. You can transfer the exchange over the wire instead of just the body. The following fields are transferred: In body, Out body, fault body, In headers, Out headers, fault headers, exchange properties, exchange exception. This requires that the objects are serializable. Camel will exclude any non-serializable objects and log it at WARN level. false boolean unixDomainSocketPath (advanced) Path to unix domain socket to use instead of inet socket. Host and port parameters will not be used, however required. It is ok to set dummy values for them. Must be used with nativeTransport=true and clientMode=false. String workerCount (advanced) When netty works on nio mode, it uses default workerCount parameter from Netty (which is cpu_core_threads x 2). User can use this option to override the default workerCount from Netty. int workerGroup (advanced) To use a explicit EventLoopGroup as the boss thread pool. For example to share a thread pool with multiple consumers or producers. By default each consumer or producer has their own worker pool with 2 x cpu count core threads. 
EventLoopGroup decoders (codec) A list of decoders to be used. You can use a String which have values separated by comma, and have the values be looked up in the Registry. Just remember to prefix the value with # so Camel knows it should lookup. String encoders (codec) A list of encoders to be used. You can use a String which have values separated by comma, and have the values be looked up in the Registry. Just remember to prefix the value with # so Camel knows it should lookup. String enabledProtocols (security) Which protocols to enable when using SSL. TLSv1.2,TLSv1.3 String keyStoreFile (security) Client side certificate keystore to be used for encryption. File keyStoreFormat (security) Keystore format to be used for payload encryption. Defaults to JKS if not set. String keyStoreResource (security) Client side certificate keystore to be used for encryption. Is loaded by default from classpath, but you can prefix with classpath:, file:, or http: to load the resource from different systems. String needClientAuth (security) Configures whether the server needs client authentication when using SSL. false boolean passphrase (security) Password setting to use in order to encrypt/decrypt payloads sent using SSH. String securityConfiguration (security) Refers to a org.apache.camel.component.netty.http.NettyHttpSecurityConfiguration for configuring secure web resources. NettyHttpSecurityConfiguration securityOptions (security) To configure NettyHttpSecurityConfiguration using key/value pairs from the map. Map securityProvider (security) Security provider to be used for payload encryption. Defaults to SunX509 if not set. String ssl (security) Setting to specify whether SSL encryption is applied to this endpoint. false boolean sslClientCertHeaders (security) When enabled and in SSL mode, then the Netty consumer will enrich the Camel Message with headers having information about the client certificate such as subject name, issuer name, serial number, and the valid date range. false boolean sslContextParameters (security) To configure security using SSLContextParameters. SSLContextParameters sslHandler (security) Reference to a class that could be used to return an SSL Handler. SslHandler trustStoreFile (security) Server side certificate keystore to be used for encryption. File trustStoreResource (security) Server side certificate keystore to be used for encryption. Is loaded by default from classpath, but you can prefix with classpath:, file:, or http: to load the resource from different systems. String 98.6. Message Headers The Netty HTTP component supports 23 message header(s), which is/are listed below: Name Description Default Type CamelHttpAuthentication (common) Constant: HTTP_AUTHENTICATION If the user was authenticated using HTTP Basic then this header is added with the value Basic. String Content-Type (common) Constant: CONTENT_TYPE To set the content-type of the HTTP body. For example: text/plain; charset=UTF-8. String connection (common) Constant: CONNECTION The value of the HTTP header connection to use. String CamelNettyCloseChannelWhenComplete (common) Constant: NETTY_CLOSE_CHANNEL_WHEN_COMPLETE Indicates whether the channel should be closed after complete. Boolean CamelHttpResponseCode (common) Constant: HTTP_RESPONSE_CODE Allows to set the HTTP Status code to use. By default 200 is used for success, and 500 for failure. Integer CamelHttpProtocolVersion (common) Constant: HTTP_PROTOCOL_VERSION The version of the HTTP protocol. 
HTTP/1.1 String CamelHttpMethod (common) Constant: HTTP_METHOD The HTTP method used, such as GET, POST, TRACE etc. GET String CamelHttpQuery (common) Constant: HTTP_QUERY Any query parameters, such as foo=bar&beer=yes. String CamelHttpPath (common) Constant: HTTP_PATH Allows you to provide the URI context-path and query parameters as a String value that overrides the endpoint configuration. This allows you to reuse the same producer for calling the same remote HTTP server, but with a dynamic context-path and query parameters. String CamelHttpRawQuery (common) Constant: HTTP_RAW_QUERY Any query parameters, such as foo=bar&beer=yes. Stored in the raw form, as they arrived at the consumer (i.e. before URL decoding). String CamelHttpUrl (common) Constant: HTTP_URL The URL including protocol, host, port, and so on, for example: http://0.0.0.0:8080/myapp. String CamelHttpCharacterEncoding (common) Constant: HTTP_CHARACTER_ENCODING The charset from the content-type header. String CamelHttpUri (common) Constant: HTTP_URI The URI without protocol, host, port, and so on, for example: /myapp. String CamelNettyChannelHandlerContext (common) Constant: NETTY_CHANNEL_HANDLER_CONTEXT The channel handler context. ChannelHandlerContext CamelNettyRemoteAddress (common) Constant: NETTY_REMOTE_ADDRESS The remote address. SocketAddress CamelNettyLocalAddress (common) Constant: NETTY_LOCAL_ADDRESS The local address. SocketAddress CamelNettySSLSession (common) Constant: NETTY_SSL_SESSION The SSL session. SSLSession CamelNettySSLClientCertSubjectName (common) Constant: NETTY_SSL_CLIENT_CERT_SUBJECT_NAME The SSL client certificate subject name. String CamelNettySSLClientCertIssuerName (common) Constant: NETTY_SSL_CLIENT_CERT_ISSUER_NAME The SSL client certificate issuer name. String CamelNettySSLClientCertSerialNumber (common) Constant: NETTY_SSL_CLIENT_CERT_SERIAL_NO The SSL client certificate serial number. String CamelNettySSLClientCertNotBefore (common) Constant: NETTY_SSL_CLIENT_CERT_NOT_BEFORE The SSL client certificate not-before date. Date CamelNettySSLClientCertNotAfter (common) Constant: NETTY_SSL_CLIENT_CERT_NOT_AFTER The SSL client certificate not-after date. Date CamelNettyRequestTimeout (common) Constant: NETTY_REQUEST_TIMEOUT The read timeout. Long 98.7. Access to Netty types This component uses the org.apache.camel.component.netty.http.NettyHttpMessage as the message implementation on the Exchange. This allows end users to get access to the original Netty request/response instances if needed, as shown below. Note that the original response may not be accessible at all times. io.netty.handler.codec.http.HttpRequest request = exchange.getIn(NettyHttpMessage.class).getHttpRequest(); 98.8. Examples In the route below we use Netty HTTP as an HTTP server, which returns a hardcoded "Bye World" message. from("netty-http:http://0.0.0.0:8080/foo") .transform().constant("Bye World"); We can also call this HTTP server from Camel, using the ProducerTemplate as shown below: String out = template.requestBody("netty-http:http://0.0.0.0:8080/foo", "Hello World", String.class); System.out.println(out); And we get back "Bye World" as the output. 98.8.1. How do I let Netty match wildcards By default Netty HTTP only matches exact URIs, but you can instruct Netty to match prefixes. For example: from("netty-http:http://0.0.0.0:8123/foo").to("mock:foo"); In the route above, Netty HTTP only matches if the URI is an exact match, so it matches if you enter http://0.0.0.0:8123/foo but not if you enter http://0.0.0.0:8123/foo/bar.
So if you want to enable wildcard matching, you do it as follows: from("netty-http:http://0.0.0.0:8123/foo?matchOnUriPrefix=true").to("mock:foo"); Netty now matches any endpoint that starts with foo. To match any endpoint you can do: from("netty-http:http://0.0.0.0:8123?matchOnUriPrefix=true").to("mock:foo"); 98.8.2. Using multiple routes with same port In the same CamelContext you can have multiple Netty HTTP routes that share the same port (that is, the same io.netty.bootstrap.ServerBootstrap instance). Doing this requires a number of bootstrap options to be identical in the routes, as the routes will share the same io.netty.bootstrap.ServerBootstrap instance. The instance will be configured with the options from the first route created. The options that must be configured identically across the routes are all the options defined in the org.apache.camel.component.netty.NettyServerBootstrapConfiguration configuration class. If you have configured another route with different options, Camel will throw an exception on startup, indicating that the options are not identical. To mitigate this, ensure all options are identical. Here is an example with two routes that share the same port. Two routes sharing the same port from("netty-http:http://0.0.0.0:{{port}}/foo") .to("mock:foo") .transform().constant("Bye World"); from("netty-http:http://0.0.0.0:{{port}}/bar") .to("mock:bar") .transform().constant("Bye Camel"); And here is an example of a misconfigured 2nd route that does not have the same org.apache.camel.component.netty.NettyServerBootstrapConfiguration options as the 1st route. This will cause Camel to fail on startup. Two routes sharing the same port, but the 2nd route is misconfigured and will fail on startup from("netty-http:http://0.0.0.0:{{port}}/foo") .to("mock:foo") .transform().constant("Bye World"); // we cannot have a 2nd route on same port with SSL enabled, when the 1st route is NOT from("netty-http:http://0.0.0.0:{{port}}/bar?ssl=true") .to("mock:bar") .transform().constant("Bye Camel"); 98.8.3. Reusing same server bootstrap configuration with multiple routes By configuring the common server bootstrap options in a single instance of an org.apache.camel.component.netty.NettyServerBootstrapConfiguration type, we can use the bootstrapConfiguration option on the Netty HTTP consumers to refer to and reuse the same options across all consumers. <bean id="nettyHttpBootstrapOptions" class="org.apache.camel.component.netty.NettyServerBootstrapConfiguration"> <property name="backlog" value="200"/> <property name="connectionTimeout" value="20000"/> <property name="workerCount" value="16"/> </bean> And in the routes you refer to this configuration as shown below: <route> <from uri="netty-http:http://0.0.0.0:{{port}}/foo?bootstrapConfiguration=#nettyHttpBootstrapOptions"/> ... </route> <route> <from uri="netty-http:http://0.0.0.0:{{port}}/bar?bootstrapConfiguration=#nettyHttpBootstrapOptions"/> ... </route> <route> <from uri="netty-http:http://0.0.0.0:{{port}}/beer?bootstrapConfiguration=#nettyHttpBootstrapOptions"/> ... </route> 98.8.4. Reusing same server bootstrap configuration with multiple routes across multiple bundles in OSGi container See the Netty HTTP Server Example above for more details and an example of how to do that. 98.8.5. Implementing a reverse proxy The Netty HTTP component can act as a reverse proxy; in that case, the Exchange.HTTP_SCHEME, Exchange.HTTP_HOST and Exchange.HTTP_PORT headers are populated from the absolute URL received on the request line of the HTTP request.
Here is an example of an HTTP proxy that simply transforms the response from the origin server to uppercase. from("netty-http:proxy://0.0.0.0:8080") .toD("netty-http:" + "${headers." + Exchange.HTTP_SCHEME + "}://" + "${headers." + Exchange.HTTP_HOST + "}:" + "${headers." + Exchange.HTTP_PORT + "}") .process(this::processResponse); void processResponse(final Exchange exchange) { final NettyHttpMessage message = exchange.getIn(NettyHttpMessage.class); final FullHttpResponse response = message.getHttpResponse(); final ByteBuf buf = response.content(); final String string = buf.toString(StandardCharsets.UTF_8); buf.resetWriterIndex(); ByteBufUtil.writeUtf8(buf, string.toUpperCase(Locale.US)); } 98.9. Using HTTP Basic Authentication The Netty HTTP consumer supports HTTP basic authentication by specifying the security realm name to use, as shown below: <route> <from uri="netty-http:http://0.0.0.0:{{port}}/foo?securityConfiguration.realm=karaf"/> ... </route> The realm name is mandatory to enable basic authentication. By default the JAAS based authenticator is used, which will use the realm name specified (karaf in the example above) and use the JAAS realm and the JAAS LoginModules of this realm for authentication. End users of Apache Karaf / ServiceMix have a karaf realm out of the box, which is why the example above works out of the box in these containers. 98.9.1. Specifying ACL on web resources The org.apache.camel.component.netty.http.SecurityConstraint allows you to define constraints on web resources. The org.apache.camel.component.netty.http.SecurityConstraintMapping is provided out of the box, allowing you to easily define inclusions and exclusions with roles. For example, as shown below in the XML DSL, we define the constraint bean: <bean id="constraint" class="org.apache.camel.component.netty.http.SecurityConstraintMapping"> <!-- inclusions defines url -> roles restrictions --> <!-- a * should be used for any role accepted (or even no roles) --> <property name="inclusions"> <map> <entry key="/*" value="*"/> <entry key="/admin/*" value="admin"/> <entry key="/guest/*" value="admin,guest"/> </map> </property> <!-- exclusions is used to define public urls, which requires no authentication --> <property name="exclusions"> <set> <value>/public/*</value> </set> </property> </bean> The constraint above is defined so that: access to /* is restricted and any role is accepted (even if the user has no roles); access to /admin/* requires the admin role; access to /guest/* requires the admin or guest role; access to /public/* is an exclusion, which means no authentication is needed, and it is therefore public for everyone without logging in. To use this constraint we just need to refer to the bean id as shown below: <route> <from uri="netty-http:http://0.0.0.0:{{port}}/foo?matchOnUriPrefix=true&amp;securityConfiguration.realm=karaf&amp;securityConfiguration.securityConstraint=#constraint"/> ... </route> 98.10. Spring Boot Auto-Configuration The component supports 67 options, which are listed below. A short application.properties sketch follows the option listing and code excerpts below. Name Description Default Type camel.component.netty-http.allow-serialized-headers Only used for TCP when transferExchange is true. When set to true, serializable objects in headers and properties will be added to the exchange. Otherwise Camel will exclude any non-serializable objects and log it at WARN level. false Boolean camel.component.netty-http.autowired-enabled Whether autowiring is enabled.
This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.netty-http.backlog Allows to configure a backlog for netty consumer (server). Note the backlog is just a best effort depending on the OS. Setting this option to a value such as 200, 500 or 1000, tells the TCP stack how long the accept queue can be If this option is not configured, then the backlog depends on OS setting. Integer camel.component.netty-http.boss-count When netty works on nio mode, it uses default bossCount parameter from Netty, which is 1. User can use this option to override the default bossCount from Netty. 1 Integer camel.component.netty-http.boss-group Set the BossGroup which could be used for handling the new connection of the server side across the NettyEndpoint. The option is a io.netty.channel.EventLoopGroup type. EventLoopGroup camel.component.netty-http.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.netty-http.channel-group To use a explicit ChannelGroup. The option is a io.netty.channel.group.ChannelGroup type. ChannelGroup camel.component.netty-http.client-initializer-factory To use a custom ClientInitializerFactory. The option is a org.apache.camel.component.netty.ClientInitializerFactory type. ClientInitializerFactory camel.component.netty-http.configuration To use the NettyConfiguration as configuration when creating endpoints. The option is a org.apache.camel.component.netty.NettyConfiguration type. NettyConfiguration camel.component.netty-http.connect-timeout Time to wait for a socket connection to be available. Value is in milliseconds. 10000 Integer camel.component.netty-http.correlation-manager To use a custom correlation manager to manage how request and reply messages are mapped when using request/reply with the netty producer. This should only be used if you have a way to map requests together with replies such as if there is correlation ids in both the request and reply messages. This can be used if you want to multiplex concurrent messages on the same channel (aka connection) in netty. When doing this you must have a way to correlate the request and reply messages so you can store the right reply on the inflight Camel Exchange before its continued routed. We recommend extending the TimeoutCorrelationManagerSupport when you build custom correlation managers. This provides support for timeout and other complexities you otherwise would need to implement as well. See also the producerPoolEnabled option for more details. The option is a org.apache.camel.component.netty.NettyCamelStateCorrelationManager type. NettyCamelStateCorrelationManager camel.component.netty-http.decoders A list of decoders to be used. You can use a String which have values separated by comma, and have the values be looked up in the Registry. Just remember to prefix the value with # so Camel knows it should lookup. 
String camel.component.netty-http.disconnect Whether or not to disconnect(close) from Netty Channel right after use. Can be used for both consumer and producer. false Boolean camel.component.netty-http.disconnect-on-no-reply If sync is enabled then this option dictates NettyConsumer if it should disconnect where there is no reply to send back. true Boolean camel.component.netty-http.enabled Whether to enable auto configuration of the netty-http component. This is enabled by default. Boolean camel.component.netty-http.enabled-protocols Which protocols to enable when using SSL. TLSv1.2,TLSv1.3 String camel.component.netty-http.encoders A list of encoders to be used. You can use a String which have values separated by comma, and have the values be looked up in the Registry. Just remember to prefix the value with # so Camel knows it should lookup. String camel.component.netty-http.executor-service To use the given EventExecutorGroup. The option is a io.netty.util.concurrent.EventExecutorGroup type. EventExecutorGroup camel.component.netty-http.header-filter-strategy To use a custom org.apache.camel.spi.HeaderFilterStrategy to filter headers. The option is a org.apache.camel.spi.HeaderFilterStrategy type. HeaderFilterStrategy camel.component.netty-http.hostname-verification To enable/disable hostname verification on SSLEngine. false Boolean camel.component.netty-http.keep-alive Setting to ensure socket is not closed due to inactivity. true Boolean camel.component.netty-http.key-store-file Client side certificate keystore to be used for encryption. File camel.component.netty-http.key-store-format Keystore format to be used for payload encryption. Defaults to JKS if not set. String camel.component.netty-http.key-store-resource Client side certificate keystore to be used for encryption. Is loaded by default from classpath, but you can prefix with classpath:, file:, or http: to load the resource from different systems. String camel.component.netty-http.lazy-channel-creation Channels can be lazily created to avoid exceptions, if the remote server is not up and running when the Camel producer is started. true Boolean camel.component.netty-http.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.netty-http.maximum-pool-size Sets a maximum thread pool size for the netty consumer ordered thread pool. The default size is 2 x cpu_core plus 1. Setting this value to eg 10 will then use 10 threads unless 2 x cpu_core plus 1 is a higher value, which then will override and be used. For example if there are 8 cores, then the consumer thread pool will be 17. This thread pool is used to route messages received from Netty by Camel. We use a separate thread pool to ensure ordering of messages and also in case some messages will block, then nettys worker threads (event loop) wont be affected. Integer camel.component.netty-http.mute-exception If enabled and an Exchange failed processing on the consumer side the response's body won't contain the exception's stack trace. 
false Boolean camel.component.netty-http.native-transport Whether to use native transport instead of NIO. Native transport takes advantage of the host operating system and is only supported on some platforms. You need to add the netty JAR for the host operating system you are using. See more details at: . false Boolean camel.component.netty-http.need-client-auth Configures whether the server needs client authentication when using SSL. false Boolean camel.component.netty-http.netty-http-binding To use a custom org.apache.camel.component.netty.http.NettyHttpBinding for binding to/from Netty and Camel Message API. The option is a org.apache.camel.component.netty.http.NettyHttpBinding type. NettyHttpBinding camel.component.netty-http.netty-server-bootstrap-factory To use a custom NettyServerBootstrapFactory. The option is a org.apache.camel.component.netty.NettyServerBootstrapFactory type. NettyServerBootstrapFactory camel.component.netty-http.no-reply-log-level If sync is enabled this option dictates NettyConsumer which logging level to use when logging a there is no reply to send back. LoggingLevel camel.component.netty-http.options Allows to configure additional netty options using option. as prefix. For example option.child.keepAlive=false to set the netty option child.keepAlive=false. See the Netty documentation for possible options that can be used. Map camel.component.netty-http.passphrase Password setting to use in order to encrypt/decrypt payloads sent using SSH. String camel.component.netty-http.producer-pool-block-when-exhausted Sets the value for the blockWhenExhausted configuration attribute. It determines whether to block when the borrowObject() method is invoked when the pool is exhausted (the maximum number of active objects has been reached). true Boolean camel.component.netty-http.producer-pool-enabled Whether producer pool is enabled or not. Important: If you turn this off then a single shared connection is used for the producer, also if you are doing request/reply. That means there is a potential issue with interleaved responses if replies comes back out-of-order. Therefore you need to have a correlation id in both the request and reply messages so you can properly correlate the replies to the Camel callback that is responsible for continue processing the message in Camel. To do this you need to implement NettyCamelStateCorrelationManager as correlation manager and configure it via the correlationManager option. See also the correlationManager option for more details. true Boolean camel.component.netty-http.producer-pool-max-idle Sets the cap on the number of idle instances in the pool. 100 Integer camel.component.netty-http.producer-pool-max-total Sets the cap on the number of objects that can be allocated by the pool (checked out to clients, or idle awaiting checkout) at a given time. Use a negative value for no limit. -1 Integer camel.component.netty-http.producer-pool-max-wait Sets the maximum duration (value in millis) the borrowObject() method should block before throwing an exception when the pool is exhausted and producerPoolBlockWhenExhausted is true. When less than 0, the borrowObject() method may block indefinitely. -1 Long camel.component.netty-http.producer-pool-min-evictable-idle Sets the minimum amount of time (value in millis) an object may sit idle in the pool before it is eligible for eviction by the idle object evictor. 
300000 Long camel.component.netty-http.producer-pool-min-idle Sets the minimum number of instances allowed in the producer pool before the evictor thread (if active) spawns new objects. Integer camel.component.netty-http.receive-buffer-size The TCP/UDP buffer sizes to be used during inbound communication. Size is bytes. 65536 Integer camel.component.netty-http.receive-buffer-size-predictor Configures the buffer size predictor. See details at Jetty documentation and this mail thread. Integer camel.component.netty-http.request-timeout Allows to use a timeout for the Netty producer when calling a remote server. By default no timeout is in use. The value is in milli seconds, so eg 30000 is 30 seconds. The requestTimeout is using Netty's ReadTimeoutHandler to trigger the timeout. Long camel.component.netty-http.reuse-address Setting to facilitate socket multiplexing. true Boolean camel.component.netty-http.reuse-channel This option allows producers and consumers (in client mode) to reuse the same Netty Channel for the lifecycle of processing the Exchange. This is useful if you need to call a server multiple times in a Camel route and want to use the same network connection. When using this, the channel is not returned to the connection pool until the Exchange is done; or disconnected if the disconnect option is set to true. The reused Channel is stored on the Exchange as an exchange property with the key NettyConstants#NETTY_CHANNEL which allows you to obtain the channel during routing and use it as well. false Boolean camel.component.netty-http.security-configuration Refers to a org.apache.camel.component.netty.http.NettyHttpSecurityConfiguration for configuring secure web resources. The option is a org.apache.camel.component.netty.http.NettyHttpSecurityConfiguration type. NettyHttpSecurityConfiguration camel.component.netty-http.security-provider Security provider to be used for payload encryption. Defaults to SunX509 if not set. String camel.component.netty-http.send-buffer-size The TCP/UDP buffer sizes to be used during outbound communication. Size is bytes. 65536 Integer camel.component.netty-http.server-closed-channel-exception-caught-log-level If the server (NettyConsumer) catches an java.nio.channels.ClosedChannelException then its logged using this logging level. This is used to avoid logging the closed channel exceptions, as clients can disconnect abruptly and then cause a flood of closed exceptions in the Netty server. LoggingLevel camel.component.netty-http.server-exception-caught-log-level If the server (NettyConsumer) catches an exception then its logged using this logging level. LoggingLevel camel.component.netty-http.server-initializer-factory To use a custom ServerInitializerFactory. The option is a org.apache.camel.component.netty.ServerInitializerFactory type. ServerInitializerFactory camel.component.netty-http.ssl Setting to specify whether SSL encryption is applied to this endpoint. false Boolean camel.component.netty-http.ssl-client-cert-headers When enabled and in SSL mode, then the Netty consumer will enrich the Camel Message with headers having information about the client certificate such as subject name, issuer name, serial number, and the valid date range. false Boolean camel.component.netty-http.ssl-context-parameters To configure security using SSLContextParameters. The option is a org.apache.camel.support.jsse.SSLContextParameters type. SSLContextParameters camel.component.netty-http.ssl-handler Reference to a class that could be used to return an SSL Handler. 
The option is a io.netty.handler.ssl.SslHandler type. SslHandler camel.component.netty-http.sync Setting to set endpoint as one-way or request-response. true Boolean camel.component.netty-http.tcp-no-delay Setting to improve TCP protocol performance. true Boolean camel.component.netty-http.transfer-exchange Only used for TCP. You can transfer the exchange over the wire instead of just the body. The following fields are transferred: In body, Out body, fault body, In headers, Out headers, fault headers, exchange properties, exchange exception. This requires that the objects are serializable. Camel will exclude any non-serializable objects and log it at WARN level. false Boolean camel.component.netty-http.trust-store-file Server side certificate keystore to be used for encryption. File camel.component.netty-http.trust-store-resource Server side certificate keystore to be used for encryption. Is loaded by default from classpath, but you can prefix with classpath:, file:, or http: to load the resource from different systems. String camel.component.netty-http.unix-domain-socket-path Path to unix domain socket to use instead of inet socket. Host and port parameters will not be used, however required. It is ok to set dummy values for them. Must be used with nativeTransport=true and clientMode=false. String camel.component.netty-http.use-global-ssl-context-parameters Enable usage of global SSL context parameters. false Boolean camel.component.netty-http.using-executor-service Whether to use ordered thread pool, to ensure events are processed orderly on the same channel. true Boolean camel.component.netty-http.worker-count When netty works on nio mode, it uses default workerCount parameter from Netty (which is cpu_core_threads x 2). User can use this option to override the default workerCount from Netty. Integer camel.component.netty-http.worker-group To use a explicit EventLoopGroup as the boss thread pool. For example to share a thread pool with multiple consumers or producers. By default each consumer or producer has their own worker pool with 2 x cpu count core threads. The option is a io.netty.channel.EventLoopGroup type. EventLoopGroup | [
"<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-netty-http-starter</artifactId> </dependency>",
"netty-http:http://0.0.0.0:8080[?options]",
"netty-http:protocol://host:port/path",
"io.netty.handler.codec.http.HttpRequest request = exchange.getIn(NettyHttpMessage.class).getHttpRequest();",
"from(\"netty-http:http://0.0.0.0:8080/foo\") .transform().constant(\"Bye World\");",
"String out = template.requestBody(\"netty-http:http://0.0.0.0:8080/foo\", \"Hello World\", String.class); System.out.println(out);",
"from(\"netty-http:http://0.0.0.0:8123/foo\").to(\"mock:foo\");",
"from(\"netty-http:http://0.0.0.0:8123/foo?matchOnUriPrefix=true\").to(\"mock:foo\");",
"from(\"netty-http:http://0.0.0.0:8123?matchOnUriPrefix=true\").to(\"mock:foo\");",
"from(\"netty-http:http://0.0.0.0:{{port}}/foo\") .to(\"mock:foo\") .transform().constant(\"Bye World\"); from(\"netty-http:http://0.0.0.0:{{port}}/bar\") .to(\"mock:bar\") .transform().constant(\"Bye Camel\");",
"from(\"netty-http:http://0.0.0.0:{{port}}/foo\") .to(\"mock:foo\") .transform().constant(\"Bye World\"); // we cannot have a 2nd route on same port with SSL enabled, when the 1st route is NOT from(\"netty-http:http://0.0.0.0:{{port}}/bar?ssl=true\") .to(\"mock:bar\") .transform().constant(\"Bye Camel\");",
"<bean id=\"nettyHttpBootstrapOptions\" class=\"org.apache.camel.component.netty.NettyServerBootstrapConfiguration\"> <property name=\"backlog\" value=\"200\"/> <property name=\"connectionTimeout\" value=\"20000\"/> <property name=\"workerCount\" value=\"16\"/> </bean>",
"<route> <from uri=\"netty-http:http://0.0.0.0:{{port}}/foo?bootstrapConfiguration=#nettyHttpBootstrapOptions\"/> </route> <route> <from uri=\"netty-http:http://0.0.0.0:{{port}}/bar?bootstrapConfiguration=#nettyHttpBootstrapOptions\"/> </route> <route> <from uri=\"netty-http:http://0.0.0.0:{{port}}/beer?bootstrapConfiguration=#nettyHttpBootstrapOptions\"/> </route>",
"from(\"netty-http:proxy://0.0.0.0:8080\") .toD(\"netty-http:\" + \"USD{headers.\" + Exchange.HTTP_SCHEME + \"}://\" + \"USD{headers.\" + Exchange.HTTP_HOST + \"}:\" + \"USD{headers.\" + Exchange.HTTP_PORT + \"}\") .process(this::processResponse); void processResponse(final Exchange exchange) { final NettyHttpMessage message = exchange.getIn(NettyHttpMessage.class); final FullHttpResponse response = message.getHttpResponse(); final ByteBuf buf = response.content(); final String string = buf.toString(StandardCharsets.UTF_8); buf.resetWriterIndex(); ByteBufUtil.writeUtf8(buf, string.toUpperCase(Locale.US)); }",
"<route> <from uri=\"netty-http:http://0.0.0.0:{{port}}/foo?securityConfiguration.realm=karaf\"/> </route>",
"<bean id=\"constraint\" class=\"org.apache.camel.component.netty.http.SecurityConstraintMapping\"> <!-- inclusions defines url -> roles restrictions --> <!-- a * should be used for any role accepted (or even no roles) --> <property name=\"inclusions\"> <map> <entry key=\"/*\" value=\"*\"/> <entry key=\"/admin/*\" value=\"admin\"/> <entry key=\"/guest/*\" value=\"admin,guest\"/> </map> </property> <!-- exclusions is used to define public urls, which requires no authentication --> <property name=\"exclusions\"> <set> <value>/public/*</value> </set> </property> </bean>",
"<route> <from uri=\"netty-http:http://0.0.0.0:{{port}}/foo?matchOnUriPrefix=true&securityConfiguration.realm=karaf&securityConfiguration.securityConstraint=#constraint\"/> </route>"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.4/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-netty-http-component-starter |
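The camel.component.netty-http.* options above are ordinary Spring Boot configuration properties, so they can also be set in an application.yml file once the starter dependency shown in the commands is on the classpath. The following snippet is a minimal sketch that assumes Spring Boot's relaxed property binding; the option names are taken from the table above, while the specific values are illustrative only and are not recommendations from the component documentation.

# application.yml sketch (illustrative values only)
camel:
  component:
    netty-http:
      sync: true                                # request-response exchanges
      tcp-no-delay: true                        # keep the TCP_NODELAY performance setting
      worker-count: 16                          # override Netty's default of cpu_core_threads x 2
      use-global-ssl-context-parameters: false  # leave global SSL context parameters disabled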
2.2. Types | 2.2. Types The main permission control method used in SELinux targeted policy to provide advanced process isolation is Type Enforcement. All files and processes are labeled with a type: types define a SELinux domain for processes and a SELinux type for files. SELinux policy rules define how types access each other, whether it be a domain accessing a type, or a domain accessing another domain. Access is only allowed if a specific SELinux policy rule exists that allows it. The following example creates a new file in the /var/www/html/ directory, and shows the file inheriting the httpd_sys_content_t type from its parent directory ( /var/www/html/ ): Run the ls -dZ /var/www/html command to view the SELinux context of /var/www/html/ : This shows /var/www/html/ is labeled with the httpd_sys_content_t type. Run the touch /var/www/html/file1 command as the root user to create a new file. Run the ls -Z /var/www/html/file1 command to view the SELinux context: The ls -Z command shows file1 labeled with the httpd_sys_content_t type. SELinux allows httpd to read files labeled with this type, but not write to them, even if Linux permissions allow write access. SELinux policy defines what types a process running in the httpd_t domain (where httpd runs) can read and write to. This helps prevent processes from accessing files intended for use by another process. For example, httpd can access files labeled with the httpd_sys_content_t type (intended for the Apache HTTP Server), but by default, cannot access files labeled with the samba_share_t type (intended for Samba). Also, files in user home directories are labeled with the user_home_t type: by default, this prevents httpd from reading or writing to files in user home directories. The following lists some of the types used with httpd . Different types allow you to configure flexible access: httpd_sys_content_t Use this type for static web content, such as .html files used by a static website. Files labeled with this type are accessible (read only) to httpd and scripts executed by httpd . By default, files and directories labeled with this type cannot be written to or modified by httpd or other processes. Note that by default, files created in or copied into /var/www/html/ are labeled with the httpd_sys_content_t type. httpd_sys_script_exec_t Use this type for scripts you want httpd to execute. This type is commonly used for Common Gateway Interface (CGI) scripts in /var/www/cgi-bin/ . By default, SELinux policy prevents httpd from executing CGI scripts. To allow this, label the scripts with the httpd_sys_script_exec_t type and enable the httpd_enable_cgi Boolean. Scripts labeled with httpd_sys_script_exec_t run in the httpd_sys_script_t domain when executed by httpd . The httpd_sys_script_t domain has access to other system domains, such as postgresql_t and mysqld_t . httpd_sys_rw_content_t Files labeled with this type can be written to by scripts labeled with the httpd_sys_script_exec_t type, but cannot be modified by scripts labeled with any other type. You must use the httpd_sys_rw_content_t type to label files that will be read from and written to by scripts labeled with the httpd_sys_script_exec_t type. httpd_sys_ra_content_t Files labeled with this type can be appended to by scripts labeled with the httpd_sys_script_exec_t type, but cannot be modified by scripts labeled with any other type. 
You must use the httpd_sys_ra_content_t type to label files that will be read from and appended to by scripts labeled with the httpd_sys_script_exec_t type. httpd_unconfined_script_exec_t Scripts labeled with this type run without SELinux protection. Only use this type for complex scripts, after exhausting all other options. It is better to use this type instead of disabling SELinux protection for httpd , or for the entire system. Note To see more of the available types for httpd, run the following command: Procedure 2.1. Changing the SELinux Context The type for files and directories can be changed with the chcon command. Changes made with chcon do not survive a file system relabel or the restorecon command. SELinux policy controls whether users are able to modify the SELinux context for any given file. The following example demonstrates creating a new directory and an index.html file for use by httpd , and labeling that file and directory to allow httpd access to them: Run the mkdir -p /my/website command as the root user to create a top-level directory structure to store files to be used by httpd . Files and directories that do not match a pattern in file-context configuration may be labeled with the default_t type. This type is inaccessible to confined services: Run the chcon -R -t httpd_sys_content_t /my/ command as the root user to change the type of the /my/ directory and subdirectories, to a type accessible to httpd . Now, files created under /my/website/ inherit the httpd_sys_content_t type, rather than the default_t type, and are therefore accessible to httpd: Refer to the Temporary Changes: chcon section of the Red Hat Enterprise Linux 6 SELinux User Guide for further information about chcon . Use the semanage fcontext command ( semanage is provided by the policycoreutils-python package) to make label changes that survive a relabel and the restorecon command. This command adds changes to file-context configuration. Then, run restorecon , which reads file-context configuration, to apply the label change. The following example demonstrates creating a new directory and an index.html file for use by httpd , and persistently changing the label of that directory and file to allow httpd access to them: Run the mkdir -p /my/website command as the root user to create a top-level directory structure to store files to be used by httpd . Run the following command as the root user to add the label change to file-context configuration: The "/my(/.*)?" expression means the label change applies to the /my/ directory and all files and directories under it. Run the touch /my/website/index.html command as the root user to create a new file. Run the restorecon -R -v /my/ command as the root user to apply the label changes ( restorecon reads file-context configuration, which was modified by the semanage command in step 2): Refer to the Persistent Changes: semanage fcontext section of the Red Hat Enterprise Linux SELinux User Guide for further information on semanage. | [
"~]USD ls -dZ /var/www/html drwxr-xr-x root root system_u:object_r:httpd_sys_content_t:s0 /var/www/html",
"~]USD ls -Z /var/www/html/file1 -rw-r--r-- root root unconfined_u:object_r:httpd_sys_content_t:s0 /var/www/html/file1",
"~]USD grep httpd /etc/selinux/targeted/contexts/files/file_contexts",
"~]USD ls -dZ /my drwxr-xr-x root root unconfined_u:object_r:default_t:s0 /my",
"~]# chcon -R -t httpd_sys_content_t /my/ ~]# touch /my/website/index.html ~]# ls -Z /my/website/index.html -rw-r--r-- root root unconfined_u:object_r:httpd_sys_content_t:s0 /my/website/index.html",
"~]# semanage fcontext -a -t httpd_sys_content_t \"/my(/.*)?\"",
"~]# restorecon -R -v /my/ restorecon reset /my context unconfined_u:object_r:default_t:s0->system_u:object_r:httpd_sys_content_t:s0 restorecon reset /my/website context unconfined_u:object_r:default_t:s0->system_u:object_r:httpd_sys_content_t:s0 restorecon reset /my/website/index.html context unconfined_u:object_r:default_t:s0->system_u:object_r:httpd_sys_content_t:s0"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/managing_confined_services/sect-managing_confined_services-the_apache_http_server-types |
Chapter 5. SelfSubjectRulesReview [authorization.openshift.io/v1] | Chapter 5. SelfSubjectRulesReview [authorization.openshift.io/v1] Description SelfSubjectRulesReview is a resource you can create to determine which actions you can perform in a namespace Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required spec 5.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds spec object SelfSubjectRulesReviewSpec adds information about how to conduct the check status object SubjectRulesReviewStatus contains the result of a rules check 5.1.1. .spec Description SelfSubjectRulesReviewSpec adds information about how to conduct the check Type object Required scopes Property Type Description scopes array (string) Scopes to use for the evaluation. Empty means "use the unscoped (full) permissions of the user/groups". Nil means "use the scopes on this request". 5.1.2. .status Description SubjectRulesReviewStatus contains the result of a rules check Type object Required rules Property Type Description evaluationError string EvaluationError can appear in combination with Rules. It means some error happened during evaluation that may have prevented additional rules from being populated. rules array Rules is the list of rules (no particular sort) that are allowed for the subject rules[] object PolicyRule holds information that describes a policy rule, but does not contain information about who the rule applies to or which namespace the rule applies to. 5.1.3. .status.rules Description Rules is the list of rules (no particular sort) that are allowed for the subject Type array 5.1.4. .status.rules[] Description PolicyRule holds information that describes a policy rule, but does not contain information about who the rule applies to or which namespace the rule applies to. Type object Required verbs resources Property Type Description apiGroups array (string) APIGroups is the name of the APIGroup that contains the resources. If this field is empty, then both kubernetes and origin API groups are assumed. That means that if an action is requested against one of the enumerated resources in either the kubernetes or the origin API group, the request will be allowed. attributeRestrictions RawExtension AttributeRestrictions will vary depending on what the Authorizer/AuthorizationAttributeBuilder pair supports. If the Authorizer does not recognize how to handle the AttributeRestrictions, the Authorizer should report an error. nonResourceURLs array (string) NonResourceURLsSlice is a set of partial urls that a user should have access to. *s are allowed, but only as the full, final step in the path. This name is intentionally different from the internal type so that the DefaultConvert works nicely and because the ordering may be different. resourceNames array (string) ResourceNames is an optional white list of names that the rule applies to.
An empty set means that everything is allowed. resources array (string) Resources is a list of resources this rule applies to. ResourceAll represents all resources. verbs array (string) Verbs is a list of Verbs that apply to ALL the ResourceKinds and AttributeRestrictions contained in this rule. VerbAll represents all kinds. 5.2. API endpoints The following API endpoints are available: /apis/authorization.openshift.io/v1/namespaces/{namespace}/selfsubjectrulesreviews POST : create a SelfSubjectRulesReview 5.2.1. /apis/authorization.openshift.io/v1/namespaces/{namespace}/selfsubjectrulesreviews Table 5.1. Global query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. HTTP method POST Description create a SelfSubjectRulesReview Table 5.2. Body parameters Parameter Type Description body SelfSubjectRulesReview schema Table 5.3. HTTP responses HTTP code Response body 200 - OK SelfSubjectRulesReview schema 201 - Created SelfSubjectRulesReview schema 202 - Accepted SelfSubjectRulesReview schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/authorization_apis/selfsubjectrulesreview-authorization-openshift-io-v1
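Putting the schema above together, the request body for the POST endpoint is a small manifest. The sketch below is an assumed example, not taken from this document: an empty scopes list asks for the full, unscoped permissions of the current user, and the server fills in status.rules (and, if needed, status.evaluationError) in the response. It could be submitted with a command such as oc create -f <file> -n <namespace> -o yaml.

apiVersion: authorization.openshift.io/v1
kind: SelfSubjectRulesReview
spec:
  # Empty means "use the unscoped (full) permissions of the user/groups".
  scopes: []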
Chapter 17. TCP Wrappers and xinetd | Chapter 17. TCP Wrappers and xinetd Controlling access to network services is one of the most important security tasks facing a server administrator. Red Hat Enterprise Linux provides several tools which do just that. For instance, an iptables -based firewall filters out unwelcome network packets within the kernel's network stack. For network services that utilize it, TCP wrappers add an additional layer of protection by defining which hosts are or are not allowed to connect to " wrapped " network services. One such wrapped network service is the xinetd super server . This service is called a super server because it controls connections to a subset of network services and further refines access control. Figure 17.1, "Access Control to Network Services" is a basic illustration of how these tools work together to protect network services. Figure 17.1. Access Control to Network Services This chapter focuses on the role of TCP wrappers and xinetd in controlling access to network services and reviews how these tools can be used to enhance both logging and utilization management. For a discussion of using firewalls with iptables , refer to Chapter 18, iptables . 17.1. TCP Wrappers The TCP wrappers package ( tcp_wrappers ) is installed by default and provides host-based access control to network services. The most important component within the package is the /usr/lib/libwrap.a library. In general terms, a TCP wrapped service is one that has been compiled against the libwrap.a library. When a connection attempt is made to a TCP wrapped service, the service first references the hosts access files ( /etc/hosts.allow and /etc/hosts.deny ) to determine whether or not the client host is allowed to connect. In most cases, it then uses the syslog daemon ( syslogd ) to write the name of the requesting host and the requested service to /var/log/secure or /var/log/messages . If a client host is allowed to connect, TCP wrappers release control of the connection to the requested service and do not interfere further with communication between the client host and the server. In addition to access control and logging, TCP wrappers can activate commands to interact with the client before denying or releasing control of the connection to the requested network service. Because TCP wrappers are a valuable addition to any server administrator's arsenal of security tools, most network services within Red Hat Enterprise Linux are linked against the libwrap.a library. Some such applications include /usr/sbin/sshd , /usr/sbin/sendmail , and /usr/sbin/xinetd . Note To determine if a network service binary is linked against libwrap.a , type the following command as the root user: Replace <binary-name> with the name of the network service binary. If a prompt is returned, then the network service is not linked against libwrap.a . 17.1.1. Advantages of TCP Wrappers TCP wrappers provide the following advantages over other network service control techniques: Transparency to both the client host and the wrapped network service - Both the connecting client and the wrapped network service are unaware that TCP wrappers are in use. Legitimate users are logged and connected to the requested service while connections from banned clients fail. Centralized management of multiple protocols - TCP wrappers operate separately from the network services they protect, allowing many server applications to share a common set of configuration files for simpler management. | [
"ldd binary-name | grep libwrap"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/ch-tcpwrappers |
14.5. Samba Account Information Databases | 14.5. Samba Account Information Databases The latest release of Samba offers many new features including new password database backends not previously available. Samba version 3.0.0 fully supports all databases used in previous versions of Samba. However, although supported, many backends may not be suitable for production use. 14.5.1. Backward Compatible Backends Plain Text Plain text backends are nothing more than the /etc/passwd type backends. With a plain text backend, all usernames and passwords are sent unencrypted between the client and the Samba server. This method is very insecure and is not recommended for use by any means. It is possible that different Windows clients connecting to the Samba server with plain text passwords cannot support such an authentication method. smbpasswd A popular backend used in Samba packages, the smbpasswd backend utilizes a plain ASCII text layout that includes the MS Windows LanMan and NT account, and encrypted password information. The smbpasswd backend lacks the storage of the Windows NT/2000/2003 SAM extended controls. The smbpasswd backend is not recommended because it does not scale well or hold any Windows information, such as RIDs for NT-based groups. The tdbsam backend solves these issues for use in a smaller database (250 users), but is still not an enterprise-class solution. Warning This type of backend may be deprecated for future releases and replaced by the tdbsam backend, which does include the SAM extended controls. ldapsam_compat The ldapsam_compat backend allows continued OpenLDAP support for use with upgraded versions of Samba. This option is ideal for migration, but is not required. This tool will eventually be deprecated. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s1-samba-account-info-dbs
8.2.3. Displaying Package Information | 8.2.3. Displaying Package Information To display information about one or more packages (glob expressions are valid here as well), use the following command: yum info package_name For example, to display information about the abrt package, type: The yum info package_name command is similar to the rpm -q --info package_name command, but provides as additional information the ID of the Yum repository the RPM package is found in (look for the From repo: line in the output). You can also query the Yum database for alternative and useful information about a package by using the following command: yumdb info package_name This command provides additional information about a package, including the check sum of the package (and algorithm used to produce it, such as SHA-256), the command given on the command line that was invoked to install the package (if any), and the reason that the package is installed on the system (where user indicates it was installed by the user, and dep means it was brought in as a dependency). For example, to display additional information about the yum package, type: For more information on the yumdb command, see the yumdb (8) manual page. Listing Files Contained in a Package repoquery is a program for querying information from yum repositories similarly to rpm queries. You can query both package groups and individual packages. To list all files contained in a specific package, type: repoquery --list package_name Replace package_name with a name of the package you want to inspect. For more information on the repoquery command, see the repoquery manual page. To find out which package provides a specific file, you can use the yum provides command, described in Finding which package owns a file | [
"~]# yum info abrt Loaded plugins: product-id, refresh-packagekit, subscription-manager Updating Red Hat repositories. INFO:rhsm-app.repolib:repos updated: 0 Installed Packages Name : abrt Arch : x86_64 Version : 1.0.7 Release : 5.el6 Size : 578 k Repo : installed From repo : rhel Summary : Automatic bug detection and reporting tool URL : https://fedorahosted.org/abrt/ License : GPLv2+ Description: abrt is a tool to help users to detect defects in applications : and to create a bug report with all informations needed by : maintainer to fix it. It uses plugin system to extend its : functionality.",
"~]# yumdb info yum Loaded plugins: product-id, refresh-packagekit, subscription-manager yum-3.2.27-4.el6.noarch checksum_data = 23d337ed51a9757bbfbdceb82c4eaca9808ff1009b51e9626d540f44fe95f771 checksum_type = sha256 from_repo = rhel from_repo_revision = 1298613159 from_repo_timestamp = 1298614288 installed_by = 4294967295 reason = user releasever = 6.1"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/sec-Displaying_Package_Information |
Chapter 10. Disconnected environment | Chapter 10. Disconnected environment Disconnected environment is a network restricted environment where the Operator Lifecycle Manager (OLM) cannot access the default Operator Hub and image registries, which require internet connectivity. Red Hat supports deployment of OpenShift Data Foundation in disconnected environments where you have installed OpenShift Container Platform in restricted networks. To install OpenShift Data Foundation in a disconnected environment, see Using Operator Lifecycle Manager on restricted networks of the Operators guide in OpenShift Container Platform documentation. Note When you install OpenShift Data Foundation in a restricted network environment, apply a custom Network Time Protocol (NTP) configuration to the nodes, because by default, internet connectivity is assumed in OpenShift Container Platform and chronyd is configured to use the *.rhel.pool.ntp.org servers. For more information, see the Red Hat Knowledgebase solution A newly deployed OCS 4 cluster status shows as "Degraded", Why? and Configuring chrony time service of the Installing guide in OpenShift Container Platform documentation. Red Hat OpenShift Data Foundation version 4.12 introduced the Agent-based Installer for disconnected environment deployment. The Agent-based Installer allows you to use a mirror registry for disconnected installations. For more information, see Preparing to install with Agent-based Installer . Packages to include for OpenShift Data Foundation When you prune the redhat-operator index image, include the following list of packages for the OpenShift Data Foundation deployment: ocs-operator odf-operator mcg-operator odf-csi-addons-operator odr-cluster-operator odr-hub-operator Optional: local-storage-operator Only for local storage deployments. Optional: odf-multicluster-orchestrator Only for Regional Disaster Recovery (Regional-DR) configuration. Important Name the CatalogSource as redhat-operators . | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.16/html/planning_your_deployment/disconnected-environment_rhodf |
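To make the pruned index usable by OLM on the disconnected cluster, it is exposed through a CatalogSource that, per the note above, must be named redhat-operators. The following manifest is a generic sketch of such a CatalogSource; the registry host and index image tag are placeholders and are not taken from this document.

apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: redhat-operators               # required name, see the note above
  namespace: openshift-marketplace
spec:
  sourceType: grpc
  image: <mirror-registry>/<pruned-redhat-operator-index>:<tag>   # placeholder image reference
  displayName: Red Hat Operators (mirrored)
  publisher: Red Hat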
3.5. Logging | 3.5. Logging All message output passes through a logging module with independent choices of logging levels for: standard output/error syslog log file external log function The logging levels are set in the /etc/lvm/lvm.conf file, which is described in Appendix B, The LVM Configuration Files . | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/cluster_logical_volume_manager/logging |
7.3. RHBA-2014:1498 - new packages: gdisk | 7.3. RHBA-2014:1498 - new packages: gdisk New gdisk packages are now available for Red Hat Enterprise Linux 6. The gdisk packages provide an fdisk-like partitioning tool for GPT disks. GPT fdisk features a command-line interface, fairly direct manipulation of partition table structures, recovery tools for dealing with corrupt partition tables, and the ability to convert MBR disks to GPT format. This enhancement update adds the gdisk packages to Red Hat Enterprise Linux 6. (BZ# 1015157 ) All users who require gdisk are advised to install these new packages. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.6_technical_notes/rhba-2014-1498
Chapter 7. Creating infrastructure machine sets | Chapter 7. Creating infrastructure machine sets Important You can use the advanced machine management and scaling capabilities only in clusters where the Machine API is operational. Clusters with user-provisioned infrastructure require additional validation and configuration to use the Machine API. Clusters with the infrastructure platform type none cannot use the Machine API. This limitation applies even if the compute machines that are attached to the cluster are installed on a platform that supports the feature. This parameter cannot be changed after installation. To view the platform type for your cluster, run the following command: USD oc get infrastructure cluster -o jsonpath='{.status.platform}' You can use infrastructure machine sets to create machines that host only infrastructure components, such as the default router, the integrated container image registry, and the components for cluster metrics and monitoring. These infrastructure machines are not counted toward the total number of subscriptions that are required to run the environment. In a production deployment, it is recommended that you deploy at least three machine sets to hold infrastructure components. Both OpenShift Logging and Red Hat OpenShift Service Mesh deploy Elasticsearch, which requires three instances to be installed on different nodes. Each of these nodes can be deployed to different availability zones for high availability. This configuration requires three different machine sets, one for each availability zone. In global Azure regions that do not have multiple availability zones, you can use availability sets to ensure high availability. 7.1. OpenShift Container Platform infrastructure components The following infrastructure workloads do not incur OpenShift Container Platform worker subscriptions: Kubernetes and OpenShift Container Platform control plane services that run on masters The default router The integrated container image registry The HAProxy-based Ingress Controller The cluster metrics collection, or monitoring service, including components for monitoring user-defined projects Cluster aggregated logging Service brokers Red Hat Quay Red Hat OpenShift Container Storage Red Hat Advanced Cluster Manager Red Hat Advanced Cluster Security for Kubernetes Red Hat OpenShift GitOps Red Hat OpenShift Pipelines Any node that runs any other container, pod, or component is a worker node that your subscription must cover. For information on infrastructure nodes and which components can run on infrastructure nodes, see the "Red Hat OpenShift control plane and infrastructure nodes" section in the OpenShift sizing and subscription guide for enterprise Kubernetes document. To create an infrastructure node, you can use a machine set , label the node , or use a machine config pool . 7.2. Creating infrastructure machine sets for production environments In a production deployment, it is recommended that you deploy at least three machine sets to hold infrastructure components. Both OpenShift Logging and Red Hat OpenShift Service Mesh deploy Elasticsearch, which requires three instances to be installed on different nodes. Each of these nodes can be deployed to different availability zones for high availability. A configuration like this requires three different machine sets, one for each availability zone. In global Azure regions that do not have multiple availability zones, you can use availability sets to ensure high availability. 7.2.1. 
Creating machine sets for different clouds Use the sample machine set for your cloud. 7.2.1.1. Sample YAML for a machine set custom resource on AWS This sample YAML defines a machine set that runs in the us-east-1a Amazon Web Services (AWS) zone and creates nodes that are labeled with node-role.kubernetes.io/infra: "" . In this sample, <infrastructure_id> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <infra> is the node label to add. apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-infra-<zone> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 3 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra-<zone> 4 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machine-role: <infra> 6 machine.openshift.io/cluster-api-machine-type: <infra> 7 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra-<zone> 8 spec: metadata: labels: node-role.kubernetes.io/infra: "" 9 taints: 10 - key: node-role.kubernetes.io/infra effect: NoSchedule providerSpec: value: ami: id: ami-046fe691f52a953f9 11 apiVersion: awsproviderconfig.openshift.io/v1beta1 blockDevices: - ebs: iops: 0 volumeSize: 120 volumeType: gp2 credentialsSecret: name: aws-cloud-credentials deviceIndex: 0 iamInstanceProfile: id: <infrastructure_id>-worker-profile 12 instanceType: m4.large kind: AWSMachineProviderConfig placement: availabilityZone: <zone> 13 region: <region> 14 securityGroups: - filters: - name: tag:Name values: - <infrastructure_id>-worker-sg 15 subnet: filters: - name: tag:Name values: - <infrastructure_id>-private-<zone> 16 tags: - name: kubernetes.io/cluster/<infrastructure_id> 17 value: owned userDataSecret: name: worker-user-data 1 3 5 12 15 17 Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command: USD oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster 2 4 8 Specify the infrastructure ID, <infra> node label, and zone. 6 7 9 Specify the <infra> node label. 10 Specify a taint to prevent user workloads from being scheduled on infra nodes. 11 Specify a valid Red Hat Enterprise Linux CoreOS (RHCOS) AMI for your AWS zone for your OpenShift Container Platform nodes. If you want to use an AWS Marketplace image, you must complete the OpenShift Container Platform subscription from the AWS Marketplace to obtain an AMI ID for your region. USD oc -n openshift-machine-api \ -o jsonpath='{.spec.template.spec.providerSpec.value.ami.id}{"\n"}' \ get machineset/<infrastructure_id>-worker-<zone> 13 Specify the zone, for example, us-east-1a . 14 Specify the region, for example, us-east-1 . 16 Specify the infrastructure ID and zone. Machine sets running on AWS support non-guaranteed Spot Instances . You can save on costs by using Spot Instances at a lower price compared to On-Demand Instances on AWS. Configure Spot Instances by adding spotMarketOptions to the MachineSet YAML file. 7.2.1.2. 
Sample YAML for a machine set custom resource on Azure This sample YAML defines a machine set that runs in the 1 Microsoft Azure zone in a region and creates nodes that are labeled with node-role.kubernetes.io/infra: "" . In this sample, <infrastructure_id> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <infra> is the node label to add. apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <infra> 2 machine.openshift.io/cluster-api-machine-type: <infra> 3 name: <infrastructure_id>-infra-<region> 4 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra-<region> 6 template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 7 machine.openshift.io/cluster-api-machine-role: <infra> 8 machine.openshift.io/cluster-api-machine-type: <infra> 9 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra-<region> 10 spec: metadata: creationTimestamp: null labels: node-role.kubernetes.io/infra: "" 11 providerSpec: value: apiVersion: azureproviderconfig.openshift.io/v1beta1 credentialsSecret: name: azure-cloud-credentials namespace: openshift-machine-api image: 12 offer: "" publisher: "" resourceID: /resourceGroups/<infrastructure_id>-rg/providers/Microsoft.Compute/images/<infrastructure_id> 13 sku: "" version: "" internalLoadBalancer: "" kind: AzureMachineProviderSpec location: <region> 14 managedIdentity: <infrastructure_id>-identity 15 metadata: creationTimestamp: null natRule: null networkResourceGroup: "" osDisk: diskSizeGB: 128 managedDisk: storageAccountType: Premium_LRS osType: Linux publicIP: false publicLoadBalancer: "" resourceGroup: <infrastructure_id>-rg 16 sshPrivateKey: "" sshPublicKey: "" subnet: <infrastructure_id>-<role>-subnet 17 18 userDataSecret: name: worker-user-data 19 vmSize: Standard_DS4_v2 vnet: <infrastructure_id>-vnet 20 zone: "1" 21 taints: 22 - key: node-role.kubernetes.io/infra effect: NoSchedule 1 5 7 15 16 17 20 Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command: USD oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster You can obtain the subnet by running the following command: USD oc -n openshift-machine-api \ -o jsonpath='{.spec.template.spec.providerSpec.value.subnet}{"\n"}' \ get machineset/<infrastructure_id>-worker-centralus1 You can obtain the vnet by running the following command: USD oc -n openshift-machine-api \ -o jsonpath='{.spec.template.spec.providerSpec.value.vnet}{"\n"}' \ get machineset/<infrastructure_id>-worker-centralus1 2 3 8 9 11 18 19 Specify the <infra> node label. 4 6 10 Specify the infrastructure ID, <infra> node label, and region. 12 Specify the image details for your machine set. If you want to use an Azure Marketplace image, see "Selecting an Azure Marketplace image". 13 Specify an image that is compatible with your instance type. The Hyper-V generation V2 images created by the installation program have a -gen2 suffix, while V1 images have the same name without the suffix. 14 Specify the region to place machines on. 
21 Specify the zone within your region to place machines on. Be sure that your region supports the zone that you specify. 22 Specify a taint to prevent user workloads from being scheduled on infra nodes. Machine sets running on Azure support non-guaranteed Spot VMs . You can save on costs by using Spot VMs at a lower price compared to standard VMs on Azure. You can configure Spot VMs by adding spotVMOptions to the MachineSet YAML file. Additional resources Selecting an Azure Marketplace image 7.2.1.3. Sample YAML for a machine set custom resource on GCP This sample YAML defines a machine set that runs in Google Cloud Platform (GCP) and creates nodes that are labeled with node-role.kubernetes.io/infra: "" . In this sample, <infrastructure_id> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <infra> is the node label to add. apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-w-a namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-w-a template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <infra> 2 machine.openshift.io/cluster-api-machine-type: <infra> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-w-a spec: metadata: labels: node-role.kubernetes.io/infra: "" providerSpec: value: apiVersion: gcpprovider.openshift.io/v1beta1 canIPForward: false credentialsSecret: name: gcp-cloud-credentials deletionProtection: false disks: - autoDelete: true boot: true image: <path_to_image> 3 labels: null sizeGb: 128 type: pd-ssd gcpMetadata: 4 - key: <custom_metadata_key> value: <custom_metadata_value> kind: GCPMachineProviderSpec machineType: n1-standard-4 metadata: creationTimestamp: null networkInterfaces: - network: <infrastructure_id>-network subnetwork: <infrastructure_id>-worker-subnet projectID: <project_name> 5 region: us-central1 serviceAccounts: - email: <infrastructure_id>-w@<project_name>.iam.gserviceaccount.com scopes: - https://www.googleapis.com/auth/cloud-platform tags: - <infrastructure_id>-worker userDataSecret: name: worker-user-data zone: us-central1-a taints: 6 - key: node-role.kubernetes.io/infra effect: NoSchedule 1 For <infrastructure_id> , specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command: USD oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster 2 For <infra> , specify the <infra> node label. 3 Specify the path to the image that is used in current compute machine sets. 
If you have the OpenShift CLI installed, you can obtain the path to the image by running the following command: USD oc -n openshift-machine-api \ -o jsonpath='{.spec.template.spec.providerSpec.value.disks[0].image}{"\n"}' \ get machineset/<infrastructure_id>-worker-a To use a GCP Marketplace image, specify the offer to use: OpenShift Container Platform: https://www.googleapis.com/compute/v1/projects/redhat-marketplace-public/global/images/redhat-coreos-ocp-48-x86-64-202210040145 OpenShift Platform Plus: https://www.googleapis.com/compute/v1/projects/redhat-marketplace-public/global/images/redhat-coreos-opp-48-x86-64-202206140145 OpenShift Kubernetes Engine: https://www.googleapis.com/compute/v1/projects/redhat-marketplace-public/global/images/redhat-coreos-oke-48-x86-64-202206140145 4 Optional: Specify custom metadata in the form of a key:value pair. For example use cases, see the GCP documentation for setting custom metadata . 5 For <project_name> , specify the name of the GCP project that you use for your cluster. 6 Specify a taint to prevent user workloads from being scheduled on infra nodes. Machine sets running on GCP support non-guaranteed preemptible VM instances . You can save on costs by using preemptible VM instances at a lower price compared to normal instances on GCP. You can configure preemptible VM instances by adding preemptible to the MachineSet YAML file. 7.2.1.4. Sample YAML for a machine set custom resource on RHOSP This sample YAML defines a machine set that runs on Red Hat OpenStack Platform (RHOSP) and creates nodes that are labeled with node-role.kubernetes.io/infra: "" . In this sample, <infrastructure_id> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <infra> is the node label to add. 
apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <infra> 2 machine.openshift.io/cluster-api-machine-type: <infra> 3 name: <infrastructure_id>-infra 4 namespace: openshift-machine-api spec: replicas: <number_of_replicas> selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra 6 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 7 machine.openshift.io/cluster-api-machine-role: <infra> 8 machine.openshift.io/cluster-api-machine-type: <infra> 9 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra 10 spec: metadata: creationTimestamp: null labels: node-role.kubernetes.io/infra: "" taints: 11 - key: node-role.kubernetes.io/infra effect: NoSchedule providerSpec: value: apiVersion: openstackproviderconfig.openshift.io/v1alpha1 cloudName: openstack cloudsSecret: name: openstack-cloud-credentials namespace: openshift-machine-api flavor: <nova_flavor> image: <glance_image_name_or_location> serverGroupID: <optional_UUID_of_server_group> 12 kind: OpenstackProviderSpec networks: 13 - filter: {} subnets: - filter: name: <subnet_name> tags: openshiftClusterID=<infrastructure_id> 14 primarySubnet: <rhosp_subnet_UUID> 15 securityGroups: - filter: {} name: <infrastructure_id>-worker 16 serverMetadata: Name: <infrastructure_id>-worker 17 openshiftClusterID: <infrastructure_id> 18 tags: - openshiftClusterID=<infrastructure_id> 19 trunk: true userDataSecret: name: worker-user-data 20 availabilityZone: <optional_openstack_availability_zone> 1 5 7 14 16 17 18 19 Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command: USD oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster 2 3 8 9 20 Specify the <infra> node label. 4 6 10 Specify the infrastructure ID and <infra> node label. 11 Specify a taint to prevent user workloads from being scheduled on infra nodes. 12 To set a server group policy for the MachineSet, enter the value that is returned from creating a server group . For most deployments, anti-affinity or soft-anti-affinity policies are recommended. 13 Required for deployments to multiple networks. If deploying to multiple networks, this list must include the network that is used as the primarySubnet value. 15 Specify the RHOSP subnet that you want the endpoints of nodes to be published on. Usually, this is the same subnet that is used as the value of machinesSubnet in the install-config.yaml file. 7.2.1.5. Sample YAML for a machine set custom resource on RHV This sample YAML defines a machine set that runs on RHV and creates nodes that are labeled with node-role.kubernetes.io/<node_role>: "" . In this sample, <infrastructure_id> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <role> is the node label to add. 
apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <role> 2 machine.openshift.io/cluster-api-machine-type: <role> 3 name: <infrastructure_id>-<role> 4 namespace: openshift-machine-api spec: replicas: <number_of_replicas> 5 selector: 6 matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 7 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> 8 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 9 machine.openshift.io/cluster-api-machine-role: <role> 10 machine.openshift.io/cluster-api-machine-type: <role> 11 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> 12 spec: metadata: labels: node-role.kubernetes.io/<role>: "" 13 providerSpec: value: apiVersion: ovirtproviderconfig.machine.openshift.io/v1beta1 cluster_id: <ovirt_cluster_id> 14 template_name: <ovirt_template_name> 15 instance_type_id: <instance_type_id> 16 cpu: 17 sockets: <number_of_sockets> 18 cores: <number_of_cores> 19 threads: <number_of_threads> 20 memory_mb: <memory_size> 21 os_disk: 22 size_gb: <disk_size> 23 network_interfaces: 24 vnic_profile_id: <vnic_profile_id> 25 credentialsSecret: name: ovirt-credentials 26 kind: OvirtMachineProviderSpec type: <workload_type> 27 auto_pinning_policy: <auto_pinning_policy> 28 hugepages: <hugepages> 29 affinityGroupsNames: - compute 30 userDataSecret: name: worker-user-data 1 7 9 Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI ( oc ) installed, you can obtain the infrastructure ID by running the following command: USD oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster 2 3 10 11 13 Specify the node label to add. 4 8 12 Specify the infrastructure ID and node label. These two strings together cannot be longer than 35 characters. 5 Specify the number of machines to create. 6 Selector for the machines. 14 Specify the UUID for the RHV cluster to which this VM instance belongs. 15 Specify the RHV VM template to use to create the machine. 16 Optional: Specify the VM instance type. Warning The instance_type_id field is deprecated and will be removed in a future release. If you include this parameter, you do not need to specify the hardware parameters of the VM including CPU and memory because this parameter overrides all hardware parameters. 17 Optional: The CPU field contains the CPU's configuration, including sockets, cores, and threads. 18 Optional: Specify the number of sockets for a VM. 19 Optional: Specify the number of cores per socket. 20 Optional: Specify the number of threads per core. 21 Optional: Specify the size of a VM's memory in MiB. 22 Optional: Root disk of the node. 23 Optional: Specify the size of the bootable disk in GiB. 24 Optional: List of the network interfaces of the VM. If you include this parameter, OpenShift Container Platform discards all network interfaces from the template and creates new ones. 25 Optional: Specify the vNIC profile ID. 26 Specify the name of the secret that holds the RHV credentials. 27 Optional: Specify the workload type for which the instance is optimized. This value affects the RHV VM parameter. Supported values: desktop , server (default), high_performance . high_performance improves performance on the VM, but there are limitations. For example, you cannot access the VM with a graphical console. 
For more information, see Configuring High Performance Virtual Machines, Templates, and Pools in the Virtual Machine Management Guide . 28 Optional: AutoPinningPolicy defines the policy that automatically sets CPU and NUMA settings, including pinning to the host for this instance. Supported values: none , resize_and_pin . For more information, see Setting NUMA Nodes in the Virtual Machine Management Guide . 29 Optional: Hugepages is the size in KiB for defining hugepages in a VM. Supported values: 2048 or 1048576 . For more information, see Configuring Huge Pages in the Virtual Machine Management Guide . 30 Optional: A list of affinity group names that should be applied to the VMs. The affinity groups must exist in oVirt. Note Because RHV uses a template when creating a VM, if you do not specify a value for an optional parameter, RHV uses the value for that parameter that is specified in the template. 7.2.1.6. Sample YAML for a machine set custom resource on vSphere This sample YAML defines a machine set that runs on VMware vSphere and creates nodes that are labeled with node-role.kubernetes.io/infra: "" . In this sample, <infrastructure_id> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <infra> is the node label to add. apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-infra 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 3 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra 4 template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machine-role: <infra> 6 machine.openshift.io/cluster-api-machine-type: <infra> 7 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra 8 spec: metadata: creationTimestamp: null labels: node-role.kubernetes.io/infra: "" 9 taints: 10 - key: node-role.kubernetes.io/infra effect: NoSchedule providerSpec: value: apiVersion: vsphereprovider.openshift.io/v1beta1 credentialsSecret: name: vsphere-cloud-credentials diskGiB: 120 kind: VSphereMachineProviderSpec memoryMiB: 8192 metadata: creationTimestamp: null network: devices: - networkName: "<vm_network_name>" 11 numCPUs: 4 numCoresPerSocket: 1 snapshot: "" template: <vm_template_name> 12 userDataSecret: name: worker-user-data workspace: datacenter: <vcenter_datacenter_name> 13 datastore: <vcenter_datastore_name> 14 folder: <vcenter_vm_folder_path> 15 resourcepool: <vsphere_resource_pool> 16 server: <vcenter_server_ip> 17 1 3 5 Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI ( oc ) installed, you can obtain the infrastructure ID by running the following command: USD oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster 2 4 8 Specify the infrastructure ID and <infra> node label. 6 7 9 Specify the <infra> node label. 10 Specify a taint to prevent user workloads from being scheduled on infra nodes. 11 Specify the vSphere VM network to deploy the machine set to. This VM network must be where other compute machines reside in the cluster. 12 Specify the vSphere VM template to use, such as user-5ddjd-rhcos . 13 Specify the vCenter Datacenter to deploy the machine set on. 
14 Specify the vCenter Datastore to deploy the machine set on. 15 Specify the path to the vSphere VM folder in vCenter, such as /dc1/vm/user-inst-5ddjd . 16 Specify the vSphere resource pool for your VMs. 17 Specify the vCenter server IP or fully qualified domain name. 7.2.2. Creating a machine set In addition to the compute machine sets created by the installation program, you can create your own to dynamically manage the machine compute resources for specific workloads of your choice. Prerequisites Deploy an OpenShift Container Platform cluster. Install the OpenShift CLI ( oc ). Log in to oc as a user with cluster-admin permission. Procedure Create a new YAML file that contains the machine set custom resource (CR) sample and is named <file_name>.yaml . Ensure that you set the <clusterID> and <role> parameter values. Optional: If you are not sure which value to set for a specific field, you can check an existing compute machine set from your cluster. To list the compute machine sets in your cluster, run the following command: USD oc get machinesets -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m To view values of a specific compute machine set custom resource (CR), run the following command: USD oc get machineset <machineset_name> \ -n openshift-machine-api -o yaml Example output apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3 ... 1 The cluster infrastructure ID. 2 A default node label. Note For clusters that have user-provisioned infrastructure, a compute machine set can only create worker and infra type machines. 3 The values in the <providerSpec> section of the compute machine set CR are platform-specific. For more information about <providerSpec> parameters in the CR, see the sample compute machine set CR configuration for your provider. Create a MachineSet CR by running the following command: USD oc create -f <file_name>.yaml Verification View the list of compute machine sets by running the following command: USD oc get machineset -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m When the new machine set is available, the DESIRED and CURRENT values match. If the machine set is not available, wait a few minutes and run the command again. 7.2.3. 
Creating an infrastructure node Important See Creating infrastructure machine sets for installer-provisioned infrastructure environments or for any cluster where the control plane nodes are managed by the machine API. Requirements of the cluster dictate that infrastructure, also called infra nodes, be provisioned. The installer only provides provisions for control plane and worker nodes. Worker nodes can be designated as infrastructure nodes or application, also called app , nodes through labeling. Procedure Add a label to the worker node that you want to act as application node: USD oc label node <node-name> node-role.kubernetes.io/app="" Add a label to the worker nodes that you want to act as infrastructure nodes: USD oc label node <node-name> node-role.kubernetes.io/infra="" Check to see if applicable nodes now have the infra role and app roles: USD oc get nodes Create a default cluster-wide node selector. The default node selector is applied to pods created in all namespaces. This creates an intersection with any existing node selectors on a pod, which additionally constrains the pod's selector. Important If the default node selector key conflicts with the key of a pod's label, then the default node selector is not applied. However, do not set a default node selector that might cause a pod to become unschedulable. For example, setting the default node selector to a specific node role, such as node-role.kubernetes.io/infra="" , when a pod's label is set to a different node role, such as node-role.kubernetes.io/master="" , can cause the pod to become unschedulable. For this reason, use caution when setting the default node selector to specific node roles. You can alternatively use a project node selector to avoid cluster-wide node selector key conflicts. Edit the Scheduler object: USD oc edit scheduler cluster Add the defaultNodeSelector field with the appropriate node selector: apiVersion: config.openshift.io/v1 kind: Scheduler metadata: name: cluster ... spec: defaultNodeSelector: topology.kubernetes.io/region=us-east-1 1 ... 1 This example node selector deploys pods on nodes in the us-east-1 region by default. Save the file to apply the changes. You can now move infrastructure resources to the newly labeled infra nodes. Additional resources Moving resources to infrastructure machine sets 7.2.4. Creating a machine config pool for infrastructure machines If you need infrastructure machines to have dedicated configurations, you must create an infra pool. Procedure Add a label to the node you want to assign as the infra node with a specific label: USD oc label node <node_name> <label> USD oc label node ci-ln-n8mqwr2-f76d1-xscn2-worker-c-6fmtx node-role.kubernetes.io/infra= Create a machine config pool that contains both the worker role and your custom role as machine config selector: USD cat infra.mcp.yaml Example output apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: infra spec: machineConfigSelector: matchExpressions: - {key: machineconfiguration.openshift.io/role, operator: In, values: [worker,infra]} 1 nodeSelector: matchLabels: node-role.kubernetes.io/infra: "" 2 1 Add the worker role and your custom role. 2 Add the label you added to the node as a nodeSelector . Note Custom machine config pools inherit machine configs from the worker pool. Custom pools use any machine config targeted for the worker pool, but add the ability to also deploy changes that are targeted at only the custom pool. 
Because a custom pool inherits resources from the worker pool, any change to the worker pool also affects the custom pool. After you have the YAML file, you can create the machine config pool: USD oc create -f infra.mcp.yaml Check the machine configs to ensure that the infrastructure configuration rendered successfully: USD oc get machineconfig Example output NAME GENERATEDBYCONTROLLER IGNITIONVERSION CREATED 00-master 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 00-worker 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 01-master-container-runtime 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 01-master-kubelet 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 01-worker-container-runtime 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 01-worker-kubelet 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 99-master-1ae2a1e0-a115-11e9-8f14-005056899d54-registries 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 99-master-ssh 3.2.0 31d 99-worker-1ae64748-a115-11e9-8f14-005056899d54-registries 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 99-worker-ssh 3.2.0 31d rendered-infra-4e48906dca84ee702959c71a53ee80e7 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 23m rendered-master-072d4b2da7f88162636902b074e9e28e 5b6fb8349a29735e48446d435962dec4547d3090 3.2.0 31d rendered-master-3e88ec72aed3886dec061df60d16d1af 02c07496ba0417b3e12b78fb32baf6293d314f79 3.2.0 31d rendered-master-419bee7de96134963a15fdf9dd473b25 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 17d rendered-master-53f5c91c7661708adce18739cc0f40fb 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 13d rendered-master-a6a357ec18e5bce7f5ac426fc7c5ffcd 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 7d3h rendered-master-dc7f874ec77fc4b969674204332da037 5b6fb8349a29735e48446d435962dec4547d3090 3.2.0 31d rendered-worker-1a75960c52ad18ff5dfa6674eb7e533d 5b6fb8349a29735e48446d435962dec4547d3090 3.2.0 31d rendered-worker-2640531be11ba43c61d72e82dc634ce6 5b6fb8349a29735e48446d435962dec4547d3090 3.2.0 31d rendered-worker-4e48906dca84ee702959c71a53ee80e7 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 7d3h rendered-worker-4f110718fe88e5f349987854a1147755 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 17d rendered-worker-afc758e194d6188677eb837842d3b379 02c07496ba0417b3e12b78fb32baf6293d314f79 3.2.0 31d rendered-worker-daa08cc1e8f5fcdeba24de60cd955cc3 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 13d You should see a new machine config, with the rendered-infra-* prefix. Optional: To deploy changes to a custom pool, create a machine config that uses the custom pool name as the label, such as infra . Note that this is not required and only shown for instructional purposes. In this manner, you can apply any custom configurations specific to only your infra nodes. Note After you create the new machine config pool, the MCO generates a new rendered config for that pool, and associated nodes of that pool reboot to apply the new configuration. Create a machine config: USD cat infra.mc.yaml Example output apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: name: 51-infra labels: machineconfiguration.openshift.io/role: infra 1 spec: config: ignition: version: 3.2.0 storage: files: - path: /etc/infratest mode: 0644 contents: source: data:,infra 1 Add the label you added to the node as a nodeSelector . 
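The source field in this machine config uses an inline data URL ( data:,infra ) for the file contents. If your own custom configuration needs multi-line or binary content, one common option is to base64-encode it first and reference it the same way. The following sketch is illustrative only; the file contents shown here are placeholder values, not part of the documented procedure:
USD echo 'infra node marker' | base64
aW5mcmEgbm9kZSBtYXJrZXIK
The encoded string can then be referenced in the contents section of the machine config:
contents: source: data:text/plain;charset=utf-8;base64,aW5mcmEgbm9kZSBtYXJrZXIK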
Apply the machine config to the infra-labeled nodes: USD oc create -f infra.mc.yaml Confirm that your new machine config pool is available: USD oc get mcp Example output NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE infra rendered-infra-60e35c2e99f42d976e084fa94da4d0fc True False False 1 1 1 0 4m20s master rendered-master-9360fdb895d4c131c7c4bebbae099c90 True False False 3 3 3 0 91m worker rendered-worker-60e35c2e99f42d976e084fa94da4d0fc True False False 2 2 2 0 91m In this example, a worker node was changed to an infra node. Additional resources See Node configuration management with machine config pools for more information on grouping infra machines in a custom pool. 7.3. Assigning machine set resources to infrastructure nodes After creating an infrastructure machine set, the worker and infra roles are applied to new infra nodes. Nodes with the infra role applied are not counted toward the total number of subscriptions that are required to run the environment, even when the worker role is also applied. However, because an infra node is also assigned as a worker, there is a chance that user workloads could be inadvertently assigned to it. To avoid this, you can apply a taint to the infra node and tolerations for the pods you want to control. 7.3.1. Binding infrastructure node workloads using taints and tolerations If you have an infra node that has the infra and worker roles assigned, you must configure the node so that user workloads are not assigned to it. Important It is recommended that you preserve the dual infra,worker label that is created for infra nodes and use taints and tolerations to manage nodes that user workloads are scheduled on. If you remove the worker label from the node, you must create a custom pool to manage it. A node with a label other than master or worker is not recognized by the MCO without a custom pool. Maintaining the worker label allows the node to be managed by the default worker machine config pool, if no custom pools that select the custom label exist. The infra label communicates to the cluster that it does not count toward the total number of subscriptions. Prerequisites Configure additional MachineSet objects in your OpenShift Container Platform cluster. Procedure Add a taint to the infra node to prevent scheduling user workloads on it: Determine if the node has the taint: USD oc describe nodes <node_name> Sample output oc describe node ci-ln-iyhx092-f76d1-nvdfm-worker-b-wln2l Name: ci-ln-iyhx092-f76d1-nvdfm-worker-b-wln2l Roles: worker ... Taints: node-role.kubernetes.io/infra:NoSchedule ... This example shows that the node has a taint. You can proceed with adding a toleration to your pod in the next step. If you have not configured a taint to prevent scheduling user workloads on it: USD oc adm taint nodes <node_name> <key>=<value>:<effect> For example: USD oc adm taint nodes node1 node-role.kubernetes.io/infra=reserved:NoExecute Tip You can alternatively apply the following YAML to add the taint: kind: Node apiVersion: v1 metadata: name: <node_name> labels: ... spec: taints: - key: node-role.kubernetes.io/infra effect: NoExecute value: reserved ... This example places a taint on node1 that has the key node-role.kubernetes.io/infra and the taint effect NoExecute, matching the command above. Nodes with the NoExecute effect evict pods that do not tolerate the taint and do not schedule new pods that lack a matching toleration. Note If a descheduler is used, pods violating node taints could be evicted from the cluster.
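After you taint the node, you can optionally confirm that the taint is in place before adding tolerations. The node name node1 is the example value used above; substitute your own node name:
USD oc describe node node1 | grep -A1 Taints
The output should list the key, value, and effect that you set, for example node-role.kubernetes.io/infra=reserved:NoExecute.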
Add tolerations for the pod configurations you want to schedule on the infra node, such as router, registry, and monitoring workloads. Add the following code to the Pod object specification: tolerations: - effect: NoExecute 1 key: node-role.kubernetes.io/infra 2 operator: Exists 3 value: reserved 4 1 Specify the effect that you added to the node. 2 Specify the key that you added to the node. 3 Specify the Exists Operator to require a taint with the key node-role.kubernetes.io/infra to be present on the node. 4 Specify the value of the key-value pair taint that you added to the node. This toleration matches the taint created by the oc adm taint command. A pod with this toleration can be scheduled onto the infra node. Note Moving pods for an Operator installed via OLM to an infra node is not always possible. The capability to move Operator pods depends on the configuration of each Operator. Schedule the pod to the infra node using a scheduler. See the documentation for Controlling pod placement onto nodes for details. Additional resources See Controlling pod placement using the scheduler for general information on scheduling a pod to a node. See Moving resources to infrastructure machine sets for instructions on scheduling pods to infra nodes. 7.4. Moving resources to infrastructure machine sets Some of the infrastructure resources are deployed in your cluster by default. You can move them to the infrastructure machine sets that you created by adding the infrastructure node selector, as shown: spec: nodePlacement: 1 nodeSelector: matchLabels: node-role.kubernetes.io/infra: "" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved 1 Add a nodeSelector parameter with the appropriate value to the component you want to move. You can use a nodeSelector in the format shown or use <key>: <value> pairs, based on the value specified for the node. If you added a taint to the infrastructure node, also add a matching toleration. Applying a specific node selector to all infrastructure components causes OpenShift Container Platform to schedule those workloads on nodes with that label. 7.4.1. Moving the router You can deploy the router pod to a different machine set. By default, the pod is deployed to a worker node. Prerequisites Configure additional machine sets in your OpenShift Container Platform cluster.
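Before you edit the IngressController, it can be useful to confirm that at least one node carries the infra label that the node selector will match. This check is optional and not part of the documented procedure:
USD oc get nodes -l node-role.kubernetes.io/infra
Any nodes returned by this command are candidates for hosting the router pod.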
Procedure View the IngressController custom resource for the router Operator: USD oc get ingresscontroller default -n openshift-ingress-operator -o yaml The command output resembles the following text: apiVersion: operator.openshift.io/v1 kind: IngressController metadata: creationTimestamp: 2019-04-18T12:35:39Z finalizers: - ingresscontroller.operator.openshift.io/finalizer-ingresscontroller generation: 1 name: default namespace: openshift-ingress-operator resourceVersion: "11341" selfLink: /apis/operator.openshift.io/v1/namespaces/openshift-ingress-operator/ingresscontrollers/default uid: 79509e05-61d6-11e9-bc55-02ce4781844a spec: {} status: availableReplicas: 2 conditions: - lastTransitionTime: 2019-04-18T12:36:15Z status: "True" type: Available domain: apps.<cluster>.example.com endpointPublishingStrategy: type: LoadBalancerService selector: ingresscontroller.operator.openshift.io/deployment-ingresscontroller=default Edit the ingresscontroller resource and change the nodeSelector to use the infra label: USD oc edit ingresscontroller default -n openshift-ingress-operator spec: nodePlacement: nodeSelector: 1 matchLabels: node-role.kubernetes.io/infra: "" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved 1 Add a nodeSelector parameter with the appropriate value to the component you want to move. You can use a nodeSelector in the format shown or use <key>: <value> pairs, based on the value specified for the node. If you added a taint to the infrastructure node, also add a matching toleration. Confirm that the router pod is running on the infra node. View the list of router pods and note the node name of the running pod: USD oc get pod -n openshift-ingress -o wide Example output NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES router-default-86798b4b5d-bdlvd 1/1 Running 0 28s 10.130.2.4 ip-10-0-217-226.ec2.internal <none> <none> router-default-955d875f4-255g8 0/1 Terminating 0 19h 10.129.2.4 ip-10-0-148-172.ec2.internal <none> <none> In this example, the running pod is on the ip-10-0-217-226.ec2.internal node. View the node status of the running pod: USD oc get node <node_name> 1 1 Specify the <node_name> that you obtained from the pod list. Example output NAME STATUS ROLES AGE VERSION ip-10-0-217-226.ec2.internal Ready infra,worker 17h v1.22.1 Because the role list includes infra , the pod is running on the correct node. 7.4.2. Moving the default registry You configure the registry Operator to deploy its pods to different nodes. Prerequisites Configure additional machine sets in your OpenShift Container Platform cluster. Procedure View the config/instance object: USD oc get configs.imageregistry.operator.openshift.io/cluster -o yaml Example output apiVersion: imageregistry.operator.openshift.io/v1 kind: Config metadata: creationTimestamp: 2019-02-05T13:52:05Z finalizers: - imageregistry.operator.openshift.io/finalizer generation: 1 name: cluster resourceVersion: "56174" selfLink: /apis/imageregistry.operator.openshift.io/v1/configs/cluster uid: 36fd3724-294d-11e9-a524-12ffeee2931b spec: httpSecret: d9a012ccd117b1e6616ceccb2c3bb66a5fed1b5e481623 logging: 2 managementState: Managed proxy: {} replicas: 1 requests: read: {} write: {} storage: s3: bucket: image-registry-us-east-1-c92e88cad85b48ec8b312344dff03c82-392c region: us-east-1 status: ... 
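The next step uses oc edit to modify the registry configuration interactively. As an alternative, the same node selector and tolerations can be applied non-interactively with a merge patch. This is an equivalent sketch rather than a separate documented step; review the resulting spec before relying on it:
USD oc patch configs.imageregistry.operator.openshift.io/cluster --type merge -p '{"spec":{"nodeSelector":{"node-role.kubernetes.io/infra":""},"tolerations":[{"effect":"NoSchedule","key":"node-role.kubernetes.io/infra","value":"reserved"},{"effect":"NoExecute","key":"node-role.kubernetes.io/infra","value":"reserved"}]}}'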
Edit the config/instance object: USD oc edit configs.imageregistry.operator.openshift.io/cluster spec: affinity: podAntiAffinity: preferredDuringSchedulingIgnoredDuringExecution: - podAffinityTerm: namespaces: - openshift-image-registry topologyKey: kubernetes.io/hostname weight: 100 logLevel: Normal managementState: Managed nodeSelector: 1 node-role.kubernetes.io/infra: "" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved 1 Add a nodeSelector parameter with the appropriate value to the component you want to move. You can use a nodeSelector in the format shown or use <key>: <value> pairs, based on the value specified for the node. If you added a taint to the infrasructure node, also add a matching toleration. Verify the registry pod has been moved to the infrastructure node. Run the following command to identify the node where the registry pod is located: USD oc get pods -o wide -n openshift-image-registry Confirm the node has the label you specified: USD oc describe node <node_name> Review the command output and confirm that node-role.kubernetes.io/infra is in the LABELS list. 7.4.3. Moving the monitoring solution The monitoring stack includes multiple components, including Prometheus, Grafana, and Alertmanager. The Cluster Monitoring Operator manages this stack. To redeploy the monitoring stack to infrastructure nodes, you can create and apply a custom config map. Procedure Edit the cluster-monitoring-config config map and change the nodeSelector to use the infra label: USD oc edit configmap cluster-monitoring-config -n openshift-monitoring apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: |+ alertmanagerMain: nodeSelector: 1 node-role.kubernetes.io/infra: "" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute prometheusK8s: nodeSelector: node-role.kubernetes.io/infra: "" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute prometheusOperator: nodeSelector: node-role.kubernetes.io/infra: "" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute grafana: nodeSelector: node-role.kubernetes.io/infra: "" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute k8sPrometheusAdapter: nodeSelector: node-role.kubernetes.io/infra: "" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute kubeStateMetrics: nodeSelector: node-role.kubernetes.io/infra: "" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute telemeterClient: nodeSelector: node-role.kubernetes.io/infra: "" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute openshiftStateMetrics: nodeSelector: node-role.kubernetes.io/infra: "" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: 
reserved effect: NoExecute thanosQuerier: nodeSelector: node-role.kubernetes.io/infra: "" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute 1 Add a nodeSelector parameter with the appropriate value to the component you want to move. You can use a nodeSelector in the format shown or use <key>: <value> pairs, based on the value specified for the node. If you added a taint to the infrasructure node, also add a matching toleration. Watch the monitoring pods move to the new machines: USD watch 'oc get pod -n openshift-monitoring -o wide' If a component has not moved to the infra node, delete the pod with this component: USD oc delete pod -n openshift-monitoring <pod> The component from the deleted pod is re-created on the infra node. 7.4.4. Moving OpenShift Logging resources You can configure the Cluster Logging Operator to deploy the pods for logging subsystem components, such as Elasticsearch and Kibana, to different nodes. You cannot move the Cluster Logging Operator pod from its installed location. For example, you can move the Elasticsearch pods to a separate node because of high CPU, memory, and disk requirements. Prerequisites The Red Hat OpenShift Logging and Elasticsearch Operators must be installed. These features are not installed by default. Procedure Edit the ClusterLogging custom resource (CR) in the openshift-logging project: USD oc edit ClusterLogging instance apiVersion: logging.openshift.io/v1 kind: ClusterLogging ... spec: collection: logs: fluentd: resources: null type: fluentd logStore: elasticsearch: nodeCount: 3 nodeSelector: 1 node-role.kubernetes.io/infra: '' tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved redundancyPolicy: SingleRedundancy resources: limits: cpu: 500m memory: 16Gi requests: cpu: 500m memory: 16Gi storage: {} type: elasticsearch managementState: Managed visualization: kibana: nodeSelector: 2 node-role.kubernetes.io/infra: '' tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved proxy: resources: null replicas: 1 resources: null type: kibana ... 1 2 Add a nodeSelector parameter with the appropriate value to the component you want to move. You can use a nodeSelector in the format shown or use <key>: <value> pairs, based on the value specified for the node. If you added a taint to the infrasructure node, also add a matching toleration. Verification To verify that a component has moved, you can use the oc get pod -o wide command. 
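To list the logging pods together with the nodes they are currently scheduled on, run the command against the openshift-logging namespace (the namespace that holds the ClusterLogging components edited above):
USD oc get pod -n openshift-logging -o wide
The NODE column shows where each component currently runs.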
For example: You want to move the Kibana pod from the ip-10-0-147-79.us-east-2.compute.internal node: USD oc get pod kibana-5b8bdf44f9-ccpq9 -o wide Example output NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES kibana-5b8bdf44f9-ccpq9 2/2 Running 0 27s 10.129.2.18 ip-10-0-147-79.us-east-2.compute.internal <none> <none> You want to move the Kibana pod to the ip-10-0-139-48.us-east-2.compute.internal node, a dedicated infrastructure node: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION ip-10-0-133-216.us-east-2.compute.internal Ready master 60m v1.22.1 ip-10-0-139-146.us-east-2.compute.internal Ready master 60m v1.22.1 ip-10-0-139-192.us-east-2.compute.internal Ready worker 51m v1.22.1 ip-10-0-139-241.us-east-2.compute.internal Ready worker 51m v1.22.1 ip-10-0-147-79.us-east-2.compute.internal Ready worker 51m v1.22.1 ip-10-0-152-241.us-east-2.compute.internal Ready master 60m v1.22.1 ip-10-0-139-48.us-east-2.compute.internal Ready infra 51m v1.22.1 Note that the node has a node-role.kubernetes.io/infra: '' label: USD oc get node ip-10-0-139-48.us-east-2.compute.internal -o yaml Example output kind: Node apiVersion: v1 metadata: name: ip-10-0-139-48.us-east-2.compute.internal selfLink: /api/v1/nodes/ip-10-0-139-48.us-east-2.compute.internal uid: 62038aa9-661f-41d7-ba93-b5f1b6ef8751 resourceVersion: '39083' creationTimestamp: '2020-04-13T19:07:55Z' labels: node-role.kubernetes.io/infra: '' ... To move the Kibana pod, edit the ClusterLogging CR to add a node selector: apiVersion: logging.openshift.io/v1 kind: ClusterLogging ... spec: ... visualization: kibana: nodeSelector: 1 node-role.kubernetes.io/infra: '' proxy: resources: null replicas: 1 resources: null type: kibana 1 Add a node selector to match the label in the node specification. After you save the CR, the current Kibana pod is terminated and new pod is deployed: USD oc get pods Example output NAME READY STATUS RESTARTS AGE cluster-logging-operator-84d98649c4-zb9g7 1/1 Running 0 29m elasticsearch-cdm-hwv01pf7-1-56588f554f-kpmlg 2/2 Running 0 28m elasticsearch-cdm-hwv01pf7-2-84c877d75d-75wqj 2/2 Running 0 28m elasticsearch-cdm-hwv01pf7-3-f5d95b87b-4nx78 2/2 Running 0 28m fluentd-42dzz 1/1 Running 0 28m fluentd-d74rq 1/1 Running 0 28m fluentd-m5vr9 1/1 Running 0 28m fluentd-nkxl7 1/1 Running 0 28m fluentd-pdvqb 1/1 Running 0 28m fluentd-tflh6 1/1 Running 0 28m kibana-5b8bdf44f9-ccpq9 2/2 Terminating 0 4m11s kibana-7d85dcffc8-bfpfp 2/2 Running 0 33s The new pod is on the ip-10-0-139-48.us-east-2.compute.internal node: USD oc get pod kibana-7d85dcffc8-bfpfp -o wide Example output NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES kibana-7d85dcffc8-bfpfp 2/2 Running 0 43s 10.131.0.22 ip-10-0-139-48.us-east-2.compute.internal <none> <none> After a few moments, the original Kibana pod is removed. USD oc get pods Example output NAME READY STATUS RESTARTS AGE cluster-logging-operator-84d98649c4-zb9g7 1/1 Running 0 30m elasticsearch-cdm-hwv01pf7-1-56588f554f-kpmlg 2/2 Running 0 29m elasticsearch-cdm-hwv01pf7-2-84c877d75d-75wqj 2/2 Running 0 29m elasticsearch-cdm-hwv01pf7-3-f5d95b87b-4nx78 2/2 Running 0 29m fluentd-42dzz 1/1 Running 0 29m fluentd-d74rq 1/1 Running 0 29m fluentd-m5vr9 1/1 Running 0 29m fluentd-nkxl7 1/1 Running 0 29m fluentd-pdvqb 1/1 Running 0 29m fluentd-tflh6 1/1 Running 0 29m kibana-7d85dcffc8-bfpfp 2/2 Running 0 62s Additional resources See the monitoring documentation for the general instructions on moving OpenShift Container Platform components. | [
"oc get infrastructure cluster -o jsonpath='{.status.platform}'",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-infra-<zone> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 3 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra-<zone> 4 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machine-role: <infra> 6 machine.openshift.io/cluster-api-machine-type: <infra> 7 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra-<zone> 8 spec: metadata: labels: node-role.kubernetes.io/infra: \"\" 9 taints: 10 - key: node-role.kubernetes.io/infra effect: NoSchedule providerSpec: value: ami: id: ami-046fe691f52a953f9 11 apiVersion: awsproviderconfig.openshift.io/v1beta1 blockDevices: - ebs: iops: 0 volumeSize: 120 volumeType: gp2 credentialsSecret: name: aws-cloud-credentials deviceIndex: 0 iamInstanceProfile: id: <infrastructure_id>-worker-profile 12 instanceType: m4.large kind: AWSMachineProviderConfig placement: availabilityZone: <zone> 13 region: <region> 14 securityGroups: - filters: - name: tag:Name values: - <infrastructure_id>-worker-sg 15 subnet: filters: - name: tag:Name values: - <infrastructure_id>-private-<zone> 16 tags: - name: kubernetes.io/cluster/<infrastructure_id> 17 value: owned userDataSecret: name: worker-user-data",
"oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster",
"oc -n openshift-machine-api -o jsonpath='{.spec.template.spec.providerSpec.value.ami.id}{\"\\n\"}' get machineset/<infrastructure_id>-worker-<zone>",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <infra> 2 machine.openshift.io/cluster-api-machine-type: <infra> 3 name: <infrastructure_id>-infra-<region> 4 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra-<region> 6 template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 7 machine.openshift.io/cluster-api-machine-role: <infra> 8 machine.openshift.io/cluster-api-machine-type: <infra> 9 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra-<region> 10 spec: metadata: creationTimestamp: null labels: node-role.kubernetes.io/infra: \"\" 11 providerSpec: value: apiVersion: azureproviderconfig.openshift.io/v1beta1 credentialsSecret: name: azure-cloud-credentials namespace: openshift-machine-api image: 12 offer: \"\" publisher: \"\" resourceID: /resourceGroups/<infrastructure_id>-rg/providers/Microsoft.Compute/images/<infrastructure_id> 13 sku: \"\" version: \"\" internalLoadBalancer: \"\" kind: AzureMachineProviderSpec location: <region> 14 managedIdentity: <infrastructure_id>-identity 15 metadata: creationTimestamp: null natRule: null networkResourceGroup: \"\" osDisk: diskSizeGB: 128 managedDisk: storageAccountType: Premium_LRS osType: Linux publicIP: false publicLoadBalancer: \"\" resourceGroup: <infrastructure_id>-rg 16 sshPrivateKey: \"\" sshPublicKey: \"\" subnet: <infrastructure_id>-<role>-subnet 17 18 userDataSecret: name: worker-user-data 19 vmSize: Standard_DS4_v2 vnet: <infrastructure_id>-vnet 20 zone: \"1\" 21 taints: 22 - key: node-role.kubernetes.io/infra effect: NoSchedule",
"oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster",
"oc -n openshift-machine-api -o jsonpath='{.spec.template.spec.providerSpec.value.subnet}{\"\\n\"}' get machineset/<infrastructure_id>-worker-centralus1",
"oc -n openshift-machine-api -o jsonpath='{.spec.template.spec.providerSpec.value.vnet}{\"\\n\"}' get machineset/<infrastructure_id>-worker-centralus1",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-w-a namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-w-a template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <infra> 2 machine.openshift.io/cluster-api-machine-type: <infra> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-w-a spec: metadata: labels: node-role.kubernetes.io/infra: \"\" providerSpec: value: apiVersion: gcpprovider.openshift.io/v1beta1 canIPForward: false credentialsSecret: name: gcp-cloud-credentials deletionProtection: false disks: - autoDelete: true boot: true image: <path_to_image> 3 labels: null sizeGb: 128 type: pd-ssd gcpMetadata: 4 - key: <custom_metadata_key> value: <custom_metadata_value> kind: GCPMachineProviderSpec machineType: n1-standard-4 metadata: creationTimestamp: null networkInterfaces: - network: <infrastructure_id>-network subnetwork: <infrastructure_id>-worker-subnet projectID: <project_name> 5 region: us-central1 serviceAccounts: - email: <infrastructure_id>-w@<project_name>.iam.gserviceaccount.com scopes: - https://www.googleapis.com/auth/cloud-platform tags: - <infrastructure_id>-worker userDataSecret: name: worker-user-data zone: us-central1-a taints: 6 - key: node-role.kubernetes.io/infra effect: NoSchedule",
"oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster",
"oc -n openshift-machine-api -o jsonpath='{.spec.template.spec.providerSpec.value.disks[0].image}{\"\\n\"}' get machineset/<infrastructure_id>-worker-a",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <infra> 2 machine.openshift.io/cluster-api-machine-type: <infra> 3 name: <infrastructure_id>-infra 4 namespace: openshift-machine-api spec: replicas: <number_of_replicas> selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra 6 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 7 machine.openshift.io/cluster-api-machine-role: <infra> 8 machine.openshift.io/cluster-api-machine-type: <infra> 9 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra 10 spec: metadata: creationTimestamp: null labels: node-role.kubernetes.io/infra: \"\" taints: 11 - key: node-role.kubernetes.io/infra effect: NoSchedule providerSpec: value: apiVersion: openstackproviderconfig.openshift.io/v1alpha1 cloudName: openstack cloudsSecret: name: openstack-cloud-credentials namespace: openshift-machine-api flavor: <nova_flavor> image: <glance_image_name_or_location> serverGroupID: <optional_UUID_of_server_group> 12 kind: OpenstackProviderSpec networks: 13 - filter: {} subnets: - filter: name: <subnet_name> tags: openshiftClusterID=<infrastructure_id> 14 primarySubnet: <rhosp_subnet_UUID> 15 securityGroups: - filter: {} name: <infrastructure_id>-worker 16 serverMetadata: Name: <infrastructure_id>-worker 17 openshiftClusterID: <infrastructure_id> 18 tags: - openshiftClusterID=<infrastructure_id> 19 trunk: true userDataSecret: name: worker-user-data 20 availabilityZone: <optional_openstack_availability_zone>",
"oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 machine.openshift.io/cluster-api-machine-role: <role> 2 machine.openshift.io/cluster-api-machine-type: <role> 3 name: <infrastructure_id>-<role> 4 namespace: openshift-machine-api spec: replicas: <number_of_replicas> 5 selector: 6 matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 7 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> 8 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 9 machine.openshift.io/cluster-api-machine-role: <role> 10 machine.openshift.io/cluster-api-machine-type: <role> 11 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> 12 spec: metadata: labels: node-role.kubernetes.io/<role>: \"\" 13 providerSpec: value: apiVersion: ovirtproviderconfig.machine.openshift.io/v1beta1 cluster_id: <ovirt_cluster_id> 14 template_name: <ovirt_template_name> 15 instance_type_id: <instance_type_id> 16 cpu: 17 sockets: <number_of_sockets> 18 cores: <number_of_cores> 19 threads: <number_of_threads> 20 memory_mb: <memory_size> 21 os_disk: 22 size_gb: <disk_size> 23 network_interfaces: 24 vnic_profile_id: <vnic_profile_id> 25 credentialsSecret: name: ovirt-credentials 26 kind: OvirtMachineProviderSpec type: <workload_type> 27 auto_pinning_policy: <auto_pinning_policy> 28 hugepages: <hugepages> 29 affinityGroupsNames: - compute 30 userDataSecret: name: worker-user-data",
"oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-infra 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 3 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra 4 template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 5 machine.openshift.io/cluster-api-machine-role: <infra> 6 machine.openshift.io/cluster-api-machine-type: <infra> 7 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-infra 8 spec: metadata: creationTimestamp: null labels: node-role.kubernetes.io/infra: \"\" 9 taints: 10 - key: node-role.kubernetes.io/infra effect: NoSchedule providerSpec: value: apiVersion: vsphereprovider.openshift.io/v1beta1 credentialsSecret: name: vsphere-cloud-credentials diskGiB: 120 kind: VSphereMachineProviderSpec memoryMiB: 8192 metadata: creationTimestamp: null network: devices: - networkName: \"<vm_network_name>\" 11 numCPUs: 4 numCoresPerSocket: 1 snapshot: \"\" template: <vm_template_name> 12 userDataSecret: name: worker-user-data workspace: datacenter: <vcenter_datacenter_name> 13 datastore: <vcenter_datastore_name> 14 folder: <vcenter_vm_folder_path> 15 resourcepool: <vsphere_resource_pool> 16 server: <vcenter_server_ip> 17",
"oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster",
"oc get machinesets -n openshift-machine-api",
"NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m",
"oc get machineset <machineset_name> -n openshift-machine-api -o yaml",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3",
"oc create -f <file_name>.yaml",
"oc get machineset -n openshift-machine-api",
"NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-infra-us-east-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m",
"oc label node <node-name> node-role.kubernetes.io/app=\"\"",
"oc label node <node-name> node-role.kubernetes.io/infra=\"\"",
"oc get nodes",
"oc edit scheduler cluster",
"apiVersion: config.openshift.io/v1 kind: Scheduler metadata: name: cluster spec: defaultNodeSelector: topology.kubernetes.io/region=us-east-1 1",
"oc label node <node_name> <label>",
"oc label node ci-ln-n8mqwr2-f76d1-xscn2-worker-c-6fmtx node-role.kubernetes.io/infra=",
"cat infra.mcp.yaml",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: infra spec: machineConfigSelector: matchExpressions: - {key: machineconfiguration.openshift.io/role, operator: In, values: [worker,infra]} 1 nodeSelector: matchLabels: node-role.kubernetes.io/infra: \"\" 2",
"oc create -f infra.mcp.yaml",
"oc get machineconfig",
"NAME GENERATEDBYCONTROLLER IGNITIONVERSION CREATED 00-master 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 00-worker 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 01-master-container-runtime 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 01-master-kubelet 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 01-worker-container-runtime 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 01-worker-kubelet 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 99-master-1ae2a1e0-a115-11e9-8f14-005056899d54-registries 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 99-master-ssh 3.2.0 31d 99-worker-1ae64748-a115-11e9-8f14-005056899d54-registries 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 31d 99-worker-ssh 3.2.0 31d rendered-infra-4e48906dca84ee702959c71a53ee80e7 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 23m rendered-master-072d4b2da7f88162636902b074e9e28e 5b6fb8349a29735e48446d435962dec4547d3090 3.2.0 31d rendered-master-3e88ec72aed3886dec061df60d16d1af 02c07496ba0417b3e12b78fb32baf6293d314f79 3.2.0 31d rendered-master-419bee7de96134963a15fdf9dd473b25 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 17d rendered-master-53f5c91c7661708adce18739cc0f40fb 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 13d rendered-master-a6a357ec18e5bce7f5ac426fc7c5ffcd 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 7d3h rendered-master-dc7f874ec77fc4b969674204332da037 5b6fb8349a29735e48446d435962dec4547d3090 3.2.0 31d rendered-worker-1a75960c52ad18ff5dfa6674eb7e533d 5b6fb8349a29735e48446d435962dec4547d3090 3.2.0 31d rendered-worker-2640531be11ba43c61d72e82dc634ce6 5b6fb8349a29735e48446d435962dec4547d3090 3.2.0 31d rendered-worker-4e48906dca84ee702959c71a53ee80e7 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 7d3h rendered-worker-4f110718fe88e5f349987854a1147755 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 17d rendered-worker-afc758e194d6188677eb837842d3b379 02c07496ba0417b3e12b78fb32baf6293d314f79 3.2.0 31d rendered-worker-daa08cc1e8f5fcdeba24de60cd955cc3 365c1cfd14de5b0e3b85e0fc815b0060f36ab955 3.2.0 13d",
"cat infra.mc.yaml",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: name: 51-infra labels: machineconfiguration.openshift.io/role: infra 1 spec: config: ignition: version: 3.2.0 storage: files: - path: /etc/infratest mode: 0644 contents: source: data:,infra",
"oc create -f infra.mc.yaml",
"oc get mcp",
"NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE infra rendered-infra-60e35c2e99f42d976e084fa94da4d0fc True False False 1 1 1 0 4m20s master rendered-master-9360fdb895d4c131c7c4bebbae099c90 True False False 3 3 3 0 91m worker rendered-worker-60e35c2e99f42d976e084fa94da4d0fc True False False 2 2 2 0 91m",
"oc describe nodes <node_name>",
"describe node ci-ln-iyhx092-f76d1-nvdfm-worker-b-wln2l Name: ci-ln-iyhx092-f76d1-nvdfm-worker-b-wln2l Roles: worker Taints: node-role.kubernetes.io/infra:NoSchedule",
"oc adm taint nodes <node_name> <key>=<value>:<effect>",
"oc adm taint nodes node1 node-role.kubernetes.io/infra=reserved:NoExecute",
"kind: Node apiVersion: v1 metadata: name: <node_name> labels: spec: taints: - key: node-role.kubernetes.io/infra effect: NoExecute value: reserved",
"tolerations: - effect: NoExecute 1 key: node-role.kubernetes.io/infra 2 operator: Exists 3 value: reserved 4",
"spec: nodePlacement: 1 nodeSelector: matchLabels: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved",
"oc get ingresscontroller default -n openshift-ingress-operator -o yaml",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: creationTimestamp: 2019-04-18T12:35:39Z finalizers: - ingresscontroller.operator.openshift.io/finalizer-ingresscontroller generation: 1 name: default namespace: openshift-ingress-operator resourceVersion: \"11341\" selfLink: /apis/operator.openshift.io/v1/namespaces/openshift-ingress-operator/ingresscontrollers/default uid: 79509e05-61d6-11e9-bc55-02ce4781844a spec: {} status: availableReplicas: 2 conditions: - lastTransitionTime: 2019-04-18T12:36:15Z status: \"True\" type: Available domain: apps.<cluster>.example.com endpointPublishingStrategy: type: LoadBalancerService selector: ingresscontroller.operator.openshift.io/deployment-ingresscontroller=default",
"oc edit ingresscontroller default -n openshift-ingress-operator",
"spec: nodePlacement: nodeSelector: 1 matchLabels: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved",
"oc get pod -n openshift-ingress -o wide",
"NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES router-default-86798b4b5d-bdlvd 1/1 Running 0 28s 10.130.2.4 ip-10-0-217-226.ec2.internal <none> <none> router-default-955d875f4-255g8 0/1 Terminating 0 19h 10.129.2.4 ip-10-0-148-172.ec2.internal <none> <none>",
"oc get node <node_name> 1",
"NAME STATUS ROLES AGE VERSION ip-10-0-217-226.ec2.internal Ready infra,worker 17h v1.22.1",
"oc get configs.imageregistry.operator.openshift.io/cluster -o yaml",
"apiVersion: imageregistry.operator.openshift.io/v1 kind: Config metadata: creationTimestamp: 2019-02-05T13:52:05Z finalizers: - imageregistry.operator.openshift.io/finalizer generation: 1 name: cluster resourceVersion: \"56174\" selfLink: /apis/imageregistry.operator.openshift.io/v1/configs/cluster uid: 36fd3724-294d-11e9-a524-12ffeee2931b spec: httpSecret: d9a012ccd117b1e6616ceccb2c3bb66a5fed1b5e481623 logging: 2 managementState: Managed proxy: {} replicas: 1 requests: read: {} write: {} storage: s3: bucket: image-registry-us-east-1-c92e88cad85b48ec8b312344dff03c82-392c region: us-east-1 status:",
"oc edit configs.imageregistry.operator.openshift.io/cluster",
"spec: affinity: podAntiAffinity: preferredDuringSchedulingIgnoredDuringExecution: - podAffinityTerm: namespaces: - openshift-image-registry topologyKey: kubernetes.io/hostname weight: 100 logLevel: Normal managementState: Managed nodeSelector: 1 node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved",
"oc get pods -o wide -n openshift-image-registry",
"oc describe node <node_name>",
"oc edit configmap cluster-monitoring-config -n openshift-monitoring",
"apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: |+ alertmanagerMain: nodeSelector: 1 node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute prometheusK8s: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute prometheusOperator: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute grafana: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute k8sPrometheusAdapter: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute kubeStateMetrics: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute telemeterClient: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute openshiftStateMetrics: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute thanosQuerier: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - key: node-role.kubernetes.io/infra value: reserved effect: NoSchedule - key: node-role.kubernetes.io/infra value: reserved effect: NoExecute",
"watch 'oc get pod -n openshift-monitoring -o wide'",
"oc delete pod -n openshift-monitoring <pod>",
"oc edit ClusterLogging instance",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogging spec: collection: logs: fluentd: resources: null type: fluentd logStore: elasticsearch: nodeCount: 3 nodeSelector: 1 node-role.kubernetes.io/infra: '' tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved redundancyPolicy: SingleRedundancy resources: limits: cpu: 500m memory: 16Gi requests: cpu: 500m memory: 16Gi storage: {} type: elasticsearch managementState: Managed visualization: kibana: nodeSelector: 2 node-role.kubernetes.io/infra: '' tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved proxy: resources: null replicas: 1 resources: null type: kibana",
"oc get pod kibana-5b8bdf44f9-ccpq9 -o wide",
"NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES kibana-5b8bdf44f9-ccpq9 2/2 Running 0 27s 10.129.2.18 ip-10-0-147-79.us-east-2.compute.internal <none> <none>",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION ip-10-0-133-216.us-east-2.compute.internal Ready master 60m v1.22.1 ip-10-0-139-146.us-east-2.compute.internal Ready master 60m v1.22.1 ip-10-0-139-192.us-east-2.compute.internal Ready worker 51m v1.22.1 ip-10-0-139-241.us-east-2.compute.internal Ready worker 51m v1.22.1 ip-10-0-147-79.us-east-2.compute.internal Ready worker 51m v1.22.1 ip-10-0-152-241.us-east-2.compute.internal Ready master 60m v1.22.1 ip-10-0-139-48.us-east-2.compute.internal Ready infra 51m v1.22.1",
"oc get node ip-10-0-139-48.us-east-2.compute.internal -o yaml",
"kind: Node apiVersion: v1 metadata: name: ip-10-0-139-48.us-east-2.compute.internal selfLink: /api/v1/nodes/ip-10-0-139-48.us-east-2.compute.internal uid: 62038aa9-661f-41d7-ba93-b5f1b6ef8751 resourceVersion: '39083' creationTimestamp: '2020-04-13T19:07:55Z' labels: node-role.kubernetes.io/infra: ''",
"apiVersion: logging.openshift.io/v1 kind: ClusterLogging spec: visualization: kibana: nodeSelector: 1 node-role.kubernetes.io/infra: '' proxy: resources: null replicas: 1 resources: null type: kibana",
"oc get pods",
"NAME READY STATUS RESTARTS AGE cluster-logging-operator-84d98649c4-zb9g7 1/1 Running 0 29m elasticsearch-cdm-hwv01pf7-1-56588f554f-kpmlg 2/2 Running 0 28m elasticsearch-cdm-hwv01pf7-2-84c877d75d-75wqj 2/2 Running 0 28m elasticsearch-cdm-hwv01pf7-3-f5d95b87b-4nx78 2/2 Running 0 28m fluentd-42dzz 1/1 Running 0 28m fluentd-d74rq 1/1 Running 0 28m fluentd-m5vr9 1/1 Running 0 28m fluentd-nkxl7 1/1 Running 0 28m fluentd-pdvqb 1/1 Running 0 28m fluentd-tflh6 1/1 Running 0 28m kibana-5b8bdf44f9-ccpq9 2/2 Terminating 0 4m11s kibana-7d85dcffc8-bfpfp 2/2 Running 0 33s",
"oc get pod kibana-7d85dcffc8-bfpfp -o wide",
"NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES kibana-7d85dcffc8-bfpfp 2/2 Running 0 43s 10.131.0.22 ip-10-0-139-48.us-east-2.compute.internal <none> <none>",
"oc get pods",
"NAME READY STATUS RESTARTS AGE cluster-logging-operator-84d98649c4-zb9g7 1/1 Running 0 30m elasticsearch-cdm-hwv01pf7-1-56588f554f-kpmlg 2/2 Running 0 29m elasticsearch-cdm-hwv01pf7-2-84c877d75d-75wqj 2/2 Running 0 29m elasticsearch-cdm-hwv01pf7-3-f5d95b87b-4nx78 2/2 Running 0 29m fluentd-42dzz 1/1 Running 0 29m fluentd-d74rq 1/1 Running 0 29m fluentd-m5vr9 1/1 Running 0 29m fluentd-nkxl7 1/1 Running 0 29m fluentd-pdvqb 1/1 Running 0 29m fluentd-tflh6 1/1 Running 0 29m kibana-7d85dcffc8-bfpfp 2/2 Running 0 62s"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.9/html/machine_management/creating-infrastructure-machinesets |
Chapter 5. Preparing an API service for 3scale discovery | Chapter 5. Preparing an API service for 3scale discovery Red Hat 3scale API Management is an offering from Red Hat that enables you to regulate access to API services on the public Internet. The functionality of 3scale includes the ability to enforce service-level agreements (SLAs), manage API versions, provide security and authentication services and more. Fuse supports a service discovery feature for 3scale, which makes it easy to discover Fuse services from the 3scale Admin Portal UI. Using service discovery, you can scan for Fuse applications running in the same OpenShift cluster and automatically import the associated API definitions into 3scale. Prerequisites A Fuse application that provides an API service is deployed and running in OpenShift. The Fuse application is annotated with the requisite annotations to make it discoverable by 3scale. Note Fuse projects that are generated by API Designer are pre-configured to automatically provide the requisite annotations. For Fuse projects that are not generated by API Designer, you must configure your project as described in Adding annotations for Fuse projects that are not generated by API Designer . The 3scale API Management system is deployed on the same OpenShift cluster as the API service that is to be discovered. For details of the procedure to discover an API service in 3scale, see the service discovery section of the Red Hat 3scale API Management Admin Portal Guide . Additional resources Red Hat 3scale API Management product page Red Hat 3scale API Management documentation 5.1. Adding annotations for Fuse projects that are not generated by API Designer In order for 3scale to discover an API service, the Fuse application that provides the API service must include Kubernetes Service Annotations that make it discoverable. These annotations are provided by the Service Discovery Enricher which is part of the OpenShift Maven Plugin. For Apache Camel Rest DSL projects, the OpenShift Maven Plugin runs the Service Discovery Enricher by default. Fuse projects that are generated by API Designer are pre-configured to automatically provide the required annotations. Procedure For a Fuse Rest DSL project that is not generated by API Designer, configure the project as follows: Edit the Fuse project's pom.xml file to include the openshift-maven-plugin dependency, as shown in this example: <plugin> <groupId>org.jboss.redhat-fuse</groupId> <artifactId>openshift-maven-plugin</artifactId> <version>USD{fuse.version}</version> <executions> <execution> <goals> <goal>resource</goal> <goal>build</goal> </goals> </execution> </executions> </plugin> The OpenShift Maven Plugin runs the Service Discovery Enricher if certain project-level conditions are met (for example, the project must be a Camel Rest DSL project). You do not need to specify the Service Discovery Enricher as a dependency in the pom.xml file, unless you want to customize the enricher's behavior (as described in Customizing the API service annotation values .) In the Fuse Rest DSL project's camel-context.xml file, specify the following attributes in the restConfiguration element: scheme : The scheme part of the URL where the service is hosted. You can specify "http" or "https". contextPath : The path part of the URL where the API service is hosted. apiContextPath : The path to the location where the API service description document is hosted. 
You can specify either a relative path if the document is self-hosted or a full URL if the document is hosted externally. The following excerpt from an example camel-context.xml file shows annotation attribute values in the restConfiguration element: <?xml version="1.0" encoding="UTF-8"?> <beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation=" http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd http://camel.apache.org/schema/spring http://camel.apache.org/schema/spring/camel-spring.xsd"> <camelContext xmlns="http://camel.apache.org/schema/spring"> <restConfiguration component="servlet" scheme="https" contextPath="myapi" apiContextPath="myapi/openapi.json"/> ... The enricher uses the information provided by these restConfiguration element attribute values to create values for the discovery.3scale.net/scheme , discovery.3scale.net/path , and the discovery.3scale.net/description-path annotations, thereby making the project's deployed OpenShift service discoverable by 3scale as described in the see the service discovery section of the Red Hat 3scale API Management Admin Portal Guide . The enricher adds the following label and annotations to make the service discoverable by 3scale: The discovery.3scale.net label: By default, the enricher sets this value to "true". 3scale uses this label when it executes the selector definition to find all services that need discovery. The following annotations: discovery.3scale.net/discovery-version : (optional) The version of the 3scale discovery process. The enricher sets this value to "v1" by default. discovery.3scale.net/scheme : The scheme part of the URL where the service is hosted. The enricher uses the default "http" unless you override it in the restConfiguration element's scheme attribute. The other possible value is "https". discovery.3scale.net/path : The path part of the URL where the service is hosted. This annotation is omitted when the path is at root, "/". The enricher gets this value from the restConfiguration element's path attribute. discovery.3scale.net/port : The port of the service. The enricher obtains this value from the Kubernetes service definition, which contains the the port numbers of the services it exposes. If the Kubernetes service definition exposes more than one service, the enricher uses the first port listed. discovery.3scale.net/description-path : (optional) The path to the OpenAPI service description document. The enricher gets this value from the restConfiguration element's contextPath attribute. You can customize the behavior of the Service Discovery Enricher, as described in Customizing the API service annotation values . 5.2. Customizing the API service annotation values The Maven Fabric8 Plugin runs the Fabric8 Service Discovery Enricher by default. The enricher adds annotations to the Fuse Rest DSL project's API service so that the API service is discoverable by 3scale, as described in Using Service Discovery in the Red Hat 3scale API Management Admin Portal Guide guide. The enricher uses default values for some annotations and obtains values for other annotations from the project's camel-context.xml file. You can override the default values and the values defined in the camel-context.xml file by defining values in the Fuse project pom.xml file or in a service.yml file. (If you define values in both files, the enricher uses the values from the service.yml file.) 
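Whether you rely on the defaults derived from camel-context.xml or override them as described in this section, you can confirm the values that the enricher applied by inspecting the annotations on the deployed OpenShift service. The service and namespace names below are placeholders:
USD oc get service <service_name> -n <namespace> -o jsonpath='{.metadata.annotations}'
The output should include the discovery.3scale.net/scheme , discovery.3scale.net/path , and discovery.3scale.net/description-path annotations described in the previous section.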
See Fabric8 Service Discovery Enricher elements for a description of the elements that you can specify for the Fabric8 Service Discovery Enricher. Procedure To specify annotation values in the Fuse project pom.xml file: Open your Fuse project's pom.xml file in an editor of your choice. Locate the openshift-maven-plugin dependency, as shown in this example: <plugin> <groupId>org.jboss.redhat-fuse</groupId> <artifactId>openshift-maven-plugin</artifactId> <version>USD{fuse.version}</version> <executions> <execution> <goals> <goal>resource</goal> <goal>build</goal> </goals> </execution> </executions> </plugin> Add the Fabric8 Service Discovery Enricher as a dependency to the openshift-maven plugin as shown in the following example. <plugin> <groupId>org.jboss.redhat-fuse</groupId> <artifactId>openshift-maven-plugin</artifactId> <version>USD{fuse.version}</version> <executions> <execution> <goals> <goal>resource</goal> <goal>build</goal> </goals> </execution> </executions> <dependencies> <dependency> <groupId>io.acme</groupId> <artifactId>myenricher</artifactId> <version>1.0</version> <configuration> <enricher> <config> <f8-service-discovery> <scheme>https</scheme> <path>/api</path> <descriptionPath>/api/openapi.json</descriptionPath> </f8-service-discovery> </config> </enricher> </configuration> </dependency> </dependencies> </plugin> Save your changes. Alternatively, you can use a src/main/fabric8/service.yml fragment to override the annotation values, as shown in the following example: 5.3. Fabric8 Service Discovery Enricher elements The following table describes the elements that you can specify for the Fabric8 Service Discovery Enricher, if you want to override the default values and the values defined in the camel-context.xml file. You can define these values in the Fuse Rest DSL project's pom.xml file or in a src/main/fabric8/service.yml file. (If you define values in both files, the enricher uses the values from the service.yml file.) See Customizing the API service annotation values for examples. Table 5.1. Fabric8 Service Discovery Enricher elements Element Description Default springDir The path to the spring configuration directory that contains the camel-context.xml file. The /src/main/resources/spring path which is used to recognize a Camel Rest DSL project. scheme The scheme part of the URL where the service is hosted. You can specify "http" or "https". http path The path part of the URL where the API service is hosted. port The port part of the URL where the API service is hosted. 80 descriptionPath The path to a location where the API service description document is hosted. You can specify either a relative path if the document is self-hosted or a full URL if the document is hosted externally. discoveryVersion The version of the 3scale discovery implementation. v1 discoverable The element that sets the discovery.3scale.net label to either true or false . If set to true , 3scale will try to discover this service. If set to false , 3scale will not try to discover this service. You can use this element as a switch, to temporary turn off 3scale discovery integration by setting it to "false". If you do not specify a value, the enricher tries to auto-detect whether it can make the service discoverable. If the enricher determines that it cannot make the service discoverable, 3scale will not try to discover this service. | [
"<plugin> <groupId>org.jboss.redhat-fuse</groupId> <artifactId>openshift-maven-plugin</artifactId> <version>USD{fuse.version}</version> <executions> <execution> <goals> <goal>resource</goal> <goal>build</goal> </goals> </execution> </executions> </plugin>",
"<?xml version=\"1.0\" encoding=\"UTF-8\"?> <beans xmlns=\"http://www.springframework.org/schema/beans\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\" http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd http://camel.apache.org/schema/spring http://camel.apache.org/schema/spring/camel-spring.xsd\"> <camelContext xmlns=\"http://camel.apache.org/schema/spring\"> <restConfiguration component=\"servlet\" scheme=\"https\" contextPath=\"myapi\" apiContextPath=\"myapi/openapi.json\"/>",
"<plugin> <groupId>org.jboss.redhat-fuse</groupId> <artifactId>openshift-maven-plugin</artifactId> <version>USD{fuse.version}</version> <executions> <execution> <goals> <goal>resource</goal> <goal>build</goal> </goals> </execution> </executions> </plugin>",
"<plugin> <groupId>org.jboss.redhat-fuse</groupId> <artifactId>openshift-maven-plugin</artifactId> <version>USD{fuse.version}</version> <executions> <execution> <goals> <goal>resource</goal> <goal>build</goal> </goals> </execution> </executions> <dependencies> <dependency> <groupId>io.acme</groupId> <artifactId>myenricher</artifactId> <version>1.0</version> <configuration> <enricher> <config> <f8-service-discovery> <scheme>https</scheme> <path>/api</path> <descriptionPath>/api/openapi.json</descriptionPath> </f8-service-discovery> </config> </enricher> </configuration> </dependency> </dependencies> </plugin>",
"kind: Service name: metadata: labels: discovery.3scale.net/discoverable : \"true\" annotations: discovery.3scale.net/discovery-version : \"v1\" discovery.3scale.net/scheme : \"https\" discovery.3scale.net/path : \"/api\" discovery.3scale.net/port : \"443\" discovery.3scale.net/description-path : \"/api/openapi.json\" spec: type: LoadBalancer"
] | https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/designing_apis/threescale-discover-service |
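As a quick, non-authoritative check after you deploy the project, you can confirm that the enricher applied the discovery label and annotations by inspecting the generated service; the service name below is a placeholder for your own service:
oc get service <service_name> -o yaml
Look for the discovery.3scale.net entries under metadata.labels and metadata.annotations in the output.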
Chapter 6. ControllerRevision [apps/v1] | Chapter 6. ControllerRevision [apps/v1] Description ControllerRevision implements an immutable snapshot of state data. Clients are responsible for serializing and deserializing the objects that contain their internal state. Once a ControllerRevision has been successfully created, it can not be updated. The API Server will fail validation of all requests that attempt to mutate the Data field. ControllerRevisions may, however, be deleted. Note that, due to its use by both the DaemonSet and StatefulSet controllers for update and rollback, this object is beta. However, it may be subject to name and representation changes in future releases, and clients should not depend on its stability. It is primarily for internal use by controllers. Type object Required revision 6.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources data RawExtension Data is the serialized representation of the state. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata revision integer Revision indicates the revision of the state represented by Data. 6.2. API endpoints The following API endpoints are available: /apis/apps/v1/controllerrevisions GET : list or watch objects of kind ControllerRevision /apis/apps/v1/watch/controllerrevisions GET : watch individual changes to a list of ControllerRevision. deprecated: use the 'watch' parameter with a list operation instead. /apis/apps/v1/namespaces/{namespace}/controllerrevisions DELETE : delete collection of ControllerRevision GET : list or watch objects of kind ControllerRevision POST : create a ControllerRevision /apis/apps/v1/watch/namespaces/{namespace}/controllerrevisions GET : watch individual changes to a list of ControllerRevision. deprecated: use the 'watch' parameter with a list operation instead. /apis/apps/v1/namespaces/{namespace}/controllerrevisions/{name} DELETE : delete a ControllerRevision GET : read the specified ControllerRevision PATCH : partially update the specified ControllerRevision PUT : replace the specified ControllerRevision /apis/apps/v1/watch/namespaces/{namespace}/controllerrevisions/{name} GET : watch changes to an object of kind ControllerRevision. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. 6.2.1. /apis/apps/v1/controllerrevisions Table 6.1. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. 
continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . 
In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description list or watch objects of kind ControllerRevision Table 6.2. HTTP responses HTTP code Reponse body 200 - OK ControllerRevisionList schema 401 - Unauthorized Empty 6.2.2. /apis/apps/v1/watch/controllerrevisions Table 6.3. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. 
fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. 
Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch individual changes to a list of ControllerRevision. deprecated: use the 'watch' parameter with a list operation instead. Table 6.4. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 6.2.3. /apis/apps/v1/namespaces/{namespace}/controllerrevisions Table 6.5. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 6.6. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of ControllerRevision Table 6.7. Query parameters Parameter Type Description continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. 
Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. 
If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. Table 6.8. Body parameters Parameter Type Description body DeleteOptions schema Table 6.9. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind ControllerRevision Table 6.10. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. 
The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 6.11. HTTP responses HTTP code Reponse body 200 - OK ControllerRevisionList schema 401 - Unauthorized Empty HTTP method POST Description create a ControllerRevision Table 6.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. 
The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.13. Body parameters Parameter Type Description body ControllerRevision schema Table 6.14. HTTP responses HTTP code Reponse body 200 - OK ControllerRevision schema 201 - Created ControllerRevision schema 202 - Accepted ControllerRevision schema 401 - Unauthorized Empty 6.2.4. /apis/apps/v1/watch/namespaces/{namespace}/controllerrevisions Table 6.15. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 6.16. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. 
limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. 
watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch individual changes to a list of ControllerRevision. deprecated: use the 'watch' parameter with a list operation instead. Table 6.17. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 6.2.5. /apis/apps/v1/namespaces/{namespace}/controllerrevisions/{name} Table 6.18. Global path parameters Parameter Type Description name string name of the ControllerRevision namespace string object name and auth scope, such as for teams and projects Table 6.19. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a ControllerRevision Table 6.20. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 6.21. Body parameters Parameter Type Description body DeleteOptions schema Table 6.22. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified ControllerRevision Table 6.23. HTTP responses HTTP code Reponse body 200 - OK ControllerRevision schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified ControllerRevision Table 6.24. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). 
fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 6.25. Body parameters Parameter Type Description body Patch schema Table 6.26. HTTP responses HTTP code Reponse body 200 - OK ControllerRevision schema 201 - Created ControllerRevision schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified ControllerRevision Table 6.27. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.28. Body parameters Parameter Type Description body ControllerRevision schema Table 6.29. HTTP responses HTTP code Reponse body 200 - OK ControllerRevision schema 201 - Created ControllerRevision schema 401 - Unauthorized Empty 6.2.6. /apis/apps/v1/watch/namespaces/{namespace}/controllerrevisions/{name} Table 6.30. Global path parameters Parameter Type Description name string name of the ControllerRevision namespace string object name and auth scope, such as for teams and projects Table 6.31. 
Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. 
Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch changes to an object of kind ControllerRevision. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 6.32. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/metadata_apis/controllerrevision-apps-v1 |
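As a brief, hedged illustration of the list and read endpoints documented above — the namespace, name, token, and API server host are placeholders rather than values taken from this reference — you could query ControllerRevision objects as follows:
# List ControllerRevisions in a namespace, then read one as YAML
oc get controllerrevisions -n <namespace>
oc get controllerrevision <name> -n <namespace> -o yaml
# Equivalent raw call against the list endpoint in section 6.2.3, limiting the result set
# (-k shown only for brevity; configure proper CA trust in practice)
curl -k -H "Authorization: Bearer <token>" "https://<api_server>:6443/apis/apps/v1/namespaces/<namespace>/controllerrevisions?limit=10"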
Chapter 4. Installing a cluster on IBM Cloud with customizations | Chapter 4. Installing a cluster on IBM Cloud with customizations In OpenShift Container Platform version 4.14, you can install a customized cluster on infrastructure that the installation program provisions on IBM Cloud(R). To customize the installation, you modify parameters in the install-config.yaml file before you install the cluster. 4.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You configured an IBM Cloud(R) account to host the cluster. If you use a firewall, you configured it to allow the sites that your cluster requires access to. You configured the ccoctl utility before you installed the cluster. For more information, see Configuring IAM for IBM Cloud(R) . 4.2. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.14, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 4.3. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. 
If you have an existing key pair, ensure your public key is in your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 . Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) Next steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 4.4. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a computer that runs Linux or macOS, with at least 1.2 GB of local disk space. Procedure Go to the Cluster Type page on the Red Hat Hybrid Cloud Console. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select your infrastructure provider from the Run it yourself section of the page. Select your host operating system and architecture from the dropdown menus under OpenShift Installer and click Download Installer . Place the downloaded file in the directory where you want to store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both of the files are required to delete the cluster. Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components.
Tip Alternatively, you can retrieve the installation program from the Red Hat Customer Portal , where you can specify a version of the installation program to download. However, you must have an active subscription to access this page. 4.5. Exporting the API key You must set the API key you created as a global variable; the installation program ingests the variable during startup to set the API key. Prerequisites You have created either a user API key or service ID API key for your IBM Cloud(R) account. Procedure Export your API key for your account as a global variable: USD export IC_API_KEY=<api_key> Important You must set the variable name exactly as specified; the installation program expects the variable name to be present during startup. 4.6. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on IBM Cloud(R). Prerequisites You have the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. When specifying the directory: Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory. Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Note Always delete the ~/.powervs directory to avoid reusing a stale configuration. Run the following command: USD rm -rf ~/.powervs At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select ibmcloud as the platform to target. Select the region to deploy the cluster to. Select the base domain to deploy the cluster to. The base domain corresponds to the public DNS zone that you created for your cluster. Enter a descriptive name for your cluster. Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section. Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. Additional resources Installation configuration parameters for IBM Cloud(R) 4.6.1. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 4.1. 
Minimum resource requirements Machine Operating System vCPU Virtual RAM Storage Input/Output Per Second (IOPS) Bootstrap RHCOS 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS 2 8 GB 100 GB 300 Note As of OpenShift Container Platform version 4.13, RHCOS is based on RHEL version 9.2, which updates the micro-architecture requirements. The following list contains the minimum instruction set architectures (ISA) that each architecture requires: x86-64 architecture requires x86-64-v2 ISA ARM64 architecture requires ARMv8.0-A ISA IBM Power architecture requires Power 9 ISA s390x architecture requires z14 ISA For more information, see RHEL Architectures . If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OpenShift Container Platform. Additional resources Optimizing storage 4.6.2. Tested instance types for IBM Cloud The following IBM Cloud(R) instance types have been tested with OpenShift Container Platform. Example 4.1. Machine series bx2-8x32 bx2d-4x16 bx3d-4x20 cx2-8x16 cx2d-4x8 cx3d-8x20 gx2-8x64x1v100 gx3-16x80x1l4 mx2-8x64 mx2d-4x32 mx3d-2x20 ox2-4x32 ox2-8x64 ux2d-2x56 vx2d-4x56 4.6.3. Sample customized install-config.yaml file for IBM Cloud You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. Important This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and then modify it. apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 3 hyperthreading: Enabled 4 name: master platform: ibmcloud: {} replicas: 3 compute: 5 6 - hyperthreading: Enabled 7 name: worker platform: ibmcloud: {} replicas: 3 metadata: name: test-cluster 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 9 serviceNetwork: - 172.30.0.0/16 platform: ibmcloud: region: us-south 10 credentialsMode: Manual publish: External pullSecret: '{"auths": ...}' 11 fips: false 12 sshKey: ssh-ed25519 AAAA... 13 1 8 10 11 Required. The installation program prompts you for this value. 2 5 If you do not provide these parameters and values, the installation program provides the default value. 3 6 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 4 7 Enables or disables simultaneous multithreading, also known as Hyper-Threading. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger machine types, such as n1-standard-8 , for your machines if you disable simultaneous multithreading. 9 The cluster network plugin to install. The supported values are OVNKubernetes and OpenShiftSDN . The default value is OVNKubernetes . 12 Enables or disables FIPS mode. By default, FIPS mode is not enabled. 
If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. 13 Optional: provide the sshKey value that you use to access the machines in your cluster. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 4.6.4. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. 
The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 4.7. Manually creating IAM Installing the cluster requires that the Cloud Credential Operator (CCO) operate in manual mode. While the installation program configures the CCO for manual mode, you must specify the identity and access management secrets for you cloud provider. You can use the Cloud Credential Operator (CCO) utility ( ccoctl ) to create the required IBM Cloud(R) resources. Prerequisites You have configured the ccoctl binary. You have an existing install-config.yaml file. Procedure Edit the install-config.yaml configuration file so that it contains the credentialsMode parameter set to Manual . Example install-config.yaml configuration file apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1 compute: - architecture: amd64 hyperthreading: Enabled 1 This line is added to set the credentialsMode parameter to Manual . To generate the manifests, run the following command from the directory that contains the installation program: USD ./openshift-install create manifests --dir <installation_directory> From the directory that contains the installation program, set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest custom resources (CRs) from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. 
This command creates a YAML file for each CredentialsRequest object. Sample CredentialsRequest object apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: labels: controller-tools.k8s.io: "1.0" name: openshift-image-registry-ibmcos namespace: openshift-cloud-credential-operator spec: secretRef: name: installer-cloud-credentials namespace: openshift-image-registry providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: IBMCloudProviderSpec policies: - attributes: - name: serviceName value: cloud-object-storage roles: - crn:v1:bluemix:public:iam::::role:Viewer - crn:v1:bluemix:public:iam::::role:Operator - crn:v1:bluemix:public:iam::::role:Editor - crn:v1:bluemix:public:iam::::serviceRole:Reader - crn:v1:bluemix:public:iam::::serviceRole:Writer - attributes: - name: resourceType value: resource-group roles: - crn:v1:bluemix:public:iam::::role:Viewer Create the service ID for each credential request, assign the policies defined, create an API key, and generate the secret: USD ccoctl ibmcloud create-service-id \ --credentials-requests-dir=<path_to_credential_requests_directory> \ 1 --name=<cluster_name> \ 2 --output-dir=<installation_directory> \ 3 --resource-group-name=<resource_group_name> 4 1 Specify the directory containing the files for the component CredentialsRequest objects. 2 Specify the name of the OpenShift Container Platform cluster. 3 Optional: Specify the directory in which you want the ccoctl utility to create objects. By default, the utility creates objects in the directory in which the commands are run. 4 Optional: Specify the name of the resource group used for scoping the access policies. Note If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter. If an incorrect resource group name is provided, the installation fails during the bootstrap phase. To find the correct resource group name, run the following command: USD grep resourceGroupName <installation_directory>/manifests/cluster-infrastructure-02-config.yml Verification Ensure that the appropriate secrets were generated in your cluster's manifests directory. 4.8. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have the OpenShift Container Platform installation program and the pull secret for your cluster. You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. 
Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 4.9. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.14. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.14 Linux Client entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.14 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. 
Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now next to the OpenShift v4.14 macOS Client entry and save the file. Note For macOS arm64, choose the OpenShift v4.14 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification Verify your installation by using an oc command: USD oc <command> 4.10. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify that you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin Additional resources Accessing the web console 4.11. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.14, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources About remote health monitoring 4.12. Next steps Customize your cluster . If necessary, you can opt out of remote health reporting . | [
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"tar -xvf openshift-install-linux.tar.gz",
"export IC_API_KEY=<api_key>",
"./openshift-install create install-config --dir <installation_directory> 1",
"rm -rf ~/.powervs",
"apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 3 hyperthreading: Enabled 4 name: master platform: ibmcloud: {} replicas: 3 compute: 5 6 - hyperthreading: Enabled 7 name: worker platform: ibmcloud: {} replicas: 3 metadata: name: test-cluster 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 9 serviceNetwork: - 172.30.0.0/16 platform: ibmcloud: region: us-south 10 credentialsMode: Manual publish: External pullSecret: '{\"auths\": ...}' 11 fips: false 12 sshKey: ssh-ed25519 AAAA... 13",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1 compute: - architecture: amd64 hyperthreading: Enabled",
"./openshift-install create manifests --dir <installation_directory>",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: labels: controller-tools.k8s.io: \"1.0\" name: openshift-image-registry-ibmcos namespace: openshift-cloud-credential-operator spec: secretRef: name: installer-cloud-credentials namespace: openshift-image-registry providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: IBMCloudProviderSpec policies: - attributes: - name: serviceName value: cloud-object-storage roles: - crn:v1:bluemix:public:iam::::role:Viewer - crn:v1:bluemix:public:iam::::role:Operator - crn:v1:bluemix:public:iam::::role:Editor - crn:v1:bluemix:public:iam::::serviceRole:Reader - crn:v1:bluemix:public:iam::::serviceRole:Writer - attributes: - name: resourceType value: resource-group roles: - crn:v1:bluemix:public:iam::::role:Viewer",
"ccoctl ibmcloud create-service-id --credentials-requests-dir=<path_to_credential_requests_directory> \\ 1 --name=<cluster_name> \\ 2 --output-dir=<installation_directory> \\ 3 --resource-group-name=<resource_group_name> 4",
"grep resourceGroupName <installation_directory>/manifests/cluster-infrastructure-02-config.yml",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/installing_on_ibm_cloud/installing-ibm-cloud-customizations |
16.18. Write Changes to Disk | 16.18. Write Changes to Disk The installer prompts you to confirm the partitioning options that you selected. Click Write changes to disk to allow the installer to partition your hard drive and install Red Hat Enterprise Linux. Figure 16.46. Writing storage configuration to disk If you are certain that you want to proceed, click Write changes to disk . Warning Up to this point in the installation process, the installer has made no lasting changes to your computer. When you click Write changes to disk , the installer will allocate space on your hard drive and start to transfer Red Hat Enterprise Linux into this space. Depending on the partitioning option that you chose, this process might include erasing data that already exists on your computer. To revise any of the choices that you made up to this point, click Go back . To cancel installation completely, switch off your computer. After you click Write changes to disk , allow the installation process to complete. If the process is interrupted (for example, by you switching off or resetting the computer, or by a power outage) you will probably not be able to use your computer until you restart and complete the Red Hat Enterprise Linux installation process, or install a different operating system. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/write_changes_to_disk-ppc |
Chapter 13. Using a vault to obtain secrets | Chapter 13. Using a vault to obtain secrets To obtain a secret from a vault rather than entering it directly, enter the following specially crafted string into the appropriate field: where the key is the name of the secret recognized by the vault. To prevent secrets from leaking across realms, Red Hat Single Sign-On combines the realm name with the key obtained from the vault expression. This method means that the key does not directly map to an entry in the vault but creates the final entry name according to the algorithm used to combine the key with the realm name. You can obtain the secret from the vault in the following fields: SMTP password In the realm SMTP settings LDAP bind credential In the LDAP settings of LDAP-based user federation. OIDC identity provider secret In the Client Secret inside identity provider OpenID Connect Config To use a vault, register a vault provider in Red Hat Single Sign-On. You can use the providers described here or implement your provider. See the Server Developer Guide for more information. Note Red Hat Single Sign-On permits a maximum of one active vault provider per Red Hat Single Sign-On instance at a time. Configure the vault provider in each instance within the cluster consistently. 13.1. Kubernetes / OpenShift files plain-text vault provider Red Hat Single Sign-On supports vault implementation for Kubernetes secrets . You can mount Kubernetes secrets as data volumes, and they appear as a directory with a flat-file structure. Red Hat Single Sign-On represents each secret as a file with the file's name as the secret name and the file's contents as the secret value. You must name the files within this directory as the secret name prefixed by the realm name and an underscore. Double all underscores within the secret name or the realm name in the file name. For example, for a field within a realm named sso_realm , a reference to a secret with the name secret-name would be written as USD{vault.secret-name} , and the file name looked up would be sso__realm_secret-name . Note the underscore doubled in realm name. To use this type of secret store, you must declare the files-plaintext vault provider in the standalone.xml file and set its parameter for the directory containing the mounted volume. This example shows the files-plaintext provider with the directory where vault files are searched set to standalone/configuration/vault relative to the Red Hat Single Sign-On base directory: <spi name="vault"> <default-provider>files-plaintext</default-provider> <provider name="files-plaintext" enabled="true"> <properties> <property name="dir" value="USD{jboss.home.dir}/standalone/configuration/vault/" /> </properties> </provider> </spi> Here is the equivalent configuration using CLI commands: /subsystem=keycloak-server/spi=vault/:add /subsystem=keycloak-server/spi=vault/provider=files-plaintext/:add(enabled=true,properties={dir => "USD{jboss.home.dir}/standalone/configuration/vault"}) /subsystem=keycloak-server/spi=vault:write-attribute(name=default-provider,value=files-plaintext) 13.2. Elytron credential store vault provider Red Hat Single Sign-On also provides support for reading secrets stored in an Elytron credential store. The elytron-cs-keystore vault provider can retrieve secrets from the credential store's keystore based implementation, which is also the default implementation Elytron provides. A keystore backs this credential store. JCEKS is the default format, but you can use other formats such as PKCS12 . 
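To make the file-naming rule of the files plain-text provider in Section 13.1 concrete, the following sketch creates a vault entry for the example used above, a realm named sso_realm and a secret named secret-name. The vault directory and the secret value are placeholders; the directory must match the dir property configured for the provider:

# Underscores in the realm name are doubled, so realm "sso_realm" + key "secret-name" becomes "sso__realm_secret-name"
mkdir -p /opt/rh-sso/standalone/configuration/vault
printf '%s' 'the-secret-value' > /opt/rh-sso/standalone/configuration/vault/sso__realm_secret-name

A field in that realm can then reference the entry with the expression ${vault.secret-name}.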
Users can create and manage the store contents using the elytron subsystem in WildFly/JBoss EAP, or the elytron-tool.sh script. To use this provider, you must declare the elytron-cs-keystore in the keycloak-server subsystem and set the location and master secret of the keystore created by Elytron. An example of the minimal configuration for the provider follows: <spi name="vault"> <default-provider>elytron-cs-keystore</default-provider> <provider name="elytron-cs-keystore" enabled="true"> <properties> <property name="location" value="USD{jboss.home.dir}/standalone/configuration/vault/credential-store.jceks" /> <property name="secret" value="secretpw1!"/> </properties> </provider> </spi> If the underlying keystore has a format different from JCEKS , you must specify this format by using the keyStoreType : <spi name="vault"> <default-provider>elytron-cs-keystore</default-provider> <provider name="elytron-cs-keystore" enabled="true"> <properties> <property name="location" value="USD{jboss.home.dir}/standalone/configuration/vault/credential-store.p12" /> <property name="secret" value="secretpw1!"/> <property name="keyStoreType" value="PKCS12"/> </properties> </provider> </spi> For the secret, the elytron-cs-keystore provider supports clear-text values and masked values by using the elytron-tool.sh script: <spi name="vault"> ... <property name="secret" value="MASK-3u2HNQaMogJJ8VP7J6gRIl;12345678;321"/> ... </spi> For more information about creating and managing elytron credential stores and masking keystore secrets, see the Elytron documentation. Note Red Hat Single Sign-On implements the elytron-cs-keystore vault provider as a WildFly extension and is available if the Red Hat Single Sign-On server runs on WildFly/JBoss EAP only. 13.3. Key resolvers All built-in providers support the configuration of key resolvers. A key resolver implements the algorithm or strategy for combining the realm name with the key, obtained from the USD{vault.key} expression, into the final entry name used to retrieve the secret from the vault. Red Hat Single Sign-On uses the keyResolvers property to configure the resolvers that the provider uses. The value is a comma-separated list of resolver names. An example of the configuration for the files-plaintext provider follows: <spi name="vault"> <default-provider>files-plaintext</default-provider> <provider name="files-plaintext" enabled="true"> <properties> <property name="dir" value="USD{jboss.home.dir}/standalone/configuration/vault/" /> <property name="keyResolvers" value="REALM_UNDERSCORE_KEY, KEY_ONLY"/> </properties> </provider> </spi> The resolvers run in the same order you declare them in the configuration. For each resolver, Red Hat Single Sign-On uses the last entry name the resolver produces, which combines the realm with the vault key to search for the vault's secret. If Red Hat Single Sign-On finds a secret, it returns the secret. If not, Red Hat Single Sign-On uses the next resolver. This search continues until Red Hat Single Sign-On finds a non-empty secret or runs out of resolvers. If Red Hat Single Sign-On finds no secret, Red Hat Single Sign-On returns an empty secret. In the example, Red Hat Single Sign-On uses the REALM_UNDERSCORE_KEY resolver first. If Red Hat Single Sign-On finds an entry in the vault by using that resolver, Red Hat Single Sign-On returns that entry. If not, Red Hat Single Sign-On searches again using the KEY_ONLY resolver.
If Red Hat Single Sign-On finds an entry by using the KEY_ONLY resolver, Red Hat Single Sign-On returns that entry. If Red Hat Single Sign-On uses all resolvers, Red Hat Single Sign-On returns an empty secret. A list of the currently available resolvers follows: Name Description KEY_ONLY Red Hat Single Sign-On ignores the realm name and uses the key from the vault expression. REALM_UNDERSCORE_KEY Red Hat Single Sign-On combines the realm and key by using an underscore character. Red Hat Single Sign-On escapes occurrences of underscores in the realm or key with another underscore character. For example, if the realm is called master_realm and the key is smtp_key , the combined key is master__realm_smtp__key . REALM_FILESEPARATOR_KEY Red Hat Single Sign-On combines the realm and key by using the platform file separator character. If you have not configured a resolver for the built-in providers, Red Hat Single Sign-On selects the REALM_UNDERSCORE_KEY . 13.4. Sample Configuration The following is an example of configuring a vault and credential store. The procedure involves two parts: Creating the credential store and a vault, where the credential store and vault passwords are in plain text. Updating the credential store and vault to have the password use a mask provided by elytron-tool.sh . In this example, the test target used is an LDAP instance with BIND DN credential: secret12 . The target is mapped using user federation in the realm ldaptest . 13.4.1. Configuring the credential store and vault without a mask You create the credential store and a vault where the credential store and vault passwords are in plain text. Prerequisites A running LDAP instance has BIND DN credential: secret12 . The alias uses the format <realm-name>_< key-value> when using the default key resolver. In this case, the instance is running in the realm ldaptest and ldaptest_ldap_secret is the alias that corresponds to the value ldap_secret in that realm. Note The resolver replaces underscore characters with double underscore characters in the realm and key names. For example, for the key ldaptest_ldap_secret , the final key will be ldaptest_ldap__secret . Procedure Create the Elytron credential store. [standalone@localhost:9990 /] /subsystem=elytron/credential-store=test-store:add(create=true, location=/home/test/test-store.p12, credential-reference={clear-text=testpwd1!},implementation-properties={keyStoreType=PKCS12}) Add an alias to the credential store. /subsystem=elytron/credential-store=test-store:add-alias(alias=ldaptest_ldap__secret,secret-value=secret12) Notice how the resolver causes the key ldaptest_ldap__secret to use double underscores. List the aliases from the credential store to inspect the contents of the keystore that is produced by Elytron. keytool -list -keystore /home/test/test-store.p12 -storetype PKCS12 -storepass testpwd1! Keystore type: PKCS12 Keystore provider: SUN Your keystore contains 1 entries ldaptest_ldap__secret/passwordcredential/clear/, Oct 12, 2020, SecretKeyEntry, Configure the vault SPI in Red Hat Single Sign-On. /subsystem=keycloak-server/spi=vault:add(default-provider=elytron-cs-keystore) /subsystem=keycloak-server/spi=vault/provider=elytron-cs-keystore:add(enabled=true, properties={location=>/home/test/test-store.p12, secret=>testpwd1!, keyStoreType=>PKCS12}) At this point, the vault and credentials store passwords are not masked. 
<spi name="vault"> <default-provider>elytron-cs-keystore</default-provider> <provider name="elytron-cs-keystore" enabled="true"> <properties> <property name="location" value="/home/test/test-store.p12"/> <property name="secret" value="testpwd1!"/> <property name="keyStoreType" value="PKCS12"/> </properties> </provider> </spi> <credential-stores> <credential-store name="test-store" location="/home/test/test-store.p12" create="true"> <implementation-properties> <property name="keyStoreType" value="PKCS12"/> </implementation-properties> <credential-reference clear-text="testpwd1!"/> </credential-store> </credential-stores> In the LDAP provider, replace binDN credential with USD{vault.ldap_secret} . Test your LDAP connection. LDAP Vault 13.4.2. Masking the password in the credential store and vault You can now update the credential store and vault to have passwords that use a mask provided by elytron-tool.sh . Create a masked password using values for the salt and the iteration parameters: USD EAP_HOME/bin/elytron-tool.sh mask --salt SALT --iteration ITERATION_COUNT --secret PASSWORD For example: elytron-tool.sh mask --salt 12345678 --iteration 123 --secret testpwd1! MASK-3BUbFEyWu0lRAu8.fCqyUk;12345678;123 Update the Elytron credential store configuration to use the masked password. /subsystem=elytron/credential-store=cs-store:write-attribute(name=credential-reference.clear-text,value="MASK-3BUbFEyWu0lRAu8.fCqyUk;12345678;123") Update the Red Hat Single Sign-On vault configuration to use the masked password. /subsystem=keycloak-server/spi=vault/provider=elytron-cs-keystore:remove() /subsystem=keycloak-server/spi=vault/provider=elytron-cs-keystore:add(enabled=true, properties={location=>/home/test/test-store.p12, secret=>"MASK-3BUbFEyWu0lRAu8.fCqyUk;12345678;123", keyStoreType=>PKCS12}) The vault and credential store are now masked: <spi name="vault"> <default-provider>elytron-cs-keystore</default-provider> <provider name="elytron-cs-keystore" enabled="true"> <properties> <property name="location" value="/home/test/test-store.p12"/> <property name="secret" value="MASK-3BUbFEyWu0lRAu8.fCqyUk;12345678;123"/> <property name="keyStoreType" value="PKCS12"/> </properties> </provider> </spi> .... ..... <credential-stores> <credential-store name="test-store" location="/home/test/test-store.p12" create="true"> <implementation-properties> <property name="keyStoreType" value="PKCS12"/> </implementation-properties> <credential-reference clear-text="MASK-3BUbFEyWu0lRAu8.fCqyUk;12345678;123"/> </credential-store> </credential-stores> You can now test the connection to the LDAP using USD{vault.ldap_secret} . Additional resources For more information about the Elytron tool, see Using Credential Stores with Elytron Client . | [
"**USD{vault.**_key_**}**",
"<spi name=\"vault\"> <default-provider>files-plaintext</default-provider> <provider name=\"files-plaintext\" enabled=\"true\"> <properties> <property name=\"dir\" value=\"USD{jboss.home.dir}/standalone/configuration/vault/\" /> </properties> </provider> </spi>",
"/subsystem=keycloak-server/spi=vault/:add /subsystem=keycloak-server/spi=vault/provider=files-plaintext/:add(enabled=true,properties={dir => \"USD{jboss.home.dir}/standalone/configuration/vault\"}) /subsystem=keycloak-server/spi=vault:write-attribute(name=default-provider,value=files-plaintext)",
"<spi name=\"vault\"> <default-provider>elytron-cs-keystore</default-provider> <provider name=\"elytron-cs-keystore\" enabled=\"true\"> <properties> <property name=\"location\" value=\"USD{jboss.home.dir}/standalone/configuration/vault/credential-store.jceks\" /> <property name=\"secret\" value=\"secretpw1!\"/> </properties> </provider> </spi>",
"<spi name=\"vault\"> <default-provider>elytron-cs-keystore</default-provider> <provider name=\"elytron-cs-keystore\" enabled=\"true\"> <properties> <property name=\"location\" value=\"USD{jboss.home.dir}/standalone/configuration/vault/credential-store.p12\" /> <property name=\"secret\" value=\"secretpw1!\"/> <property name=\"keyStoreType\" value=\"PKCS12\"/> </properties> </provider> </spi>",
"<spi name=\"vault\"> <property name=\"secret\" value=\"MASK-3u2HNQaMogJJ8VP7J6gRIl;12345678;321\"/> </spi>",
"<spi name=\"vault\"> <default-provider>files-plaintext</default-provider> <provider name=\"files-plaintext\" enabled=\"true\"> <properties> <property name=\"dir\" value=\"USD{jboss.home.dir}/standalone/configuration/vault/\" /> <property name=\"keyResolvers\" value=\"REALM_UNDERSCORE_KEY, KEY_ONLY\"/> </properties> </provider> </spi>",
"[standalone@localhost:9990 /] /subsystem=elytron/credential-store=test-store:add(create=true, location=/home/test/test-store.p12, credential-reference={clear-text=testpwd1!},implementation-properties={keyStoreType=PKCS12})",
"/subsystem=elytron/credential-store=test-store:add-alias(alias=ldaptest_ldap__secret,secret-value=secret12)",
"keytool -list -keystore /home/test/test-store.p12 -storetype PKCS12 -storepass testpwd1! Keystore type: PKCS12 Keystore provider: SUN Your keystore contains 1 entries ldaptest_ldap__secret/passwordcredential/clear/, Oct 12, 2020, SecretKeyEntry,",
"/subsystem=keycloak-server/spi=vault:add(default-provider=elytron-cs-keystore) /subsystem=keycloak-server/spi=vault/provider=elytron-cs-keystore:add(enabled=true, properties={location=>/home/test/test-store.p12, secret=>testpwd1!, keyStoreType=>PKCS12})",
"<spi name=\"vault\"> <default-provider>elytron-cs-keystore</default-provider> <provider name=\"elytron-cs-keystore\" enabled=\"true\"> <properties> <property name=\"location\" value=\"/home/test/test-store.p12\"/> <property name=\"secret\" value=\"testpwd1!\"/> <property name=\"keyStoreType\" value=\"PKCS12\"/> </properties> </provider> </spi> <credential-stores> <credential-store name=\"test-store\" location=\"/home/test/test-store.p12\" create=\"true\"> <implementation-properties> <property name=\"keyStoreType\" value=\"PKCS12\"/> </implementation-properties> <credential-reference clear-text=\"testpwd1!\"/> </credential-store> </credential-stores>",
"EAP_HOME/bin/elytron-tool.sh mask --salt SALT --iteration ITERATION_COUNT --secret PASSWORD",
"elytron-tool.sh mask --salt 12345678 --iteration 123 --secret testpwd1! MASK-3BUbFEyWu0lRAu8.fCqyUk;12345678;123",
"/subsystem=elytron/credential-store=cs-store:write-attribute(name=credential-reference.clear-text,value=\"MASK-3BUbFEyWu0lRAu8.fCqyUk;12345678;123\")",
"/subsystem=keycloak-server/spi=vault/provider=elytron-cs-keystore:remove() /subsystem=keycloak-server/spi=vault/provider=elytron-cs-keystore:add(enabled=true, properties={location=>/home/test/test-store.p12, secret=>\"MASK-3BUbFEyWu0lRAu8.fCqyUk;12345678;123\", keyStoreType=>PKCS12})",
"<spi name=\"vault\"> <default-provider>elytron-cs-keystore</default-provider> <provider name=\"elytron-cs-keystore\" enabled=\"true\"> <properties> <property name=\"location\" value=\"/home/test/test-store.p12\"/> <property name=\"secret\" value=\"MASK-3BUbFEyWu0lRAu8.fCqyUk;12345678;123\"/> <property name=\"keyStoreType\" value=\"PKCS12\"/> </properties> </provider> </spi> . .. <credential-stores> <credential-store name=\"test-store\" location=\"/home/test/test-store.p12\" create=\"true\"> <implementation-properties> <property name=\"keyStoreType\" value=\"PKCS12\"/> </implementation-properties> <credential-reference clear-text=\"MASK-3BUbFEyWu0lRAu8.fCqyUk;12345678;123\"/> </credential-store> </credential-stores>"
] | https://docs.redhat.com/en/documentation/red_hat_single_sign-on/7.6/html/server_administration_guide/vault-administration |
Chapter 10. NetworkManager | Chapter 10. NetworkManager NetworkManager is a dynamic network control and configuration system that attempts to keep network devices and connections up and active when they are available. NetworkManager consists of a core daemon, a GNOME Notification Area applet that provides network status information, and graphical configuration tools that can create, edit and remove connections and interfaces. NetworkManager can be used to configure the following types of connections: Ethernet, wireless, mobile broadband (such as cellular 3G), and DSL and PPPoE (Point-to-Point Protocol over Ethernet). In addition, NetworkManager allows for the configuration of network aliases, static routes, DNS information and VPN connections, as well as many connection-specific parameters. Finally, NetworkManager provides a rich API via D-Bus which allows applications to query and control network configuration and state. Previous versions of Red Hat Enterprise Linux included the Network Administration Tool , which was commonly known as system-config-network after its command-line invocation. In Red Hat Enterprise Linux 6, NetworkManager replaces the former Network Administration Tool while providing enhanced functionality, such as user-specific and mobile broadband configuration. It is also possible to configure the network in Red Hat Enterprise Linux 6 by editing interface configuration files; see Chapter 11, Network Interfaces for more information. NetworkManager may be installed by default on your version of Red Hat Enterprise Linux. To ensure that it is, run the following command as root : 10.1. The NetworkManager Daemon The NetworkManager daemon runs with root privileges and is usually configured to start up at boot time. You can determine whether the NetworkManager daemon is running by entering this command as root : The service command will report NetworkManager is stopped if the NetworkManager service is not running. To start it for the current session: Run the chkconfig command to ensure that NetworkManager starts up every time the system boots: For more information on starting, stopping and managing services and runlevels, see Chapter 12, Services and Daemons . | [
"~]# yum install NetworkManager",
"~]# service NetworkManager status NetworkManager (pid 1527) is running",
"~]# service NetworkManager start",
"~]# chkconfig NetworkManager on"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/ch-NetworkManager |
Chapter 7. APIcast environment variables | Chapter 7. APIcast environment variables APIcast environment variables allow you to modify behavior for APIcast. The following values are supported environment variables: Note Unsupported or deprecated environment variables are not listed Some environment variable functionality may have moved to APIcast policies all_proxy, ALL_PROXY APICAST_ACCESS_LOG_BUFFER APICAST_ACCESS_LOG_FILE APICAST_BACKEND_CACHE_HANDLER APICAST_CACHE_MAX_TIME APICAST_CACHE_STATUS_CODES APICAST_CONFIGURATION_CACHE APICAST_CONFIGURATION_LOADER APICAST_CUSTOM_CONFIG APICAST_ENVIRONMENT APICAST_EXTENDED_METRICS APICAST_HTTPS_CERTIFICATE APICAST_HTTPS_CERTIFICATE_KEY APICAST_HTTPS_PORT APICAST_HTTPS_PROXY_PROTOCOL APICAST_HTTPS_VERIFY_DEPTH APICAST_HTTP_PROXY_PROTOCOL APICAST_LARGE_CLIENT_HEADER_BUFFERS APICAST_LOAD_SERVICES_WHEN_NEEDED APICAST_LOG_FILE APICAST_LOG_LEVEL APICAST_LUA_SOCKET_KEEPALIVE_REQUESTS APICAST_MANAGEMENT_API APICAST_MODULE APICAST_OIDC_LOG_LEVEL APICAST_PATH_ROUTING APICAST_PATH_ROUTING_ONLY APICAST_POLICY_BATCHER_SHARED_MEMORY_SIZE APICAST_POLICY_LOAD_PATH APICAST_PROXY_HTTPS_CERTIFICATE APICAST_PROXY_HTTPS_CERTIFICATE_KEY APICAST_PROXY_HTTPS_PASSWORD_FILE APICAST_PROXY_HTTPS_SESSION_REUSE APICAST_REPORTING_THREADS APICAST_RESPONSE_CODES APICAST_SERVICE_CACHE_SIZE APICAST_SERVICE_USD{ID}_CONFIGURATION_VERSION APICAST_SERVICES_LIST APICAST_SERVICES_FILTER_BY_URL APICAST_UPSTREAM_RETRY_CASES APICAST_WORKERS BACKEND_ENDPOINT_OVERRIDE HTTP_KEEPALIVE_TIMEOUT http_proxy HTTP_PROXY https_proxy HTTPS_PROXY no_proxy NO_PROXY OPENSSL_VERIFY OPENTRACING_CONFIG OPENTRACING_HEADER_FORWARD OPENTRACING_TRACER RESOLVER THREESCALE_CONFIG_FILE THREESCALE_DEPLOYMENT_ENV THREESCALE_PORTAL_ENDPOINT ALL_PROXY Default : no value Value : string Example : http://forward-proxy:80 Defines a HTTP proxy to be used for connecting to services if a protocol-specific proxy is not specified. Authentication is not supported. APICAST_ACCESS_LOG_BUFFER Default : no value Value : positive integer Allows access log writes to be included in chunks of bytes. The result is fewer system calls, which improves the performance of the gateway. APICAST_ACCESS_LOG_FILE Default : stdout Defines the file that will store the access logs. APICAST_BACKEND_CACHE_HANDLER Default : strict Values : strict | resilient Deprecated : Use the Caching policy instead. Defines how the authorization cache behaves when backend is unavailable. Strict will remove cached application when backend is unavailable. Resilient will do so only on getting authorization denied from backend. APICAST_CACHE_MAX_TIME Default: 1m Value: string When the response is selected to be cached in the system, the value of this variable indicates the maximum time to be cached. If the cache-control header is not set, the time to be cached will be the defined one. The format for this value is defined by the proxy_cache_valid NGINX directive . This parameter is only used by the APIs that are using a content caching policy and the request is eligible to be cached. APICAST_CACHE_STATUS_CODES Default: 200, 302 Value: string When the response code from upstream matches one of the status codes defined in this environment variable, the response content is cached in NGINX. The caching time depends on one of these values: headers cache time value, or the maximum time defined by the APICAST_CACHE_MAX_TIME environment variable. This parameter is only used by the APIs that are using a content caching policy and the request is eligible to be cached. 
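As an illustration of how the caching variables above combine in practice, the following sketch exports them before starting the gateway from the APIcast directory. The values and the bin/apicast invocation path are examples only, not recommended settings:

# Cache eligible responses for up to 5 minutes when upstream sends no cache-control header
export APICAST_CACHE_MAX_TIME=5m
# Only responses with these upstream status codes are stored
export APICAST_CACHE_STATUS_CODES="200, 302"
./bin/apicast

Both variables only take effect for APIs that use a content caching policy and for requests that are eligible to be cached.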
APICAST_CONFIGURATION_CACHE Default : 0 Value : integer Specifies the interval (in seconds) that the configuration will be stored for. The value should be set to 0 (not compatible with boot value of APICAST_CONFIGURATION_LOADER ) or more than 60. For example, if APICAST_CONFIGURATION_CACHE is set to 120, the gateway will reload the configuration from the API manager every 2 minutes (120 seconds). A value < 0 disables reloading. APICAST_CONFIGURATION_LOADER Values : boot | lazy Default : lazy Defines how to load the configuration. Boot will request the configuration to the API manager when the gateway starts. Lazy will load it on demand for each incoming request (to guarantee a complete refresh on each request APICAST_CONFIGURATION_CACHE should be 0). APICAST_CUSTOM_CONFIG Deprecated : Use policies instead. Defines the name of the Lua module that implements custom logic overriding the existing APIcast logic. APICAST_ENVIRONMENT Value : string[:] Example : production:cloud-hosted APIcast should load a list of environments (or paths), separated by colons ( : ). This list can be used instead of -e or --environment parameter on the CLI and for example stored in the container image as default environment. Any value passed on the CLI overrides this variable. APICAST_EXTENDED_METRICS Default : false Value : boolean Example : "true" Enables additional information on Prometheus metrics. The following metrics have the service_id and service_system_name labels which provide more in-depth details about APIcast: total_response_time_seconds upstream_response_time_seconds upstream_status APICAST_HTTPS_CERTIFICATE Default : no value Path to a file with X.509 certificate in the PEM format for HTTPS. APICAST_HTTPS_CERTIFICATE_KEY Default : no value Path to a file with the X.509 certificate secret key in the PEM format. APICAST_HTTPS_PORT Default : no value Controls on which port APIcast should start listening for HTTPS connections. If this clashes with HTTP port it will be used only for HTTPS. APICAST_HTTPS_PROXY_PROTOCOL Default: false Values: boolean Example: true This parameter enables the Proxy Protocol for the HTTPS listener. APICAST_HTTPS_VERIFY_DEPTH Default : 1 Values : positive integers Defines the maximum length of the client certificate chain. If this parameter has 1 as its value, it is possible to include an additional certificate in the client certificate chain. For example, root certificate authority. APICAST_HTTP_PROXY_PROTOCOL Default: false Values: boolean Example: true This parameter enables the proxy protocol for the HTTP listener. APICAST_LARGE_CLIENT_HEADER_BUFFERS Default: 4 8k Value: string Sets the maximum number and size of buffers used for reading large client request header. The format for this value is defined by the large_client_header_buffers NGINX directive . APICAST_LOAD_SERVICES_WHEN_NEEDED Note In 3scale 2.14, the APICAST_LOAD_SERVICES_WHEN_NEEDED environment variable has been removed. It is now active by default. See THREESCALE_PORTAL_ENDPOINT . Default : false Values: true or 1 for true false , 0 or empty for false This option can be used when there are many services configured. However, its performance depends on additional factors, for example, the number of services, the latency between APIcast and the 3scale Admin Portal, and the Time To Live (TTL) of the configuration. By default, APIcast loads all the services each time it downloads its configuration from the Admin Portal. When this option is enabled, the configurations uses lazy loading. 
APIcast will only load the ones configured for the host specified in the host header of the request. Note The caching defined by APICAST_CONFIGURATION_CACHE applies. This option will be disabled when APICAST_CONFIGURATION_LOADER is boot . Not compatible with APICAST_PATH_ROUTING . APICAST_LOG_FILE Default : stderr Defines the file that contains the OpenResty error log. The file is used by bin/apicast in the error_log directive. The file path can be either absolute, or relative to the APIcast prefix directory. Note that the default prefix directory is APIcast. Refer to NGINX documentation for more information. APICAST_LOG_LEVEL Default : warn Values : debug | info | notice | warn | error | crit | alert | emerg Specifies the log level for the OpenResty logs. APICAST_LUA_SOCKET_KEEPALIVE_REQUESTS Value: positive integers Example: "1" Sets the maximum number of requests that one keepalive connection can serve. After reaching the limit, the connection closes. Note This value affects connections opened by APIcast and will not have any impact on requests proxied via APIcast. APICAST_MANAGEMENT_API Values: disabled : completely disabled, just listens on the port. status : enables the /status/ endpoint for health checks, and the /policies endpoint that shows the list of available policies. policies : enables only the /policies endpoint. debug : full API is open. The Management API is powerful and can control the APIcast configuration. You should enable the debug level only for debugging. APICAST_MODULE Default : apicast Deprecated : Use policies instead. Specifies the name of the main Lua module that implements the API gateway logic. Custom modules can override the functionality of the default apicast.lua module. See an example of how to use modules. APICAST_OIDC_LOG_LEVEL Values : debug | info | notice | warn | error | crit | alert | emerg Default : err Allows to set the log level for the logs related to OpenID Connect integration. APICAST_PATH_ROUTING Values : true or 1 for true false , 0 or empty for false When this parameter is set to true , the gateway will use path-based routing in addition to the default host-based routing. The API request will be routed to the first service that has a matching mapping rule, from the list of services for which the value of the Host header of the request matches the Public Base URL . APICAST_PATH_ROUTING_ONLY Values : true or 1 for true false , 0 or empty for false When this parameter is set to true , the gateway uses path-based routing and will not fallback to the default host-based routing. The API request is routed to the first service that has a matching mapping rule, from the list of services for which the value of the Host header of the request matches the Public Base URL . This parameter has precedence over APICAST_PATH_ROUTING . If APICAST_PATH_ROUTING_ONLY is enabled, APIcast will only do path-based routing regardless of the value of APICAST_PATH_ROUTING . APICAST_POLICY_BATCHER_SHARED_MEMORY_SIZE Note Use the minimal value of 20m. Default : 20m Value : string Sets the maximum size of shared memory used by batcher policy. The accepted size units are k and m : http { lua_shared_dict batched_reports 20m; ... } APICAST_POLICY_LOAD_PATH Default : APICAST_DIR/policies Value : string[:] Example : ~/apicast/policies:USDPWD/policies A colon ( : ) separated list of paths where APIcast should look for policies. It can be used to first load policies from a development directory or to load examples. 
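For example, a development setup might point APIcast at a local policies directory and give the batcher policy more shared memory than the 20m minimum. The paths and the 40m size are illustrative assumptions:

# Search a development directory first, then ./policies, for custom policies
export APICAST_POLICY_LOAD_PATH="$HOME/apicast/policies:$PWD/policies"
# Raise the batcher policy shared memory above the 20m minimum
export APICAST_POLICY_BATCHER_SHARED_MEMORY_SIZE=40m
./bin/apicast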
APICAST_PROXY_HTTPS_CERTIFICATE Default : Value : string Example : /home/apicast/my_certificate.crt The path to the client SSL certificate that APIcast will use when connecting with the upstream. Notice that this certificate will be used for all the services in the configuration. APICAST_PROXY_HTTPS_CERTIFICATE_KEY Default : Value : string Example : /home/apicast/my_certificate.key The path to the key of the client SSL certificate. APICAST_PROXY_HTTPS_PASSWORD_FILE Default : Value : string Example : /home/apicast/passwords.txt Path to a file with passphrases for the SSL cert keys specified with APICAST_PROXY_HTTPS_CERTIFICATE_KEY . APICAST_PROXY_HTTPS_SESSION_REUSE Default : on Values : on : reuses SSL sessions. off : does not reuse SSL sessions. APICAST_REPORTING_THREADS Default : 0 Value : integer >= 0 Experimental : Under extreme load might have unpredictable performance and lose reports. Value greater than 0 is going to enable out-of-band reporting to backend. This is a new experimental feature for increasing performance. Client won't see the backend latency and everything will be processed asynchronously. This value determines how many asynchronous reports can be running simultaneously before the client is throttled by adding latency. APICAST_RESPONSE_CODES Default : <empty> ( false ) Values : true or 1 for true false , 0 or empty for false When set to true , APIcast will log the response code of the response returned by the API backend in 3scale. For more information, see Setting up and evaluating the 3scale API Management response codes log for your API . APICAST_SERVICE_CACHE_SIZE Default : 1000 Values : integer >= 0 Specifies the number of services that APIcast can store in the internal cache. A high value has a performance impact because Lua's lru cache will initialize all the entries. APICAST_SERVICE_USD{ID}_CONFIGURATION_VERSION Replace USD{ID} with the actual Service ID. The value should be the configuration version you can see in the configuration history on the Admin Portal. Setting it to a particular version will prevent it from auto-updating and will always use that version. APICAST_SERVICES_LIST Value : a comma-separated list of service IDs The APICAST_SERVICES_LIST environment variable is used to filter the services you configure in the 3scale API Manager. This only applies the configuration for specific services in the gateway, discarding those service identifiers that are not specified in the list. You can find service identifiers for your product in the Admin Portal under Products > [Your_product_name] > Overview , then see Configuration, Methods and Settings and the ID for API calls . APICAST_SERVICES_FILTER_BY_URL Value : a PCRE (Perl Compatible Regular Expression) such as .*.example.com . Filters the services configured in the 3scale API Manager. This filter matches with the Public Base URL , either staging or production. Services that do not match the filter are discarded. If the regular expression cannot be compiled, no services are loaded. Note If a service does not match but is included in the section called " APICAST_SERVICES_LIST " , the service will not be discarded. Example 7.1. A Regexp filter applied to backend endpoints The Regexp filter http://.*.foo.dev is applied to the following backend endpoints: http://staging.foo.dev http://staging.bar.dev http://prod.foo.dev http://prod.bar.dev In this case, 1 and 3 are configured in APIcast and 2 and 4 are discarded. 
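Putting the two service-filtering variables together, the following sketch restricts the gateway to two explicit service IDs plus any service whose Public Base URL matches the regular expression from Example 7.1. The IDs are placeholders:

# Always load these service IDs, even if they do not match the URL filter
export APICAST_SERVICES_LIST="123,456"
# Additionally keep services whose Public Base URL matches this PCRE
export APICAST_SERVICES_FILTER_BY_URL="http://.*.foo.dev"
./bin/apicast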
APICAST_UPSTREAM_RETRY_CASES Values : error | timeout | invalid_header | http_500 | http_502 | http_503 | http_504 | http_403 | http_404 | http_429 | non_idempotent | off Note This is only used when the retry policy is configured and specifies when a request to the upstream API should be retried. It accepts the same values as Nginx's PROXY_NEXT_UPSTREAM Module . APICAST_WORKERS Default : auto Values : integer | auto This is the value that will be used in the nginx worker_processes directive . By default, APIcast uses auto , except for the development environment where 1 is used. BACKEND_ENDPOINT_OVERRIDE URI that overrides backend endpoint from the configuration. Useful when deploying outside OpenShift deployed AMP. Example : https://backend.example.com . HTTP_KEEPALIVE_TIMEOUT Default : 75 Value : positive integers Example : 1 This parameter sets a timeout during which a keep-alive client connection will stay open on the server side. The zero value disables keep-alive client connections. By default, the gateway keeps HTTP_KEEPALIVE_TIMEOUT disabled. This configuration allows the use of the keepalive timeout from NGINX whose default value is 75 seconds . http_proxy , HTTP_PROXY Default : no value Value : string Example : http://forward-proxy:80 Defines a HTTP proxy to be used for connecting to HTTP services. Authentication is not supported. https_proxy , HTTPS_PROXY Default : no value Value : string Example : https://forward-proxy:443 Defines a HTTP proxy to be used for connecting to HTTPS services. Authentication is not supported. no_proxy , NO_PROXY Default : no value Value : string\[,<string>\]; * Example : foo,bar.com,.extra.dot.com Defines a comma-separated list of hostnames and domain names for which the requests should not be proxied. Setting to a single * character, which matches all hosts, effectively disables the proxy. OPENSSL_VERIFY Values : 0 , false : disable peer verification 1 , true : enable peer verification Controls the OpenSSL Peer Verification. It is off by default, because OpenSSL can't use system certificate store. It requires custom certificate bundle and adding it to trusted certificates. It is recommended to use https://github.com/openresty/lua-nginx-module#lua_ssl_trusted_certificate and point to to certificate bundle generated by export-builtin-trusted-certs . OPENTRACING_CONFIG This environment variable is used to determine the config file for the opentracing tracer, if OPENTRACING_TRACER is not set, this variable will be ignored. Each tracer has a default configuration file: * jaeger: conf.d/opentracing/jaeger.example.json You can choose to mount a different configuration than the provided by default by setting the file path using this variable. Example : /tmp/jaeger/jaeger.json OPENTRACING_HEADER_FORWARD Default : uber-trace-id This environment variable controls the HTTP header used for forwarding opentracing information, this HTTP header will be forwarded to upstream servers. OPENTRACING_TRACER Example : jaeger This environment variable controls which tracing library will be loaded, right now, there's only one opentracing tracer available, jaeger . If empty, opentracing support will be disabled. RESOLVER Allows to specify a custom DNS resolver that will be used by OpenResty. If the RESOLVER parameter is empty, the DNS resolver will be autodiscovered. THREESCALE_CONFIG_FILE Path to the JSON file with the configuration for the gateway. You must provide either THREESCALE_PORTAL_ENDPOINT or THREESCALE_CONFIG_FILE for the gateway to run successfully. 
From these two environment variables, THREESCALE_CONFIG_FILE takes precedence To build the file with the configuration for the gateway, you have two alternatives depending on the number of services: Use the available 3scale API endpoints: Either Proxy Config Show , Proxy Config Show Latest , or Proxy Configs List . You must know the ID of the service. Use the following options: Use the Proxy Configs List provider endpoint: <schema>://<admin-portal>/admin/api/account/proxy_configs/<env>.json The endpoint returns all stored proxy configs of the provider, not only the latest of each service. Iterate over the array of proxy_configs returned in JSON. Select the proxy_config.content whose proxy_config.version is the highest among all proxy_configs with the same proxy_config.content.id . The ID is the one of the service. Then, iterate over the services to build a configuration file. For this step, use the available 3scale API endpoints, or the equivalent 3scale toolbox commands . When you deploy the gateway using a container image: Configure the file to the image as a read-only volume. Specify the path that indicates where you have mounted the volume. You can find sample configuration files in examples folder. THREESCALE_DEPLOYMENT_ENV Default : production Values : staging | production The value of this environment variable defines the environment from which the configuration will be downloaded from; this is either 3scale staging or production, when using new APIcast. The value will also be used in the header X-3scale-User-Agent in the authorize/report requests made to 3scale Service Management API. It is used by 3scale solely for statistics. THREESCALE_PORTAL_ENDPOINT URI that includes your password and portal endpoint in the following format: <schema>://<password>@<admin-portal> . where: <password> can be either the provider key or an access token for the 3scale Account Management API. <admin-portal> is the URL address to log into the 3scale Admin Portal. Example : https://[email protected] . When the THREESCALE_PORTAL_ENDPOINT environment variable is provided and APICAST_CONFIGURATION_LOADER = boot , the gateway downloads the configuration from 3scale on initializing. The configuration includes all the settings provided on the integration page of the APIs under [Your_product_name] > Integration . You can also use this environment variable to create a single gateway with the Master Admin Portal . To run the gateway successfully, you must provide either THREESCALE_PORTAL_ENDPOINT or THREESCALE_CONFIG_FILE . Note that THREESCALE_CONFIG_FILE takes precedence over THREESCALE_PORTAL_ENDPOINT . Note In 3scale 2.14, the APICAST_LOAD_SERVICES_WHEN_NEEDED environment variable has been removed. It is now active by default. The configuration is fetched when needed by default. The following are logic specifications: THREESCALE_PORTAL_ENDPOINT does not include a path in the URL. APICAST_CONFIGURATION_LOADER = boot http://USD{THREESCALE_PORTAL_ENDPOINT}/admin/api/account/proxy_configs/<env>.json?version=latest The /admin/api/account/proxy_configs/USD{env} endpoint is paginated. APICAST_CONFIGURATION_LOADER = lazy http://USD{THREESCALE_PORTAL_ENDPOINT}/admin/api/account/proxy_configs/<env>.json?host=<hostname_of_the_request>&version=latest The /admin/api/account/proxy_configs/USD{env} endpoint is paginated. 
THREESCALE_PORTAL_ENDPOINT includes a path in the URL with the master endpoint, that is, /master/api/proxy/configs APICAST_CONFIGURATION_LOADER = boot http://USD{THREESCALE_PORTAL_ENDPOINT}/<env>.json This is for all services from master endpoint, which is the same as the current behavior. APICAST_CONFIGURATION_LOADER = lazy http://USD{THREESCALE_PORTAL_ENDPOINT}/<env>.json?host=<hostname_of_the_request> This is for all services that match host from master endpoint, which is the same as the current behavior. Table 7.1. The path appended to USD{THREESCALE_PORTAL_ENDPOINT}: APICAST_CONFIGURATION_LOADER = boot APICAST_CONFIGURATION_LOADER = lazy endpoint has no path /admin/api/account/proxy_configs/USD{env}.json?version=version /admin/api/account/proxy_configs/USD{env}.json?host=host&version=version endpoint has a path /USD{env}.json /USD{env}.json?host=host | [
"http { lua_shared_dict batched_reports 20m; }",
"http://USD{THREESCALE_PORTAL_ENDPOINT}/admin/api/account/proxy_configs/<env>.json?version=latest",
"http://USD{THREESCALE_PORTAL_ENDPOINT}/admin/api/account/proxy_configs/<env>.json?host=<hostname_of_the_request>&version=latest",
"http://USD{THREESCALE_PORTAL_ENDPOINT}/<env>.json",
"http://USD{THREESCALE_PORTAL_ENDPOINT}/<env>.json?host=<hostname_of_the_request>"
] | https://docs.redhat.com/en/documentation/red_hat_3scale_api_management/2.15/html/administering_the_api_gateway/apicast_environment_variables |
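To tie the preceding APIcast variables together, here is a hedged sketch of the two configuration sources; the portal URL, access token, file path, and image reference are placeholders for your own values:

# Option 1: download the configuration from the Admin Portal once, at boot.
podman run -d --name apicast \
  -e THREESCALE_PORTAL_ENDPOINT=https://<access_token>@<admin-portal> \
  -e APICAST_CONFIGURATION_LOADER=boot \
  -e THREESCALE_DEPLOYMENT_ENV=staging \
  -p 8080:8080 <apicast_gateway_image>

# Option 2: mount a local JSON configuration instead; THREESCALE_CONFIG_FILE takes precedence.
podman run -d --name apicast \
  -v "$PWD/config.json:/opt/config.json:ro" \
  -e THREESCALE_CONFIG_FILE=/opt/config.json \
  -p 8080:8080 <apicast_gateway_image>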
Chapter 8. Patient Portal | Chapter 8. Patient Portal A simple database-backed web application that runs in the public cloud but keeps its data in a private database This example is part of a suite of examples showing the different ways you can use Skupper to connect services across cloud providers, data centers, and edge sites. Overview This example is a simple database-backed web application that shows how you can use Skupper to access a database at a remote site without exposing it to the public internet. It contains three services: A PostgreSQL database running on a bare-metal or virtual machine in a private data center. A payment-processing service running on Kubernetes in a private data center. A web frontend service running on Kubernetes in the public cloud. It uses the PostgreSQL database and the payment-processing service. The example uses two Kubernetes namespaces, private and public , to represent the Kubernetes cluster in the private data center and the cluster in the public cloud. It uses Podman to run the database. Prerequisites The kubectl command-line tool, version 1.15 or later ( installation guide ) Access to at least one Kubernetes cluster, from any provider you choose Procedure Clone the repo for this example. Install the Skupper command-line tool Set up your Kubernetes namespaces Set up your Podman network Deploy the application Create your sites Link your sites Expose application services Access the frontend Clone the repo for this example. Navigate to the appropriate GitHub repository from https://skupper.io/examples/index.html and clone the repository. Install the Skupper command-line tool This example uses the Skupper command-line tool to deploy Skupper. You need to install the skupper command only once for each development environment. See the Installation for details about installing the CLI. For configured systems, use the following command: Set up your Kubernetes namespaces Skupper is designed for use with multiple Kubernetes namespaces, usually on different clusters. The skupper and kubectl commands use your kubeconfig and current context to select the namespace where they operate. Your kubeconfig is stored in a file in your home directory. The skupper and kubectl commands use the KUBECONFIG environment variable to locate it. A single kubeconfig supports only one active context per user. Since you will be using multiple contexts at once in this exercise, you need to create distinct kubeconfigs. For each namespace, open a new terminal window. In each terminal, set the KUBECONFIG environment variable to a different path and log in to your cluster. Then create the namespace you wish to use and set the namespace on your current context. Note The login procedure varies by provider. See the documentation for yours: Amazon Elastic Kubernetes Service (EKS) Azure Kubernetes Service (AKS) Google Kubernetes Engine (GKE) IBM Kubernetes Service OpenShift Public: Private: Set up your Podman network Open a new terminal window and set the SKUPPERPLATFORM environment variable to podman . This sets the Skupper platform to Podman for this terminal session. Use podman network create to create the Podman network that Skupper will use. Use systemctl to enable the Podman API service. Podman: If the systemctl command doesn't work, you can try the podman system service command instead: Deploy the application Use kubectl apply to deploy the frontend and payment processor on Kubernetes. Use podman run to start the database on your local machine. 
Note It is important to name your running container using --name to avoid a collision with the container that Skupper creates for accessing the service. Note You must use --network skupper with the podman run command. Public: Private: Podman: Create your sites Public: Private: Podman: Link your sites Creating a link requires use of two skupper commands in conjunction, skupper token create and skupper link create . The skupper token create command generates a secret token that signifies permission to create a link. The token also carries the link details. Then, in a remote site, The skupper link create command uses the token to create a link to the site that generated it. Note The link token is truly a secret. Anyone who has the token can link to your site. Make sure that only those you trust have access to it. First, use skupper token create in site Public to generate the token. Then, use skupper link create in site Private to link the sites. Public: Private: Podman: If your terminal sessions are on different machines, you may need to use scp or a similar tool to transfer the token securely. By default, tokens expire after a single use or 15 minutes after creation. Expose application services In Private, use skupper expose to expose the payment processor service. In Podman, use skupper service create and skupper service bind to expose the database on the Skupper network. Then, in Public, use skupper service create to make it available. Note Podman sites do not automatically replicate services to remote sites. You need to use skupper service create on each site where you wish to make a service available. Private: Podman: Public: Access the frontend In order to use and test the application, we need external access to the frontend. Use kubectl expose with --type LoadBalancer to open network access to the frontend service. Once the frontend is exposed, use kubectl get service/frontend to look up the external IP of the frontend service. If the external IP is <pending> , try again after a moment. Once you have the external IP, use curl or a similar tool to request the /api/health endpoint at that address. Note The <external-ip> field in the following commands is a placeholder. The actual value is an IP address. Public: Sample output: If everything is in order, you can now access the web interface by navigating to http://<external-ip>:8080/ in your browser. | [
"sudo dnf install skupper-cli",
"export KUBECONFIG=~/.kube/config-public Enter your provider-specific login command create namespace public config set-context --current --namespace public",
"export KUBECONFIG=~/.kube/config-private Enter your provider-specific login command create namespace private config set-context --current --namespace private",
"export SKUPPERPLATFORM=podman network create skupper systemctl --user enable --now podman.socket",
"system service --time=0 unix://USDXDGRUNTIMEDIR/podman/podman.sock &",
"apply -f frontend/kubernetes.yaml",
"apply -f payment-processor/kubernetes.yaml",
"run --name database-target --network skupper --detach --rm -p 5432:5432 quay.io/skupper/patient-portal-database",
"skupper init",
"skupper init --ingress none",
"skupper init --ingress none",
"skupper token create --uses 2 ~/secret.token",
"skupper link create ~/secret.token",
"skupper link create ~/secret.token",
"skupper expose deployment/payment-processor --port 8080",
"skupper service create database 5432 skupper service bind database host database-target --target-port 5432",
"skupper service create database 5432",
"expose deployment/frontend --port 8080 --type LoadBalancer get service/frontend curl http://<external-ip>:8080/api/health",
"kubectl expose deployment/frontend --port 8080 --type LoadBalancer service/frontend exposed kubectl get service/frontend NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE frontend LoadBalancer 10.103.232.28 <external-ip> 8080:30407/TCP 15s curl http://<external-ip>:8080/api/health OK"
] | https://docs.redhat.com/en/documentation/red_hat_service_interconnect/1.8/html/examples/patient_portal |
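A few optional checks can confirm that the Patient Portal pieces above are wired together; the commands are a sketch, and <external-ip> remains a placeholder as in the example output:

# Confirm the database container is running with the expected name on the skupper network.
podman ps --filter name=database-target

# From the Kubernetes terminals, confirm the link and the exposed services.
skupper link status
skupper service status

# Re-check the frontend health endpoint once the LoadBalancer IP is assigned.
curl http://<external-ip>:8080/api/health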
4.12. Fence Virt (Multicast Mode) | 4.12. Fence Virt (Multicast Mode) Table 4.13, "Fence virt (Multicast Mode) " lists the fence device parameters used by fence_xvm , the fence agent for virtual machines using multicast. Table 4.13. Fence virt (Multicast Mode) luci Field cluster.conf Attribute Description Name name A name for the Fence virt fence device. Timeout timeout Fencing timeout, in seconds. The default value is 30. Domain port (formerly domain ) Virtual machine (domain UUID or name) to fence. Delay (optional) delay Fencing delay, in seconds. The fence agent will wait the specified number of seconds before attempting a fencing operation. The default value is 0. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/fence_configuration_guide/s1-software-fence-virt-multicast-CA |
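For illustration, the attributes in the table roughly correspond to options of a manual fence_xvm run; the domain name below is a placeholder, and option spellings should be verified against fence_xvm(8) on your system:

# List domains visible to the fencing host, then fence one guest with a 30-second timeout.
fence_xvm -o list
fence_xvm -o off -H guest1 -t 30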
3.11. Creating a vNIC | 3.11. Creating a vNIC To ensure that a newly created virtual machine has network access you must create and attach a vNIC. This Ruby example creates a vNIC and attaches it to an existing virtual machine, myvm . # Find the root of the tree of services: system_service = connection.system_service # Find the virtual machine: vms_service = system_service.vms_service vm = vms_service.list(search: 'name=myvm').first # In order to specify the network that the new NIC will be connected to, you must # specify the identifier of the vNIC profile. However, there may be multiple # profiles with the same name (for different data centers, for example), so first # you must find the networks that are available in the cluster that the # virtual machine belongs to. cluster = connection.follow_link(vm.cluster) networks = connection.follow_link(cluster.networks) network_ids = networks.map(&:id) # Now that you know what networks are available in the cluster, you can select a # vNIC profile that corresponds to one of those networks, and has the # name that you want to use. The system automatically creates a vNIC # profile for each network, with the same name as the network. profiles_service = system_service.vnic_profiles_service profiles = profiles_service.list profile = profiles.detect { |p| network_ids.include?(p.network.id) && p.name == 'myprofile' } # Locate the service that manages the network interface cards collection of the # virtual machine: nics_service = vms_service.vm_service(vm.id).nics_service # Add the new network interface card: nics_service.add( OvirtSDK4::Nic.new( name: 'mynic', description: 'My network interface card', vnic_profile: { id: profile.id } ) ) For more information, see VmsService:add . | [
"Find the root of the tree of services: system_service = connection.system_service Find the virtual machine: vms_service = system_service.vms_service vm = vms_service.list(search: 'name=myvm').first In order to specify the network that the new NIC will be connected to, you must specify the identifier of the vNIC profile. However, there may be multiple profiles with the same name (for different data centers, for example), so first you must find the networks that are available in the cluster that the virtual machine belongs to. cluster = connection.follow_link(vm.cluster) networks = connection.follow_link(cluster.networks) network_ids = networks.map(&:id) Now that you know what networks are available in the cluster, you can select a vNIC profile that corresponds to one of those networks, and has the name that you want to use. The system automatically creates a vNIC profile for each network, with the same name as the network. profiles_service = system_service.vnic_profiles_service profiles = profiles_service.list profile = profiles.detect { |p| network_ids.include?(p.network.id) && p.name == 'myprofile' } Locate the service that manages the network interface cards collection of the virtual machine: nics_service = vms_service.vm_service(vm.id).nics_service Add the new network interface card: nics_service.add( OvirtSDK4::Nic.new( name: 'mynic', description: 'My network interface card', vnic_profile: { id: profile.id } ) )"
] | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/ruby_sdk_guide/creating_a_vnic |
Chapter 8. Host management and monitoring using the RHEL web console | Chapter 8. Host management and monitoring using the RHEL web console You can use the RHEL web console interactive web interface to perform actions and monitor Red Hat Enterprise Linux hosts. You can enable a remote-execution feature to integrate Satellite with the RHEL web console. When you install the RHEL web console on a host that you manage with Satellite, you can view the RHEL web console dashboards of that host from within the Satellite web UI. You can also use the features that are integrated with the RHEL web console, for example, Red Hat Image Builder. 8.1. Enabling the RHEL web console on Satellite By default, RHEL web console integration is disabled in Satellite. If you want to access RHEL web console features for your hosts from within Satellite, you must first enable the RHEL web console on Satellite Server. Procedure Enable the RHEL web console on your Satellite Server: 8.2. Managing and monitoring hosts using the RHEL web console You can access the RHEL web console UI through the Satellite web UI and use the functionality to manage and monitor hosts in Satellite. Prerequisites You have enabled the RHEL web console in Satellite. You have installed the RHEL web console on the host that you want to view: For Red Hat Enterprise Linux 8, see Installing the web console in the Managing systems using the RHEL 8 web console guide. For Red Hat Enterprise Linux 7, see Installing the web console in the Managing systems using the RHEL 7 web console guide. Satellite or Capsule can authenticate to the host with SSH keys. For more information, see Section 12.14, "Distributing SSH keys for remote execution" . Procedure In the Satellite web UI, navigate to Hosts > All Hosts and select the host that you want to manage and monitor with the RHEL web console. In the upper right of the host window, click the vertical ellipsis and select Web Console . You can now access the full range of features available for host monitoring and management, for example, Red Hat Image Builder, through the RHEL web console. For more information about getting started with Red Hat web console, see the Managing systems using the RHEL 8 web console guide or the Managing systems using the RHEL 7 web console guide. For more information about using Red Hat Image Builder through the RHEL web console, see Accessing Image Builder GUI in the RHEL 8 web console or Accessing Image Builder GUI in the RHEL 7 web console . 8.3. Disabling the RHEL web console on Satellite Perform the following procedure if you want to disable the RHEL web console on Satellite. Procedure Disable the RHEL web console on your Satellite Server: Important You can enable or disable RHEL web console integration independently on Capsule Servers. To prevent enabling RHEL web console integration on a Capsule Server, enter the following command after completing the procedure: | [
"satellite-installer --enable-foreman-plugin-remote-execution-cockpit --reset-foreman-plugin-remote-execution-cockpit-ensure",
"satellite-installer --foreman-plugin-remote-execution-cockpit-ensure absent",
"satellite-installer --foreman-proxy-plugin-remote-execution-script-cockpit-integration false"
] | https://docs.redhat.com/en/documentation/red_hat_satellite/6.15/html/managing_hosts/host_management_and_monitoring_using_cockpit_managing-hosts?extIdCarryOver=true&intcmp=701f20000012ngPAAQ&sc_cid=701f2000001Css5AAC |
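The SSH key prerequisite mentioned above is commonly met by copying the Capsule's remote execution public key to each host; the key path below is the usual default and the hostname is a placeholder, so adjust both for your environment:

# Run on the Satellite Server or Capsule Server that will execute jobs on the host.
ssh-copy-id -i ~foreman-proxy/.ssh/id_rsa_foreman_proxy.pub root@host.example.com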
Chapter 8. Technology Previews | Chapter 8. Technology Previews This part provides a list of all Technology Previews available in Red Hat Enterprise Linux 8.6. For information on Red Hat scope of support for Technology Preview features, see Technology Preview Features Support Scope . 8.1. RHEL for Edge FDO process available as a Technology Preview The FDO process for automatic provisioning and onboarding RHEL for Edge images is available as a Technology Preview. With that, you can build a RHEL for Edge Simplified Installer image, provision it to a RHEL for Edge image, and use the FDO (FIDO device onboarding) process to automatically provision and onboard your Edge devices, exchange data with other devices and systems connected on the networks. As a result, the FIDO device onboarding protocol performs device initialization at the manufacturing stage and then late binding to actually use the device. (BZ#1989930) 8.2. Shells and command-line tools ReaR available on the 64-bit IBM Z architecture as a Technology Preview Basic Relax and Recover (ReaR) functionality is now available on the 64-bit IBM Z architecture as a Technology Preview. You can create a ReaR rescue image on IBM Z only in the z/VM environment. Backing up and recovering logical partitions (LPARs) has not been tested. The only output method currently available is Initial Program Load (IPL). IPL produces a kernel and an initial ramdisk (initrd) that can be used with the zIPL bootloader. Warning Currently, the rescue process reformats all the DASDs (Direct Attached Storage Devices) connected to the system. Do not attempt a system recovery if there is any valuable data present on the system storage devices. This also includes the device prepared with the zIPL bootloader, ReaR kernel, and initrd that were used to boot into the rescue environment. Ensure to keep a copy. For more information, see Using a ReaR rescue image on the 64-bit IBM Z architecture . (BZ#1868421) 8.3. Networking KTLS available as a Technology Preview RHEL provides Kernel Transport Layer Security (KTLS) as a Technology Preview. KTLS handles TLS records using the symmetric encryption or decryption algorithms in the kernel for the AES-GCM cipher. KTLS also includes the interface for offloading TLS record encryption to Network Interface Controllers (NICs) that provides this functionality. (BZ#1570255) AF_XDP available as a Technology Preview Address Family eXpress Data Path ( AF_XDP ) socket is designed for high-performance packet processing. It accompanies XDP and grants efficient redirection of programmatically selected packets to user space applications for further processing. (BZ#1633143) XDP features that are available as Technology Preview Red Hat provides the usage of the following eXpress Data Path (XDP) features as unsupported Technology Preview: Loading XDP programs on architectures other than AMD and Intel 64-bit. Note that the libxdp library is not available for architectures other than AMD and Intel 64-bit. The XDP hardware offloading. ( BZ#1889737 ) Multi-protocol Label Switching for TC available as a Technology Preview The Multi-protocol Label Switching (MPLS) is an in-kernel data-forwarding mechanism to route traffic flow across enterprise networks. In an MPLS network, the router that receives packets decides the further route of the packets based on the labels attached to the packet. With the usage of labels, the MPLS network has the ability to handle packets with particular characteristics. 
For example, you can add tc filters for managing packets received from specific ports or carrying specific types of traffic, in a consistent way. After packets enter the enterprise network, MPLS routers perform multiple operations on the packets, such as push to add a label, swap to update a label, and pop to remove a label. MPLS allows defining actions locally based on one or multiple labels in RHEL. You can configure routers and set traffic control ( tc ) filters to take appropriate actions on the packets based on the MPLS label stack entry ( lse ) elements, such as label , traffic class , bottom of stack , and time to live . For example, the following command adds a filter to the enp0s1 network interface to match incoming packets having the first label 12323 and the second label 45832 . On matching packets, the following actions are taken: the first MPLS TTL is decremented (packet is dropped if TTL reaches 0) the first MPLS label is changed to 549386 the resulting packet is transmitted over enp0s2 , with destination MAC address 00:00:5E:00:53:01 and source MAC address 00:00:5E:00:53:02 (BZ#1814836, BZ#1856415 ) The systemd-resolved service is now available as a Technology Preview The systemd-resolved service provides name resolution to local applications. The service implements a caching and validating DNS stub resolver, an Link-Local Multicast Name Resolution (LLMNR), and Multicast DNS resolver and responder. Note that, even if the systemd package provides systemd-resolved , this service is an unsupported Technology Preview. ( BZ#1906489 ) 8.4. Kernel The kexec fast reboot feature is available as a Technology Preview The kexec fast reboot feature continues to be available as a Technology Preview. The kexec fast reboot significantly speeds the boot process as the kernel enables booting directly into the second kernel without passing through the Basic Input/Output System (BIOS) first. To use this feature: Load the kexec kernel manually. Reboot the operating system. ( BZ#1769727 ) The accel-config package available as a Technology Preview The accel-config package is now available on Intel EM64T and AMD64 architectures as a Technology Preview. This package helps in controlling and configuring data-streaming accelerator (DSA) sub-system in the Linux Kernel. Also, it configures devices through sysfs (pseudo-filesystem), saves and loads the configuration in the json format. (BZ#1843266) SGX available as a Technology Preview Software Guard Extensions (SGX) is an Intel(R) technology for protecting software code and data from disclosure and modification. The RHEL kernel partially provides the SGX v1 and v1.5 functionality. The version 1 enables platforms using the Flexible Launch Control mechanism to use the SGX technology. (BZ#1660337) eBPF available as a Technology Preview Extended Berkeley Packet Filter (eBPF) is an in-kernel virtual machine that allows code execution in the kernel space, in the restricted sandbox environment with access to a limited set of functions. The virtual machine includes a new system call bpf() , which enables creating various types of maps, and also allows to load programs in a special assembly-like code. The code is then loaded to the kernel and translated to the native machine code with just-in-time compilation. Note that the bpf() syscall can be successfully used only by a user with the CAP_SYS_ADMIN capability, such as the root user. See the bpf(2) manual page for more information. 
The loaded programs can be attached onto a variety of points (sockets, tracepoints, packet reception) to receive and process data. There are numerous components shipped by Red Hat that utilize the eBPF virtual machine. Each component is in a different development phase. All components are available as a Technology Preview, unless a specific component is indicated as supported. The following notable eBPF components are currently available as a Technology Preview: AF_XDP , a socket for connecting the eXpress Data Path (XDP) path to user space for applications that prioritize packet processing performance. (BZ#1559616) The Intel data streaming accelerator driver for kernel is available as a Technology Preview The Intel data streaming accelerator driver (IDXD) for the kernel is currently available as a Technology Preview. It is an Intel CPU integrated accelerator and includes a shared work queue with process address space ID (pasid) submission and shared virtual memory (SVM). (BZ#1837187) Soft-RoCE available as a Technology Preview Remote Direct Memory Access (RDMA) over Converged Ethernet (RoCE) is a network protocol that implements RDMA over Ethernet. Soft-RoCE is the software implementation of RoCE which maintains two protocol versions, RoCE v1 and RoCE v2. The Soft-RoCE driver, rdma_rxe , is available as an unsupported Technology Preview in RHEL 8. (BZ#1605216) The stmmac driver is available as a Technology Preview Red Hat provides the usage of stmmac for Intel(R) Elkhart Lake systems on a chip (SoCs) as an unsupported Technology Preview. (BZ#1905243) 8.5. File systems and storage File system DAX is now available for ext4 and XFS as a Technology Preview In Red Hat Enterprise Linux 8, the file system DAX is available as a Technology Preview. DAX provides a means for an application to directly map persistent memory into its address space. To use DAX, a system must have some form of persistent memory available, usually in the form of one or more Non-Volatile Dual In-line Memory Modules (NVDIMMs), and a file system that provides the capability of DAX must be created on the NVDIMM(s). Also, the file system must be mounted with the dax mount option. Then, a mmap of a file on the dax-mounted file system results in a direct mapping of storage into the application's address space. (BZ#1627455) OverlayFS OverlayFS is a type of union file system. It enables you to overlay one file system on top of another. Changes are recorded in the upper file system, while the lower file system remains unmodified. This allows multiple users to share a file-system image, such as a container or a DVD-ROM, where the base image is on read-only media. OverlayFS remains a Technology Preview under most circumstances. As such, the kernel logs warnings when this technology is activated. Full support is available for OverlayFS when used with supported container engines ( podman , cri-o , or buildah ) under the following restrictions: OverlayFS is supported for use only as a container engine graph driver or other specialized use cases, such as squashed kdump initramfs. Its use is supported primarily for container COW content, not for persistent storage. You must place any persistent storage on non-OverlayFS volumes. You can use only the default container engine configuration: one level of overlay, one lowerdir, and both lower and upper levels are on the same file system. Only XFS is currently supported for use as a lower layer file system. 
Additionally, the following rules and limitations apply to using OverlayFS: The OverlayFS kernel ABI and user-space behavior are not considered stable, and might change in future updates. OverlayFS provides a restricted set of the POSIX standards. Test your application thoroughly before deploying it with OverlayFS. The following cases are not POSIX-compliant: Lower files opened with O_RDONLY do not receive st_atime updates when the files are read. Lower files opened with O_RDONLY , then mapped with MAP_SHARED are inconsistent with subsequent modification. Fully compliant st_ino or d_ino values are not enabled by default on RHEL 8, but you can enable full POSIX compliance for them with a module option or mount option. To get consistent inode numbering, use the xino=on mount option. You can also use the redirect_dir=on and index=on options to improve POSIX compliance. These two options make the format of the upper layer incompatible with an overlay without these options. That is, you might get unexpected results or errors if you create an overlay with redirect_dir=on or index=on , unmount the overlay, then mount the overlay without these options. To determine whether an existing XFS file system is eligible for use as an overlay, use the following command and see if the ftype=1 option is enabled: SELinux security labels are enabled by default in all supported container engines with OverlayFS. Several known issues are associated with OverlayFS in this release. For details, see Non-standard behavior in the Linux kernel documentation . For more information about OverlayFS, see the Linux kernel documentation . (BZ#1690207) Stratis is now available as a Technology Preview Stratis is a new local storage manager. It provides managed file systems on top of pools of storage with additional features to the user. Stratis enables you to more easily perform storage tasks such as: Manage snapshots and thin provisioning Automatically grow file system sizes as needed Maintain file systems To administer Stratis storage, use the stratis utility, which communicates with the stratisd background service. Stratis is provided as a Technology Preview. For more information, see the Stratis documentation: Setting up Stratis file systems . RHEL 8.3 updated Stratis to version 2.1.0. For more information, see Stratis 2.1.0 Release Notes . (JIRA:RHELPLAN-1212) Setting up a Samba server on an IdM domain member is provided as a Technology Preview With this update, you can now set up a Samba server on an Identity Management (IdM) domain member. The new ipa-client-samba utility provided by the same-named package adds a Samba-specific Kerberos service principal to IdM and prepares the IdM client. For example, the utility creates the /etc/samba/smb.conf with the ID mapping configuration for the sss ID mapping back end. As a result, administrators can now set up Samba on an IdM domain member. Due to IdM Trust Controllers not supporting the Global Catalog Service, AD-enrolled Windows hosts cannot find IdM users and groups in Windows. Additionally, IdM Trust Controllers do not support resolving IdM groups using the Distributed Computing Environment / Remote Procedure Calls (DCE/RPC) protocols. As a consequence, AD users can only access the Samba shares and printers from IdM clients. For details, see Setting up Samba on an IdM domain member . 
(JIRA:RHELPLAN-13195) NVMe/TCP host is available as a Technology Preview Accessing and sharing Nonvolatile Memory Express (NVMe) storage over TCP/IP networks (NVMe/TCP) and its corresponding nvme_tcp.ko kernel module has been added as a Technology Preview. The use of NVMe/TCP as a host is manageable with tools provided by the nvme-cli package. The NVMe/TCP host Technology Preview is included only for testing purposes and is not currently planned for full support. (BZ#1696451) 8.6. High availability and clusters Pacemaker podman bundles available as a Technology Preview Pacemaker container bundles now run on Podman, with the container bundle feature being available as a Technology Preview. There is one exception to this feature being Technology Preview: Red Hat fully supports the use of Pacemaker bundles for Red Hat Openstack. (BZ#1619620) Heuristics in corosync-qdevice available as a Technology Preview Heuristics are a set of commands executed locally on startup, cluster membership change, successful connect to corosync-qnetd , and, optionally, on a periodic basis. When all commands finish successfully on time (their return error code is zero), heuristics have passed; otherwise, they have failed. The heuristics result is sent to corosync-qnetd where it is used in calculations to determine which partition should be quorate. ( BZ#1784200 ) New fence-agents-heuristics-ping fence agent As a Technology Preview, Pacemaker now provides the fence_heuristics_ping agent. This agent aims to open a class of experimental fence agents that do no actual fencing by themselves but instead exploit the behavior of fencing levels in a new way. If the heuristics agent is configured on the same fencing level as the fence agent that does the actual fencing but is configured before that agent in sequence, fencing issues an off action on the heuristics agent before it attempts to do so on the agent that does the fencing. If the heuristics agent gives a negative result for the off action it is already clear that the fencing level is not going to succeed, causing Pacemaker fencing to skip the step of issuing the off action on the agent that does the fencing. A heuristics agent can exploit this behavior to prevent the agent that does the actual fencing from fencing a node under certain conditions. A user might want to use this agent, especially in a two-node cluster, when it would not make sense for a node to fence the peer if it can know beforehand that it would not be able to take over the services properly. For example, it might not make sense for a node to take over services if it has problems reaching the networking uplink, making the services unreachable to clients, a situation which a ping to a router might detect in that case. (BZ#1775847) Automatic removal of location constraint following resource move available as a Technology Preview When you execute the pcs resource move command, this adds a constraint to the resource to prevent it from running on the node on which it is currently running. A new --autodelete option for the pcs resource move command is now available as a Technology Preview. When you specify this option, the location constraint that the command creates is automatically removed once the resource has been moved. (BZ#1847102) 8.7. Identity Management Identity Management JSON-RPC API available as Technology Preview An API is available for Identity Management (IdM). To view the API, IdM also provides an API browser as a Technology Preview. 
Previously, the IdM API was enhanced to enable multiple versions of API commands. These enhancements could change the behavior of a command in an incompatible way. Users are now able to continue using existing tools and scripts even if the IdM API changes. This enables: Administrators to use or later versions of IdM on the server than on the managing client. Developers can use a specific version of an IdM call, even if the IdM version changes on the server. In all cases, the communication with the server is possible, regardless if one side uses, for example, a newer version that introduces new options for a feature. For details on using the API, see Using the Identity Management API to Communicate with the IdM Server (TECHNOLOGY PREVIEW) . ( BZ#1664719 ) DNSSEC available as Technology Preview in IdM Identity Management (IdM) servers with integrated DNS now implement DNS Security Extensions (DNSSEC), a set of extensions to DNS that enhance security of the DNS protocol. DNS zones hosted on IdM servers can be automatically signed using DNSSEC. The cryptographic keys are automatically generated and rotated. Users who decide to secure their DNS zones with DNSSEC are advised to read and follow these documents: DNSSEC Operational Practices, Version 2 Secure Domain Name System (DNS) Deployment Guide DNSSEC Key Rollover Timing Considerations Note that IdM servers with integrated DNS use DNSSEC to validate DNS answers obtained from other DNS servers. This might affect the availability of DNS zones that are not configured in accordance with recommended naming practices. ( BZ#1664718 ) ACME available as a Technology Preview The Automated Certificate Management Environment (ACME) service is now available in Identity Management (IdM) as a Technology Preview. ACME is a protocol for automated identifier validation and certificate issuance. Its goal is to improve security by reducing certificate lifetimes and avoiding manual processes from certificate lifecycle management. In RHEL, the ACME service uses the Red Hat Certificate System (RHCS) PKI ACME responder. The RHCS ACME subsystem is automatically deployed on every certificate authority (CA) server in the IdM deployment, but it does not service requests until the administrator enables it. RHCS uses the acmeIPAServerCert profile when issuing ACME certificates. The validity period of issued certificates is 90 days. Enabling or disabling the ACME service affects the entire IdM deployment. Important It is recommended to enable ACME only in an IdM deployment where all servers are running RHEL 8.4 or later. Earlier RHEL versions do not include the ACME service, which can cause problems in mixed-version deployments. For example, a CA server without ACME can cause client connections to fail, because it uses a different DNS Subject Alternative Name (SAN). Warning Currently, RHCS does not remove expired certificates. Because ACME certificates expire after 90 days, the expired certificates can accumulate and this can affect performance. To enable ACME across the whole IdM deployment, use the ipa-acme-manage enable command: To disable ACME across the whole IdM deployment, use the ipa-acme-manage disable command: To check whether the ACME service is installed and if it is enabled or disabled, use the ipa-acme-manage status command: (BZ#1628987) 8.8. Desktop GNOME for the 64-bit ARM architecture available as a Technology Preview The GNOME desktop environment is now available for the 64-bit ARM architecture as a Technology Preview. 
This enables administrators to configure and manage servers from a graphical user interface (GUI) remotely, using the VNC session. As a consequence, new administration applications are available on the 64-bit ARM architecture. For example: Disk Usage Analyzer ( baobab ), Firewall Configuration ( firewall-config ), Red Hat Subscription Manager ( subscription-manager ), or the Firefox web browser. Using Firefox , administrators can connect to the local Cockpit daemon remotely. (JIRA:RHELPLAN-27394, BZ#1667225, BZ#1667516, BZ#1724302 ) GNOME desktop on IBM Z is available as a Technology Preview The GNOME desktop, including the Firefox web browser, is now available as a Technology Preview on the IBM Z architecture. You can now connect to a remote graphical session running GNOME using VNC to configure and manage your IBM Z servers. (JIRA:RHELPLAN-27737) 8.9. Graphics infrastructures VNC remote console available as a Technology Preview for the 64-bit ARM architecture On the 64-bit ARM architecture, the Virtual Network Computing (VNC) remote console is available as a Technology Preview. Note that the rest of the graphics stack is currently unverified for the 64-bit ARM architecture. (BZ#1698565) 8.10. The web console Stratis available as a Technology Preview in the RHEL web console With this update, the Red Hat Enterprise Linux web console provides the ability to manage Stratis storage as a Technology Preview. To learn more about Stratis, see What is Stratis . (JIRA:RHELPLAN-108438) 8.11. Virtualization AMD SEV and SEV-ES for KVM virtual machines As a Technology Preview, RHEL 8 provides the Secure Encrypted Virtualization (SEV) feature for AMD EPYC host machines that use the KVM hypervisor. If enabled on a virtual machine (VM), SEV encrypts the VM's memory to protect the VM from access by the host. This increases the security of the VM. In addition, the enhanced Encrypted State version of SEV (SEV-ES) is also provided as Technology Preview. SEV-ES encrypts all CPU register contents when a VM stops running. This prevents the host from modifying the VM's CPU registers or reading any information from them. Note that SEV and SEV-ES work only on the 2nd generation of AMD EPYC CPUs (codenamed Rome) or later. Also note that RHEL 8 includes SEV and SEV-ES encryption, but not the SEV and SEV-ES security attestation. (BZ#1501618, BZ#1501607, JIRA:RHELPLAN-7677) Intel vGPU As a Technology Preview, it is now possible to divide a physical Intel GPU device into multiple virtual devices referred to as mediated devices . These mediated devices can then be assigned to multiple virtual machines (VMs) as virtual GPUs. As a result, these VMs share the performance of a single physical Intel GPU. Note that only selected Intel GPUs are compatible with the vGPU feature. In addition, it is possible to enable a VNC console operated by Intel vGPU. By enabling it, users can connect to a VNC console of the VM and see the VM's desktop hosted by Intel vGPU. However, this currently only works for RHEL guest operating systems. (BZ#1528684) Creating nested virtual machines Nested KVM virtualization is provided as a Technology Preview for KVM virtual machines (VMs) running on Intel, AMD64, IBM POWER, and IBM Z systems hosts with RHEL 8. With this feature, a RHEL 7 or RHEL 8 VM that runs on a physical RHEL 8 host can act as a hypervisor, and host its own VMs. 
(JIRA:RHELPLAN-14047, JIRA:RHELPLAN-24437) Technology Preview: Select Intel network adapters now provide SR-IOV in RHEL guests on Hyper-V As a Technology Preview, Red Hat Enterprise Linux guest operating systems running on a Hyper-V hypervisor can now use the single-root I/O virtualization (SR-IOV) feature for Intel network adapters that are supported by the ixgbevf and iavf drivers. This feature is enabled when the following conditions are met: SR-IOV support is enabled for the network interface controller (NIC) SR-IOV support is enabled for the virtual NIC SR-IOV support is enabled for the virtual switch The virtual function (VF) from the NIC is attached to the virtual machine The feature is currently provided with Microsoft Windows Server 2016 and later. (BZ#1348508) Sharing files between hosts and VMs using virtiofs As a Technology Preview, RHEL 8 now provides the virtio file system ( virtiofs ). Using virtiofs , you can efficiently share files between your host system and its virtual machines (VM). (BZ#1741615) KVM virtualization is usable in RHEL 8 Hyper-V virtual machines As a Technology Preview, nested KVM virtualization can now be used on the Microsoft Hyper-V hypervisor. As a result, you can create virtual machines on a RHEL 8 guest system running on a Hyper-V host. Note that currently, this feature only works on Intel and AMD systems. In addition, nested virtualization is in some cases not enabled by default on Hyper-V. To enable it, see the following Microsoft documentation: https://docs.microsoft.com/en-us/virtualization/hyper-v-on-windows/user-guide/nested-virtualization (BZ#1519039) 8.12. Containers Toolbox is available as a Technology Preview Previously, the Toolbox utility was based on RHEL CoreOS github.com/coreos/toolbox. With this release, Toolbox has been replaced with github.com/containers/toolbox. (JIRA:RHELPLAN-77238) The Netavark network stack is available as a Technology Preview Before Podman version 4.1.1-7, the Netavark network stack for containers is available as a Technology Preview. This network stack has the following capabilities: Configuration of container networks using the JSON configuration file Creating, managing, and removing network interfaces, including bridge and MACVLAN interfaces Configuring firewall settings, such as network address translation (NAT) and port mapping rules IPv4 and IPv6 Improved capability for containers in multiple networks Container DNS resolution using the aardvark-dns project Note You have to use the same version of Netavark stack and the aardvark-dns authoritative DNS server. (JIRA:RHELPLAN-137622) The podman-machine command is unsupported The podman-machine command for managing virtual machines, is available only as a Technology Preview. Instead, run Podman directly from the command line. (JIRA:RHELDOCS-16861) | [
"tc filter add dev enp0s1 ingress protocol mpls_uc flower mpls lse depth 1 label 12323 lse depth 2 label 45832 action mpls dec_ttl pipe action mpls modify label 549386 pipe action pedit ex munge eth dst set 00:00:5E:00:53:01 pipe action pedit ex munge eth src set 00:00:5E:00:53:02 pipe action mirred egress redirect dev enp0s2",
"xfs_info /mount-point | grep ftype",
"ipa-acme-manage enable The ipa-acme-manage command was successful",
"ipa-acme-manage disable The ipa-acme-manage command was successful",
"ipa-acme-manage status ACME is enabled The ipa-acme-manage command was successful"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/8.6_release_notes/technology_previews |
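As a sketch of the two kexec fast reboot steps listed above (load a kernel manually, then reboot into it), assuming the currently running kernel and the default file locations under /boot:

# Load the running kernel and its initramfs, reusing the current kernel command line.
kexec -l /boot/vmlinuz-$(uname -r) --initrd=/boot/initramfs-$(uname -r).img --reuse-cmdline

# Reboot directly into the loaded kernel, bypassing the firmware and boot loader.
systemctl kexec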
B.3. How to Set Up Red Hat Virtualization Manager to Use FCoE | B.3. How to Set Up Red Hat Virtualization Manager to Use FCoE You can configure Fibre Channel over Ethernet (FCoE) properties for host network interface cards from the Administration Portal. The fcoe key is not available by default and needs to be added to the Manager using the engine configuration tool. You can check whether fcoe has already been enabled by running the following command: # engine-config -g UserDefinedNetworkCustomProperties You also need to install the required VDSM hook package on the hosts. Depending on the FCoE card on the hosts, special configuration may also be needed; see Configuring Fibre Channel over Ethernet in Red Hat Enterprise Linux Managing storage devices . Procedure On the Manager, run the following command to add the key: # engine-config -s UserDefinedNetworkCustomProperties='fcoe=^((enable|dcb|auto_vlan)=(yes|no),?)*USD' Restart the ovirt-engine service: # systemctl restart ovirt-engine.service Install the VDSM hook package on each of the Red Hat Enterprise Linux hosts on which you want to configure FCoE properties. The package is available by default on Red Hat Virtualization Host (RHVH). # dnf install vdsm-hook-fcoe The fcoe key is now available in the Administration Portal. See Editing Host Network Interfaces and Assigning Logical Networks to Hosts to apply FCoE properties to logical networks. | [
"engine-config -g UserDefinedNetworkCustomProperties",
"engine-config -s UserDefinedNetworkCustomProperties='fcoe=^((enable|dcb|auto_vlan)=(yes|no),?)*USD'",
"systemctl restart ovirt-engine.service",
"dnf install vdsm-hook-fcoe"
] | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/administration_guide/how_to_set_up_rhvm_to_use_fcoe |
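Because the vdsm-hook-fcoe package must be present on every Red Hat Enterprise Linux host that will carry FCoE logical networks, a short loop such as the following can save repetition; the hostnames are placeholders, and RHVH hosts already include the package:

# Install and verify the hook on each host (adjust the host list for your environment).
for host in host1.example.com host2.example.com; do
    ssh root@"$host" 'dnf install -y vdsm-hook-fcoe && rpm -q vdsm-hook-fcoe'
done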
Preface | Preface This document describes the different features and utilities that make Red Hat Enterprise Linux 7 an ideal enterprise platform for application development. Red Hat Enterprise Linux 7 will cease to be updated after the release of RHEL 7.9. Check the Product Life Cycle of Red Hat Software Collections for Red Hat Enterprise Linux 7 for information on the current status of RHEL 7 products. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/developer_guide/preface |
4.319. tcsh | 4.319. tcsh 4.319.1. RHBA-2011:1686 - tcsh bug fix update An updated tcsh package that fixes various bugs is now available for Red Hat Enterprise Linux 6. Tcsh is an enhanced and compatible version of the C shell (csh). It is a command language interpreter, which can be used as an interactive login shell, as well as a shell script command processor. Bug Fixes BZ# 700309 Under certain circumstances when using the cwd symbolic link, a null pointer may have been incorrectly dereferenced, causing the tcsh shell to terminate unexpectedly. With this update, the pointer is now checked properly and tcsh no longer crashes. BZ# 658190 This package fixes the return value of the "status" (or "USD?") variable in the case of pipelines and backquoted commands. The "anyerror" variable, which selects the behavior, has been added to retain backward compatibility. BZ# 690356 If the tcsh shell redirected standard output to a child process using a pipe and this child process terminated, the shell tried to print a message to the already-closed pipe as a high-priority event that could never been finished. As a consequence, tcsh entered an infinite loop and consumed up to 100% of the CPU. To fix this issue, this error event is now removed from the event queue before the shell tries to write it to the broken pipe. As result, the parent process terminates ordinarily. BZ# 684063 Prior to this update, the blkend() function was called redundantly in the tsch code. These calls had no effect on tcsh functionality and thus they have been removed. Tcsh functionality remains unchanged and the code is now more effective. BZ# 672592 The tcsh shell allowed variables to named in incorrect formats, such as by allowing a variable name to begin with a digit. This issue has been fixed: variable names are now verified according to Unix variable-naming conventions. All users of tcsh are advised to upgrade to this updated package, which resolves these issues. 4.319.2. RHBA-2012:0687 - tcsh bug fix update Updated tcsh packages that fix two bugs are now available for Red Hat Enterprise Linux 6. The tcsh packages contain an enhanced and compatible version of the C shell (csh), which is a command language interpreter, and can be used as an interactive login shell, as well as a shell script command processor. Bug Fixes BZ# 791232 When using multiple shells simultaneously, the command history is saved from all shells in one ".history" file. Previously, when running multiple csh scripts at the same time, lines in the ".history" file could be reordered with different timestamps or some of the entries could disappear, rendering the ".history" file corrupted. As a consequence, start-up scripts could be slowed down. This update implements file locking mechanism that uses shared readers and an exclusive writer to prevent the ".history" file from being corrupted. BZ# 798652 Previously, the "anyerror" variable could be selected to set the tcsh exit value behavior. However, "anyerror" is a shell variable and therefore was not propagated to subshells, and could not globally affect csh scripts. This update modifies the tcsh exit value behavior to the default behavior of csh: the new "tcsh_posix_status" variable is now available instead of "anyerror" to allow behavior similar to the POSIX standard. All users of tcsh are advised to upgrade to these updated packages, which fix these bugs. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/tcsh |
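A minimal tcsh illustration of the new variable described above, assuming the updated package is installed; the pipeline is only an example:

# In a tcsh session or script:
set tcsh_posix_status
false | true
echo $status
# With tcsh_posix_status set, the pipeline's status is that of its last command (0 here),
# matching the POSIX rule; leave it unset to keep the traditional csh behavior.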
Chapter 6. Registering Clients | Chapter 6. Registering Clients You can register a client running a Red Hat Enterprise Linux 6, 7, or 8 operating system to Capsule Servers that you configure for load balancing. For more information about registering clients and configuring them to use Puppet, see Registering Hosts in the Managing Hosts guide. To register clients, proceed to one of the following procedures: Section 6.1, "Red Hat Satellite host registration" Section 6.2, " (Deprecated) Registering Clients Using the Bootstrap Script" Section 6.3, " (Deprecated) Registering Clients Manually with katello-ca-consumer rpm" 6.1. Red Hat Satellite host registration You can register hosts with Satellite using the host registration feature, the Satellite API, or Hammer CLI. Procedure In the Satellite web UI, navigate to Hosts > Register Host . Click Generate to create the registration command. Click on the files icon to copy the command to your clipboard. Log in to the host you want to register and run the previously generated command. Update the subscription manager configuration for rhsm.baseurl and server.hostname : Check the /etc/yum.repos.d/redhat.repo file and ensure that the appropriate repositories have been enabled. CLI procedure Generate the host registration command using the Hammer CLI: If your hosts do not trust the SSL certificate of Satellite Server, you can disable SSL validation by adding the --insecure flag to the registration command. Log in to the host you want to register and run the previously generated command. Update the subscription manager configuration for rhsm.baseurl and server.hostname : Check the /etc/yum.repos.d/redhat.repo file and ensure that the appropriate repositories have been enabled. API procedure Generate the host registration command using the Satellite API: If your hosts do not trust the SSL certificate of Satellite Server, you can disable SSL validation by adding the --insecure flag to the registration command. Use an activation key to simplify specifying the environments. For more information, see Managing Activation Keys in the Content Management guide. To enter a password as a command line argument, use the username:password syntax. Keep in mind that this can save the password in the shell history. For more information about registration, see Registering a Host to Red Hat Satellite in Managing Hosts . Log in to the host you want to register and run the previously generated command. Update the subscription manager configuration for rhsm.baseurl and server.hostname : Check the /etc/yum.repos.d/redhat.repo file and ensure that the appropriate repositories have been enabled. 6.2. (Deprecated) Registering Clients Using the Bootstrap Script To register clients, enter the following command on the client. You must complete the registration procedure for each client. Prerequisite Ensure that you install the bootstrap script on the client and change the script's file permissions to executable. For more information, see Registering Hosts to Red Hat Satellite Using The Bootstrap Script in the Managing Hosts guide. On Red Hat Enterprise Linux 8, enter the following command: 1 Include the --puppet-ca-port 8141 option if you use Puppet. 2 Include the --force option to register the client that has been previously registered to a standalone Capsule. On Red Hat Enterprise Linux 7, 6, or 5, enter the following command: 1 Include the --puppet-ca-port 8141 option if you use Puppet. 2 Include the --force option to register the client that has been previously registered to a standalone Capsule.
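For example, on a Red Hat Enterprise Linux 7 client, a complete bootstrap invocation might look like the following sketch. The organization, location, host group, and activation key names below are placeholders for illustration only; substitute the names defined in your own Satellite environment:
# Illustrative values only; the options match the bootstrap.py command shown for RHEL 7, 6, or 5.
python bootstrap.py \
    --login=admin \
    --server loadbalancer.example.com \
    --organization="Example Organization" \
    --location="Default Location" \
    --hostgroup="RHEL7 Base" \
    --activationkey="rhel7-client-ak" \
    --enablerepos=rhel-7-server-satellite-client-6-rpms \
    --puppet-ca-port 8141 \
    --force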
The script prompts for the password corresponding to the Satellite user name you entered with the --login option. 6.3. (Deprecated) Registering Clients Manually with katello-ca-consumer rpm To register clients manually, complete the following procedure on each client that you register. Procedure Remove the katello-ca-consumer package if it is installed: Install the katello-ca-consumer package from the load balancer: Register the client and include the --serverurl and --baseurl options: | [
"subscription-manager config --rhsm.baseurl=https://loadbalancer.example.com/pulp/content --server.hostname=loadbalancer.example.com",
"hammer host-registration generate-command --activation-keys \" My_Activation_Key \"",
"hammer host-registration generate-command --activation-keys \" My_Activation_Key \" --insecure true",
"subscription-manager config --rhsm.baseurl=https://loadbalancer.example.com/pulp/content --server.hostname=loadbalancer.example.com",
"curl -X POST https://satellite.example.com/api/registration_commands --user \" My_User_Name \" -H 'Content-Type: application/json' -d '{ \"registration_command\": { \"activation_keys\": [\" My_Activation_Key_1 , My_Activation_Key_2 \"] }}'",
"curl -X POST https://satellite.example.com/api/registration_commands --user \" My_User_Name \" -H 'Content-Type: application/json' -d '{ \"registration_command\": { \"activation_keys\": [\" My_Activation_Key_1 , My_Activation_Key_2 \"], \"insecure\": true }}'",
"subscription-manager config --rhsm.baseurl=https://loadbalancer.example.com/pulp/content --server.hostname=loadbalancer.example.com",
"/usr/libexec/platform-python bootstrap.py --login= admin --server loadbalancer.example.com --organization=\" My_Organization \" --location=\" My_Location \" --hostgroup=\" My_Hostgroup \" --activationkey=\" My_Activation_Key \" --enablerepos=satellite-client-6-for-rhel-8-<arch>-rpms --puppet-ca-port 8141 \\ 1 --force 2",
"python bootstrap.py --login= admin --server loadbalancer.example.com --organization=\" My_Organization \" --location=\" My_Location \" --hostgroup=\" My_Hostgroup \" --activationkey=\" My_Activation_Key \" --enablerepos=rhel-7-server-satellite-client-6-rpms --puppet-ca-port 8141 \\ 1 --force 2",
"yum remove 'katello-ca-consumer*'",
"rpm -Uvh http:// loadbalancer.example.com /pub/katello-ca-consumer-latest.noarch.rpm",
"subscription-manager register --activationkey= My_Activation_Key --baseurl=https:// loadbalancer.example.com /pulp/content/ --org= Your_Organization --serverurl=https:// loadbalancer.example.com /rhsm"
] | https://docs.redhat.com/en/documentation/red_hat_satellite/6.11/html/configuring_capsules_with_a_load_balancer/registering-clients |
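One way to carry out the repository check mentioned in the registration procedures above is to inspect the repository definitions and their enabled flags directly. This is only one possible approach, shown here as a sketch:
# Show each repository section header and its enabled flag.
grep -E '^\[|^enabled' /etc/yum.repos.d/redhat.repo
Each repository section that the client is expected to consume should report enabled = 1.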
4.243. python | 4.243. python 4.243.1. RHBA-2011:1564 - python bug fix and enhancement update Updated python packages that fix several bugs and add an enhancement are now available for Red Hat Enterprise Linux 6. Python is an interpreted, interactive, object-oriented programming language. Bug Fixes BZ# 697470 The Python standard library contains numerous APIs that handle the uid_t and gid_t attributes, which contain unsigned 32-bit values. Previously, the existing code often passed the values as C language long values, which are signed 32-bit values on 32-bit architectures. Consequently, negative integer objects occurred when a uid_t/gid_t value was equal to or larger than 2^31 on 32-bit architectures. With this update, the standard library has been updated throughout to accept the full range of uid_t/gid_t values (0 through 2^32-1), using "int" objects for small values, but using "long" objects where needed to avoid integer overflow. As a special case, "-1" is also supported, as this value has special meaning for the os.chown() function and other related functions. BZ# 713082 Previously, the multiprocessing module used the "select" system call to communicate with subprocesses, limiting the number of file descriptors to the value of the FD_SETSIZE variable (1024). With this update, the multiprocessing module has been ported to use the "poll" system call, instead of "select", thus fixing this bug. BZ# 685234 Previously, a race condition sometimes caused the forking.Popen.poll() method of the multiprocessing module to terminate with the "OSError: [Errno 10] No child processes" error message when starting subprocesses. This bug has been fixed and the crashes no longer occur in the described scenario. BZ# 689794 Previously, the getpass.getpass() method discarded Ctrl-C and Ctrl-Z input, requiring the user to press Ctrl-D to exit the password entry prompt and then returning traceback error messages. With this update, the described user input is processed properly by the getpass.getpass() method. BZ# 699740 Due to a bug, the readline.get_history_length() and readline.get_history_item() methods leaked memory when executed. This bug has been fixed and the memory leak no longer occurs. BZ# 727364 When building the C extension modules, if a value for the CFLAGS variable is defined in the environment, it is appended to the compilation flags from Python's Makefile. Due to a bug, only flags stored in the OPT variable were supplied from the Makefile. Consequently, the "-fno-strict-aliasing" flag was missing and build errors occurred. This bug has been fixed, CFLAGS are properly appended to the original Python build flags, and no build errors are now returned in the described scenario. BZ# 667431 When feeding data to the standard input of short-lived processes, the subprocess.Popen.communicate() method sometimes terminated with the "OSError: [Errno 32] Broken pipe" error message. This bug has been fixed and the crashes no longer occur in the described scenario. Enhancement BZ# 711818 The gdb (GNU Debugger) Python hooks for debugging Python itself (via the python-debuginfo package) have been enhanced. The hooks now report if a thread is waiting on a lock, such as the GIL (Global Interpreter Lock), and report calls to the appropriate C functions, methods, and garbage collections. In addition, the hooks have been optimized to provide at least file and function names when line numbers and locals are not available. All users of python are advised to upgrade to these updated packages, which fix these bugs and add this enhancement.
4.243.2. RHSA-2012:0744 - Moderate: python security update Updated python packages that fix multiple security issues are now available for Red Hat Enterprise Linux 6. The Red Hat Security Response Team has rated this update as having moderate security impact. Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) associated with each description below. Python is an interpreted, interactive, object-oriented programming language. Security Fixes CVE-2012-1150 A denial of service flaw was found in the implementation of associative arrays (dictionaries) in Python. An attacker able to supply a large number of inputs to a Python application (such as HTTP POST request parameters sent to a web application) that are used as keys when inserting data into an array could trigger multiple hash function collisions, making array operations take an excessive amount of CPU time. To mitigate this issue, randomization has been added to the hash function to reduce the chance of an attacker successfully causing intentional collisions. () Note The hash randomization is not enabled by default as it may break applications that incorrectly depend on dictionary ordering. To enable the protection, the new "PYTHONHASHSEED" environment variable or the Python interpreter's "-R" command line option can be used. Refer to the python(1) manual page for details. The RHSA-2012:0731 expat erratum must be installed with this update, which adds hash randomization to the Expat library used by the Python pyexpat module. CVE-2012-0845 A flaw was found in the way the Python SimpleXMLRPCServer module handled clients disconnecting prematurely. A remote attacker could use this flaw to cause excessive CPU consumption on a server using SimpleXMLRPCServer. CVE-2011-4940 A flaw was found in the way the Python SimpleHTTPServer module generated directory listings. An attacker able to upload a file with a specially-crafted name to a server could possibly perform a cross-site scripting (XSS) attack against victims visiting a listing page generated by SimpleHTTPServer, for a directory containing the crafted file (if the victims were using certain web browsers). CVE-2011-4944 A race condition was found in the way the Python distutils module set file permissions during the creation of the .pypirc file. If a local user had access to the home directory of another user who is running distutils, they could use this flaw to gain access to that user's .pypirc file, which can contain usernames and passwords for code repositories. Red Hat would like to thank oCERT for reporting CVE-2012-1150. oCERT acknowledges Julian Walde and Alexander Klink as the original reporters of CVE-2012-1150. All Python users should upgrade to these updated packages, which contain backported patches to correct these issues. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/python |
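As a brief illustration of how the CVE-2012-1150 mitigation described above can be switched on (the script name myapp.py is a placeholder, and the exact values that the PYTHONHASHSEED variable accepts are documented in the python(1) manual page):
# Enable hash randomization for a single run with the backported -R option:
python -R myapp.py
# Or enable it through the environment variable:
PYTHONHASHSEED=random python myapp.py
Because the randomization is off by default, applications that depend on a stable dictionary ordering keep their current behavior until one of these switches is used.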
Chapter 1. Configuring your Red Hat build of Quarkus applications by using a properties file | Chapter 1. Configuring your Red Hat build of Quarkus applications by using a properties file As an application developer, you can use Red Hat build of Quarkus to create microservices-based applications written in Java that run on OpenShift and serverless environments. Applications compiled to native executables have small memory footprints and fast startup times. You can configure your Quarkus application by using either of the following methods: Setting properties in the application.properties file Applying structured configuration in YAML format by updating the application.yaml file You can also extend and customize the configuration for your application by doing the following: Substituting and composing configuration property values by using property expressions Implementing MicroProfile-compliant classes with custom configuration source converters that read configuration values from different external sources Using configuration profiles to maintain separate sets of configuration values for your development, test, and production environments The procedures include configuration examples that are created by using the Quarkus config-quickstart exercise. Prerequisites You have installed OpenJDK 11 or 17 and set the JAVA_HOME environment variable to specify the location of the Java SDK. To download the Red Hat build of OpenJDK, log in to the Red Hat Customer Portal and go to Software Downloads . You have installed Apache Maven 3.8.6 or later. Download Maven from the Apache Maven Project website. You have configured Maven to use artifacts from the Quarkus Maven repository . To learn how to configure Maven settings, see Getting started with Quarkus . 1.1. Red Hat configuration options Configuration options enable you to change the settings of your application in a single configuration file. Red Hat build of Quarkus supports configuration profiles that you can use to group related properties and switch between profiles as required. By default, Quarkus reads properties from the application.properties file located in the src/main/resources directory. You can also configure Quarkus to read properties from a YAML file instead. When you add the quarkus-config-yaml dependency to your project pom.xml file, you can configure and manage your application properties in the application.yaml file. For more information, see Adding YAML configuration support . Red Hat build of Quarkus also supports MicroProfile Config, which enables you to load the configuration of your application from other sources. You can use the MicroProfile Config specification from the Eclipse MicroProfile project to inject configuration properties into your application and access them using a method defined in your code. Quarkus can also read application properties from different origins, including the following sources: The file system A database A Kubernetes or OpenShift Container Platform ConfigMap or Secret object Any source that can be loaded by a Java application 1.2. Creating the configuration quickstart project With the config-quickstart project, you can get up and running with a simple Quarkus application by using Apache Maven and the Quarkus Maven plugin. The following procedure describes how you can create a Quarkus Maven project. Prerequisites You have installed OpenJDK 11 or 17 and set the JAVA_HOME environment variable to specify the location of the Java SDK. 
To download Red Hat build of OpenJDK, log in to the Red Hat Customer Portal and go to Software Downloads . You have installed Apache Maven 3.8.6 or later. Download Maven from the Apache Maven Project website. Procedure Verify that Maven is using OpenJDK 11 or 17 and that the Maven version is 3.8.6 or later: mvn --version If the mvn command does not return OpenJDK 11 or 17, ensure that the directory where OpenJDK 11 or 17 is installed on your system is included in the PATH environment variable: export PATH=USDPATH:<path_to_JDK> Enter the following command to generate the project: mvn com.redhat.quarkus.platform:quarkus-maven-plugin:3.2.12.SP1-redhat-00003:create \ -DprojectGroupId=org.acme \ -DprojectArtifactId=config-quickstart \ -DplatformGroupId=com.redhat.quarkus.platform \ -DplatformVersion=3.2.12.SP1-redhat-00003 \ -DclassName="org.acme.config.GreetingResource" \ -Dpath="/greeting" cd config-quickstart Verification The preceding mvn command creates the following items in the config-quickstart directory: The Maven project directory structure An org.acme.config.GreetingResource resource A landing page that you can access at http://localhost:8080 after you start the application Associated unit tests for testing your application in native mode and JVM mode Example Dockerfile.jvm and Dockerfile.native files in the src/main/docker subdirectory The application configuration file Note Alternatively, you can download a Quarkus Maven project to use in this tutorial from the Quarkus quickstart archive or clone the Quarkus Quickstarts Git repository. The Quarkus config-quickstart exercise is located in the config-quickstart directory. 1.3. Injecting configuration values into your Red Hat build of Quarkus application Red Hat build of Quarkus uses the MicroProfile Config feature to inject configuration data into the application. You can access the configuration by using context and dependency injection (CDI) or by defining a method in your code. Use the @ConfigProperty annotation to map an object property to a key in the MicroProfile Config Sources file of your application. The following procedure and examples show how you can inject an individual property configuration into a Quarkus config-quickstart project by using the Red Hat build of Quarkus Application configuration file, application.properties . Note You can use a MicroProfile Config configuration file , src/main/resources/META-INF/microprofile-config.properties , in exactly the same way you use the application.properties file. Using the application.properties file is the preferred method. Prerequisites You have created the Quarkus config-quickstart project. Procedure Open the src/main/resources/application.properties file. Add configuration properties to your configuration file where <property_name> is the property name and <value> is the value of the property: <property_name>=<value> The following example shows how to set the values for the greeting.message and the greeting.name properties in the Quarkus config-quickstart project: Example application.properties file greeting.message = hello greeting.name = quarkus Important When you are configuring your applications, do not prefix application-specific properties with the string quarkus . The quarkus prefix is reserved for configuring Quarkus at the framework level. Using quarkus as a prefix for application-specific properties might lead to unexpected results when your application runs. Review the GreetingResource.java Java file in your project. 
The file contains the GreetingResource class with the hello() method that returns a message when you send an HTTP request on the /greeting endpoint: Example GreetingResource.java file import java.util.Optional; import jakarta.ws.rs.GET; import jakarta.ws.rs.Path; import jakarta.ws.rs.Produces; import jakarta.ws.rs.core.MediaType; import org.eclipse.microprofile.config.inject.ConfigProperty; @Path("/greeting") public class GreetingResource { String message; Optional<String> name; String suffix; @GET @Produces(MediaType.TEXT_PLAIN) public String hello() { return message + " " + name.orElse("world") + suffix; } } In the example provided, the values of the message and name strings in the hello() method are not initialized. The application starts successfully in this state, but it throws a NullPointerException when the endpoint is called. Define the message , name , and suffix fields, and annotate them with @ConfigProperty , matching the values that you defined for the greeting.message and greeting.name properties. Use the @ConfigProperty annotation to inject the configuration value for each string. For example: Example GreetingResource.java file import java.util.Optional; import jakarta.ws.rs.GET; import jakarta.ws.rs.Path; import jakarta.ws.rs.Produces; import jakarta.ws.rs.core.MediaType; import org.eclipse.microprofile.config.inject.ConfigProperty; @Path("/greeting") public class GreetingResource { @ConfigProperty(name = "greeting.message") 1 String message; @ConfigProperty(name = "greeting.suffix", defaultValue="!") 2 String suffix; @ConfigProperty(name = "greeting.name") Optional<String> name; 3 @GET @Produces(MediaType.TEXT_PLAIN) public String hello() { return message + " " + name.orElse("world") + suffix; } } 1 If you do not configure a value for the greeting.message string, the application fails and throws the following exception: jakarta.enterprise.inject.spi.DeploymentException: io.quarkus.runtime.configuration.ConfigurationException: Failed to load config value of type class java.lang.String for: greeting.message 2 If you do not configure a value for the greeting.suffix , Quarkus resolves it to the default value. 3 If you do not define the greeting.name property, the value of name is not available. Your application still runs even when this value is not available because you set the Optional parameter on name . Note To inject a configured value, you can use @ConfigProperty . You do not need to include the @Inject annotation for members that you annotate with @ConfigProperty . Compile and start your application in development mode: ./mvnw quarkus:dev Enter the following command in a new terminal window to verify that the endpoint returns the message: curl http://localhost:8080/greeting This command returns the following output: hello quarkus! To stop the application, press Ctrl+C. 1.4. Updating the functional test to validate configuration changes Before you test the functionality of your application, you must update the functional test to reflect the changes that you made to the endpoint of your application. The following procedure shows how you can update your testHelloEndpoint method on the Quarkus config-quickstart project. Procedure Open the GreetingResourceTest.java file.
Update the content of the testHelloEndpoint method: package org.acme.config; import io.quarkus.test.junit.QuarkusTest; import org.junit.jupiter.api.Test; import static io.restassured.RestAssured.given; import static org.hamcrest.CoreMatchers.is; @QuarkusTest public class GreetingResourceTest { @Test public void testHelloEndpoint() { given() .when().get("/greeting") .then() .statusCode(200) .body(is("hello quarkus!")); // Modified line } } 1.5. Setting configuration properties By default, Quarkus reads properties from the application.properties file that is in the src/main/resources directory. If you change build properties, ensure that you repackage your application. Quarkus configures most properties during build time. Extensions can define properties as overridable at runtime, for example, the database URL, a user name, and a password, which can be specific to your target environment. Prerequisites You have a Quarkus Maven project. Procedure To package your Quarkus project, enter the following command: ./mvnw clean package Use one of the following methods to set the configuration properties: Setting system properties: Enter the following command, where <property_name> is the name of the configuration property that you want to add and <value> is the value of the property: java -D<property_name>=<value> -jar target/myapp-runner.jar For example, to set the value of the quarkus.datasource.password property, enter the following command: java -Dquarkus.datasource.password=youshallnotpass -jar target/myapp-runner.jar Setting environment variables: Enter the following command, where <property_name> is the name of the configuration property that you want to set and <value> is the value of the property: export <property_name>=<value> ; java -jar target/myapp-runner.jar Note Environment variable names follow the conversion rules of Eclipse MicroProfile . Convert the name to upper case and replace any character that is not alphanumeric with an underscore ( _ ). Using an environment file: Create a .env file in your current working directory and add configuration properties, where <PROPERTY_NAME> is the property name and <value> is the value of the property: <PROPERTY_NAME>=<value> Note In development mode, this file is in the root directory of your project. Do not track the file in version control. If you create a .env file in the root directory of your project, you can define keys and values that the program reads as properties. Using the application.properties file: Place the configuration file in the USDPWD/config/application.properties directory where the application runs so that any runtime properties that are defined in that file override the default configuration. Note You can also use the config/application.properties features in development mode. Place the config/application.properties file inside the target directory. Any cleaning operation from the build tool, for example, mvn clean , also removes the config directory. 1.6. Advanced Configuration Mapping The following advanced mapping procedures are extensions that are specific to Red Hat build of Quarkus and are outside of the MicroProfile Config specification. 1.6.1. Annotating an interface with @ConfigMapping Instead of individually injecting multiple related configuration values, use the @io.smallrye.config.ConfigMapping annotation to group configuration properties. The following procedure describes how you can use the @ConfigMapping annotation on the Quarkus config-quickstart project. 
Prerequisites You have created the Quarkus config-quickstart project. You have defined the greeting.message and greeting.name properties in the application.properties file of your project. Procedure Review the GreetingResource.java file in your project and ensure that it contains the contents that are shown in the following example. To use the @ConfigProperty annotation to inject configuration properties from another configuration source into this class, you must import the java.util.Optional and org.eclipse.microprofile.config.inject.ConfigProperty packages. Example GreetingResource.java file import java.util.Optional; import jakarta.ws.rs.GET; import jakarta.ws.rs.Path; import jakarta.ws.rs.Produces; import jakarta.ws.rs.core.MediaType; import org.eclipse.microprofile.config.inject.ConfigProperty; @Path("/greeting") public class GreetingResource { @ConfigProperty(name = "greeting.message") String message; @ConfigProperty(name = "greeting.suffix", defaultValue="!") String suffix; @ConfigProperty(name = "greeting.name") Optional<String> name; @GET @Produces(MediaType.TEXT_PLAIN) public String hello() { return message + " " + name.orElse("world") + suffix; } } Create a GreetingConfiguration.java file in the src/main/java/org/acme/config directory. Add the import statements for ConfigMapping and Optional to the file: Example GreetingConfiguration.java file import io.smallrye.config.ConfigMapping; import io.smallrye.config.WithDefault; import java.util.Optional; @ConfigMapping(prefix = "greeting") 1 public interface GreetingConfiguration { String message(); @WithDefault("!") 2 String suffix(); Optional<String> name(); } 1 The prefix property is optional. For example, in this scenario, the prefix is greeting . 2 If greeting.suffix is not set, ! is used as the default value. Inject the GreetingConfiguration instance into the GreetingResource class by using the @Inject annotation, as follows: Note This snippet replaces the three fields that are annotated with @ConfigProperty that are in the initial version of the config-quickstart project. Example GreetingResource.java file @Path("/greeting") public class GreetingResource { @Inject GreetingConfiguration config; @GET @Produces(MediaType.TEXT_PLAIN) public String hello() { return config.message() + " " + config.name().orElse("world") + config.suffix(); } } Compile and start your application in development mode: ./mvnw quarkus:dev Important If you do not provide values for the class properties, the application fails to compile, and an io.smallrye.config.ConfigValidationException error is returned to indicate that a value is missing. This does not apply to optional fields or fields with a default value. To verify that the endpoint returns the message, enter the following command in a new terminal window: curl http://localhost:8080/greeting You receive the following message: hello quarkus! To stop the application, press Ctrl+C. 1.6.2. Using nested object configuration You can define an interface that is nested inside another interface. This procedure shows how to create and configure a nested interface in the Quarkus config-quickstart project. Prerequisites You have created the Quarkus config-quickstart project. You have defined the greeting.message and greeting.name properties in the application.properties file of your project. Procedure Review the GreetingResource.java file in your project.
The file contains the GreetingResource class with the hello() method that returns a message when you send an HTTP request on the /greeting endpoint: Example GreetingResource.java file import java.util.Optional; import jakarta.ws.rs.GET; import jakarta.ws.rs.Path; import jakarta.ws.rs.Produces; import jakarta.ws.rs.core.MediaType; import org.eclipse.microprofile.config.inject.ConfigProperty; @Path("/greeting") public class GreetingResource { @ConfigProperty(name = "greeting.message") String message; @ConfigProperty(name = "greeting.suffix", defaultValue="!") String suffix; @ConfigProperty(name = "greeting.name") Optional<String> name; @GET @Produces(MediaType.TEXT_PLAIN) public String hello() { return message + " " + name.orElse("world") + suffix; } } Create a GreetingConfiguration.java class file with the GreetingConfiguration instance. This class contains the externalized configuration for the hello() method that is defined in the GreetingResource class: Example GreetingConfiguration.java file import io.smallrye.config.ConfigMapping; import io.smallrye.config.WithDefault; import java.util.Optional; @ConfigMapping(prefix = "greeting") public interface GreetingConfiguration { String message(); @WithDefault("!") String suffix(); Optional<String> name(); } Create the ContentConfig class that is nested inside the GreetingConfiguration instance, as shown in the following example: Example GreetingConfiguration.java file import io.smallrye.config.ConfigMapping; import io.smallrye.config.WithDefault; import java.util.List; import java.util.Optional; @ConfigMapping(prefix = "greeting") public interface GreetingConfiguration { String message(); @WithDefault("!") String suffix(); Optional<String> name(); ContentConfig content(); interface ContentConfig { Integer prizeAmount(); List<String> recipients(); } } Note The method name of the ContentConfig class is content . To ensure that you bind the properties to the correct interface, when you define configuration properties for this class, use content in the prefix. In doing so, you can also prevent property name conflicts and unexpected application behavior. Define the greeting.content.prize-amount and greeting.content.recipients configuration properties in your application.properties file. The following example shows the properties defined for the GreetingConfiguration instance and the ContentConfig classes: Example application.properties file greeting.message = hello greeting.name = quarkus greeting.content.prize-amount=10 greeting.content.recipients=Jane,John Instead of the three @ConfigProperty field annotations, inject the GreetingConfiguration instance into the GreetingResource class by using the @Inject annotation, as outlined in the following example. Also, you must update the message string that the /greeting endpoint returns with the values that you set for the new greeting.content.prize-amount and greeting.content.recipients properties that you added. 
Example GreetingResource.java file import java.util.Optional; import jakarta.ws.rs.GET; import jakarta.ws.rs.Path; import jakarta.ws.rs.Produces; import jakarta.ws.rs.core.MediaType; import jakarta.inject.Inject; @Path("/greeting") public class GreetingResource { @Inject GreetingConfiguration config; @GET @Produces(MediaType.TEXT_PLAIN) public String hello() { return config.message() + " " + config.name().orElse("world") + config.suffix() + "\n" + config.content().recipients() + " receive total of candies: " + config.content().prizeAmount(); } } Compile and start your application in development mode: ./mvnw quarkus:dev Important If you do not provide values for the class properties, the application fails to compile and you receive a jakarta.enterprise.inject.spi.DeploymentException exception that indicates a missing value. This does not apply to Optional fields and fields with a default value. To verify that the endpoint returns the message, open a new terminal window and enter the following command: curl http://localhost:8080/greeting A message displays, containing two lines of output. The first line displays the greeting, and the second line reports the recipients of the prize together with the prize amount, as follows: hello quarkus! Jane,John receive total of candies: 10 To stop the application, press Ctrl+C. Note You can annotate classes that are annotated with @ConfigMapping with Bean Validation annotations similar to the following example: @ConfigMapping(prefix = "greeting") public class GreetingConfiguration { @Size(min = 20) public String message; public String suffix = "!"; } Your project must include the quarkus-hibernate-validator dependency. 1.7. Accessing the configuration programmatically You can define a method in your code to retrieve the values of the configuration properties in your application. In doing so, you can dynamically look up configuration property values or retrieve configuration property values from classes that are either CDI beans or Jakarta REST (formerly known as JAX-RS) resources. You can access the configuration by using the org.eclipse.microprofile.config.ConfigProvider.getConfig() method. The getValue() method of the config object returns the values of the configuration properties. Prerequisites You have a Quarkus Maven project. Procedure Use a method to access the value of a configuration property of any class or object in your application code. 
Depending on whether or not the value that you want to retrieve is set in a configuration source in your project, you can use one of the following methods: To access the value of a property that is set in a configuration source in your project, for example, in the application.properties file, use the getValue() method: String <variable-name> = ConfigProvider.getConfig().getValue(" <property-name> ", <data-type-class-name> .class); For example, to retrieve the value of the greeting.message property that has the data type String , and is assigned to the message variable in your code, use the following syntax: String message = config.getValue("greeting.message", String.class); When you want to retrieve a value that is optional or default and might not be defined in your application.properties file or another configuration source in your application, use the getOptionalValue() method: String <variable-name> = ConfigProvider.getConfig().getOptionalValue("<property-name>", <data-type-class-name>.class); For example, to retrieve the value of the greeting.name property that is optional, has the data type String , and is assigned to the name variable in your code, use the following syntax: Optional<String> name = config.getOptionalValue("greeting.name", String.class); The following snippet shows a variant of the aforementioned GreetingResource class by using the programmatic access to the configuration: src/main/java/org/acme/config/GreetingResource.java package org.acme.config; import java.util.Optional; import jakarta.ws.rs.GET; import jakarta.ws.rs.Path; import jakarta.ws.rs.Produces; import jakarta.ws.rs.core.MediaType; import org.eclipse.microprofile.config.Config; import org.eclipse.microprofile.config.ConfigProvider; import org.eclipse.microprofile.config.inject.ConfigProperty; @Path("/greeting") public class GreetingResource { @GET @Produces(MediaType.TEXT_PLAIN) public String hello() { Config config = ConfigProvider.getConfig(); String message = config.getValue("greeting.message", String.class); String suffix = config.getOptionalValue("greeting.suffix", String.class).orElse("!"); Optional<String> name = config.getOptionalValue("greeting.name", String.class); return message + " " + name.orElse("world") + suffix; } } 1.8. Property expressions Property expressions are combinations of property references and plain text strings that you can use to substitute values of properties in your configuration. Much like a variable, you can use a property expression in Quarkus to substitute a value of a configuration property instead of hardcoding it. A property expression is resolved when java.util.Properties reads the value of the property from a configuration source in your application. This means that if a configuration property is read from your configuration at compile time, the property expression is also resolved at compile time. If the configuration property is overridden at runtime, its value is resolved at runtime. You can resolve property expressions by using more than one configuration source. This means that you can use a value of a property that is defined in one configuration source to expand a property expression that you use in another configuration source. If the value of a property in an expression cannot be resolved, and you do not set a default value for the expression, your application encounters a NoSuchElementException . 1.8.1. 
Example usage of property expressions To achieve flexibility when you configure your Quarkus application, you can use property expressions as shown in the following examples. Substituting the value of a configuration property: To avoid hardcoding property values in your configuration, you can use a property expression. Use the USD{<property_name>} syntax to write an expression that references a configuration property, as shown in the following example: Example application.properties file remote.host=quarkus.io callable.url=https://USD{remote.host}/ The value of the callable.url property resolves to https://quarkus.io/ . Setting a property value that is specific to a particular configuration profile: In the following example, the %dev configuration profile and the default configuration profile are set to use data source connection URLs with different host addresses. Example application.properties file %dev.quarkus.datasource.jdbc.url=jdbc:mysql://localhost:3306/mydatabase?useSSL=false quarkus.datasource.jdbc.url=jdbc:mysql://remotehost:3306/mydatabase?useSSL=false Depending on the configuration profile used to start your application, your data source driver uses the database URL that you set for the profile. You can achieve the same result in a simplified way by setting a different value for the custom application.server property for each configuration profile. Then, you can reference the property in the database connection URL of your application, as shown in the following example: Example application.properties file %dev.application.server=localhost application.server=remotehost quarkus.datasource.jdbc.url=jdbc:mysql://USD{application.server}:3306/mydatabase?useSSL=false The application.server property resolves to the appropriate value depending on the profile that you choose when you run your application. Setting a default value of a property expression: You can define a default value for a property expression. Quarkus uses the default value if the value of the property that is required to expand the expression is not resolved from any of your configuration sources. You can set a default value for an expression by using the following syntax: In the following example, the property expression in the data source URL uses mysql.db.server as the default value of the application.server property: Example application.properties file quarkus.datasource.jdbc.url=jdbc:mysql://USD{application.server:mysql.db.server}:3306/mydatabase?useSSL=false Nesting property expressions: You can compose property expressions by nesting a property expression inside another property expression. When nested property expressions are expanded, the inner expression is expanded first. You can use the following syntax for nesting property expressions: Combining multiple property expressions: You can join two or more property expressions together by using the following syntax: Combining property expressions with environment variables: You can use property expressions to substitute the values of environment variables. The expression in the following example substitutes the value that is set for the HOST environment variable as the value of the application.host property: Example application.properties file remote.host=quarkus.io application.host=USD{HOST:USD{remote.host}} When the HOST environment variable is not set, the application.host property uses the value of the remote.host property as the default. 1.9. Using configuration profiles You can use different configuration profiles depending on your environment. 
Configuration profiles enable you to have multiple configurations in the same file and to select between them by using a profile name. Red Hat build of Quarkus has the following three default configuration profiles: dev : Activated in development mode test : Activated when running tests prod : The default profile when not running in development or test mode Note In addition, you can create your own custom profiles. Prerequisites You have a Quarkus Maven project. Procedure Open your Java resource file and add the following import statement: import io.quarkus.runtime.configuration.ProfileManager; To display the current configuration profile, add a log by invoking the ProfileManager.getActiveProfile() method: LOGGER.infof("The application is starting with profile `%s`", ProfileManager.getActiveProfile()); Note You cannot access the current profile by using the @ConfigProperty("quarkus.profile") method. 1.9.1. Setting a custom configuration profile You can create as many configuration profiles as you want. You can have multiple configurations in the same file and you can select a configuration by using a profile name. Procedure To set a custom profile, create a configuration property with the profile name in the application.properties file, where <property_name> is the name of the property, <value> is the property value, and <profile> is the name of a profile: Create a configuration property %<profile>.<property_name>=<value> In the following example configuration, the value of quarkus.http.port is 9090 by default, and becomes 8181 when the dev profile is activated: Example configuration quarkus.http.port=9090 %dev.quarkus.http.port=8181 Use one of the following methods to enable a profile: Set the quarkus.profile system property. To enable a profile using the quarkus.profile system property, enter the following command: Enable a profile using quarkus.profile property mvn -Dquarkus.profile=<value> quarkus:dev Set the QUARKUS_PROFILE environment variable. To enable profile using an environment variable, enter the following command: Enable a profile using an environment variable export QUARKUS_PROFILE=<profile> Note The system property value takes precedence over the environment variable value. To repackage the application and change the profile, enter the following command: Change a profile ./mvnw package -Dquarkus.profile=<profile> java -jar target/myapp-runner.jar The following example shows a command that activates the prod-aws profile: Example command to activate a profile ./mvnw package -Dquarkus.profile=prod-aws java -jar target/myapp-runner.jar Note The default Quarkus application runtime profile is set to the profile that is used to build the application. Red Hat build of Quarkus automatically selects a profile depending on your environment mode. For example, when your application is running as a JAR, Quarkus is in prod mode. 1.10. Setting custom configuration sources By default, a Quarkus application reads properties from the application.properties file in the src/main/resources subdirectory of your project. Quarkus also allows you to load application configuration properties from other sources according to the MicroProfile Config specification for externalized configuration. You can enable your application to load configuration properties from other sources by defining classes that implement the org.eclipse.microprofile.config.spi.ConfigSource and the org.eclipse.microprofile.config.spi.ConfigSourceProvider interfaces. 
This procedure demonstrates how you can implement a custom configuration source in your Quarkus project. Prerequisite You have the Quarkus config-quickstart project. Procedure Create a class file in your project that implements the org.eclipse.microprofile.config.spi.ConfigSourceProvider interface. To return a list of ConfigSource objects, you must override the getConfigSources() method. Example org.acme.config.InMemoryConfigSourceProvider package org.acme.config; import org.eclipse.microprofile.config.spi.ConfigSource; import org.eclipse.microprofile.config.spi.ConfigSourceProvider; import java.util.List; public class InMemoryConfigSourceProvider implements ConfigSourceProvider { @Override public Iterable<ConfigSource> getConfigSources(ClassLoader classLoader) { return List.of(new InMemoryConfigSource()); } } Create the InMemoryConfigSource class that implements the org.eclipse.microprofile.config.spi.ConfigSource interface: Example org.acme.config.InMemoryConfigSource package org.acme.config; import org.eclipse.microprofile.config.spi.ConfigSource; import java.util.HashMap; import java.util.Map; import java.util.Set; public class InMemoryConfigSource implements ConfigSource { private static final Map<String, String> configuration = new HashMap<>(); static { configuration.put("my.prop", "1234"); } @Override public int getOrdinal() { 1 return 275; } @Override public Set<String> getPropertyNames() { return configuration.keySet(); } @Override public String getValue(final String propertyName) { return configuration.get(propertyName); } @Override public String getName() { return InMemoryConfigSource.class.getSimpleName(); } } 1 The getOrdinal() method returns the priority of the ConfigSource class. Therefore, when multiple configuration sources define the same property, Quarkus can select the appropriate value as defined by the ConfigSource class with the highest priority. In the src/main/resources/META-INF/services/ subdirectory of your project, create a file named org.eclipse.microprofile.config.spi.ConfigSourceProvider and enter the fully-qualified name of the class that implements the ConfigSourceProvider in the file that you created: Example org.eclipse.microprofile.config.spi.ConfigSourceProvider file: org.acme.config.InMemoryConfigSourceProvider To ensure that the ConfigSourceProvider that you created is registered and installed when you compile and start your application, you must complete the step. Edit the GreetingResource.java file in your project to add the following update: @ConfigProperty(name="my.prop") int value; In the GreetingResource.java file, extend the hello method to use the new property: @GET @Produces(MediaType.TEXT_PLAIN) public String hello() { return message + " " + name.orElse("world") + " " + value; } To compile and start your application in development mode, enter the following command: ./mvnw quarkus:dev To verify that the /greeting endpoint returns the expected message, open a terminal window and enter the following command: Example request curl http://localhost:8080/greeting When your application successfully reads the custom configuration, the command returns the following response: hello world 1234 1.11. Using custom configuration converters as configuration values You can store custom types as configuration values by implementing org.eclipse.microprofile.config.spi.Converter<T> and adding its fully qualified class name into the META-INF/services/org.eclipse.microprofile.config.spi.Converter file. 
By using converters, you can transform the string representation of a value into an object. Prerequisites You have created the Quarkus config-quickstart project. Procedure In the org.acme.config package , create the org.acme.config.MyCustomValue class with the following content: Example of custom configuration value package org.acme.config; public class MyCustomValue { private final int value; public MyCustomValue(Integer value) { this.value = value; } public int value() { return value; } } Implement the converter class to override the convert method to produce a MyCustomValue instance. Example implementation of converter class package org.acme.config; import org.eclipse.microprofile.config.spi.Converter; public class MyCustomValueConverter implements Converter<MyCustomValue> { @Override public MyCustomValue convert(String value) { return new MyCustomValue(Integer.valueOf(value)); } } Include the fully-qualified class name of the converter in your META-INF/services/org.eclipse.microprofile.config.spi.Converter service file, as shown in the following example: Example org.eclipse.microprofile.config.spi.Converter file org.acme.config.MyCustomValueConverter org.acme.config.SomeOtherConverter org.acme.config.YetAnotherConverter In the GreetingResource.java file, inject the MyCustomValue property: @ConfigProperty(name="custom") MyCustomValue value; Edit the hello method to use this value: @GET @Produces(MediaType.TEXT_PLAIN) public String hello() { return message + " " + name.orElse("world") + " - " + value.value(); } In the application.properties file, add the string representation to be converted: custom=1234 To compile and start your application in development mode, enter the following command: ./mvnw quarkus:dev To verify that the /greeting endpoint returns the expected message, open a terminal window and enter the following command: Example request curl http://localhost:8080/greeting When your application successfully reads the custom configuration, the command returns the following response: hello world - 1234 Note Your custom converter class must be public and must have a public no-argument constructor. Your custom converter class cannot be abstract . Additional resources: List of converters in the microprofile-config GitHub repository 1.11.1. Setting custom converters priority The default priority for all Quarkus core converters is 200. For all other converters, the default priority is 100. You can increase the priority of your custom converters by using the jakarta.annotation.Priority annotation. The following procedure demonstrates an implementation of a custom converter, AnotherCustomValueConverter , which has a priority of 150. This takes precedence over MyCustomValueConverter from the section, which has a default priority of 100. Prerequisites You have created the Quarkus config-quickstart project. You have created a custom configuration converter for your application. Procedure Set a priority for your custom converter by annotating the class with the @Priority annotation and passing it a priority value. In the following example, the priority value is set to 150 . 
Example AnotherCustomValueConverter.java file package org.acme.config; import jakarta.annotation.Priority; import org.eclipse.microprofile.config.spi.Converter; @Priority(150) public class AnotherCustomValueConverter implements Converter<MyCustomValue> { @Override public MyCustomValue convert(String value) { return new MyCustomValue(Integer.valueOf(value)); } } Create a file named org.eclipse.microprofile.config.spi.Converter in the src/main/resources/META-INF/services/ subdirectory of your project, and enter the fully qualified name of the class that implements the Converter in the file that you created: Example org.eclipse.microprofile.config.spi.Converter file org.acme.config.AnotherCustomValueConverter You must complete the step to ensure that the Converter you created is registered and installed when you compile and start your application. Verification After you complete the required configuration, the step is to compile and package your Quarkus application. For more information and examples, see the compiling and packaging sections of the Getting started with Quarkus guide. 1.12. Additional resources Developing and compiling your Quarkus applications with Apache Maven Deploying your Quarkus applications to OpenShift Container Platform Compiling your Quarkus applications to native executables Revised on 2024-10-10 17:19:03 UTC | [
"mvn --version",
"export PATH=USDPATH:<path_to_JDK>",
"mvn com.redhat.quarkus.platform:quarkus-maven-plugin:3.2.12.SP1-redhat-00003:create -DprojectGroupId=org.acme -DprojectArtifactId=config-quickstart -DplatformGroupId=com.redhat.quarkus.platform -DplatformVersion=3.2.12.SP1-redhat-00003 -DclassName=\"org.acme.config.GreetingResource\" -Dpath=\"/greeting\" cd config-quickstart",
"<property_name>=<value>",
"greeting.message = hello greeting.name = quarkus",
"import java.util.Optional; import jakarta.ws.rs.GET; import jakarta.ws.rs.Path; import jakarta.ws.rs.Produces; import jakarta.ws.rs.core.MediaType; import org.eclipse.microprofile.config.inject.ConfigProperty; @Path(\"/greeting\") public class GreetingResource { String message; Optional<String> name; String suffix; @GET @Produces(MediaType.TEXT_PLAIN) public String hello() { return message + \" \" + name.orElse(\"world\") + suffix; } }",
"import java.util.Optional; import jakarta.ws.rs.GET; import jakarta.ws.rs.Path; import jakarta.ws.rs.Produces; import jakarta.ws.rs.core.MediaType; import org.eclipse.microprofile.config.inject.ConfigProperty; @Path(\"/greeting\") public class GreetingResource { @ConfigProperty(name = \"greeting.message\") 1 String message; @ConfigProperty(name = \"greeting.suffix\", defaultValue=\"!\") 2 String suffix; @ConfigProperty(name = \"greeting.name\") Optional<String> name; 3 @GET @Produces(MediaType.TEXT_PLAIN) public String hello() { return message + \" \" + name.orElse(\"world\") + suffix; } }",
"./mvnw quarkus:dev",
"curl http://localhost:8080/greeting",
"hello quarkus!",
"package org.acme.config; import io.quarkus.test.junit.QuarkusTest; import org.junit.jupiter.api.Test; import static io.restassured.RestAssured.given; import static org.hamcrest.CoreMatchers.is; @QuarkusTest public class GreetingResourceTest { @Test public void testHelloEndpoint() { given() .when().get(\"/greeting\") .then() .statusCode(200) .body(is(\"hello quarkus!\")); // Modified line } }",
"./mvnw clean package",
"java -D<property_name>=<value> -jar target/myapp-runner.jar",
"java -Dquarkus.datasource.password=youshallnotpass -jar target/myapp-runner.jar",
"export <property_name>=<value> ; java -jar target/myapp-runner.jar",
"<PROPERTY_NAME>=<value>",
"import java.util.Optional; import jakarta.ws.rs.GET; import jakarta.ws.rs.Path; import jakarta.ws.rs.Produces; import jakarta.ws.rs.core.MediaType; import org.eclipse.microprofile.config.inject.ConfigProperty; @Path(\"/greeting\") public class GreetingResource { @ConfigProperty(name = \"greeting.message\") String message; @ConfigProperty(name = \"greeting.suffix\", defaultValue=\"!\") String suffix; @ConfigProperty(name = \"greeting.name\") Optional<String> name; @GET @Produces(MediaType.TEXT_PLAIN) public String hello() { return message + \" \" + name.orElse(\"world\") + suffix; } }",
"import io.smallrye.config.ConfigMapping; import io.smallrye.config.WithDefault; import java.util.Optional; @ConfigMapping(prefix = \"greeting\") 1 public interface GreetingConfiguration { String message(); @WithDefault(\"!\") 2 String suffix(); Optional<String> name(); }",
"@Path(\"/greeting\") public class GreetingResource { @Inject GreetingConfiguration config; @GET @Produces(MediaType.TEXT_PLAIN) public String hello() { return config.message() + \" \" + config.name().orElse(\"world\") + config.suffix(); } }",
"./mvnw quarkus:dev",
"curl http://localhost:8080/greeting",
"hello quarkus!",
"import java.util.Optional; import jakarta.ws.rs.GET; import jakarta.ws.rs.Path; import jakarta.ws.rs.Produces; import jakarta.ws.rs.core.MediaType; import org.eclipse.microprofile.config.inject.ConfigProperty; @Path(\"/greeting\") public class GreetingResource { @ConfigProperty(name = \"greeting.message\") String message; @ConfigProperty(name = \"greeting.suffix\", defaultValue=\"!\") String suffix; @ConfigProperty(name = \"greeting.name\") Optional<String> name; @GET @Produces(MediaType.TEXT_PLAIN) public String hello() { return message + \" \" + name.orElse(\"world\") + suffix; } }",
"import io.smallrye.config.ConfigMapping; import io.smallrye.config.WithDefault; import java.util.Optional; @ConfigMapping(prefix = \"greeting\") public interface GreetingConfiguration { String message(); @WithDefault(\"!\") String suffix(); Optional<String> name(); }",
"import io.smallrye.config.ConfigMapping; import io.smallrye.config.WithDefault; import java.util.List; import java.util.Optional; @ConfigMapping(prefix = \"greeting\") public interface GreetingConfiguration { String message(); @WithDefault(\"!\") String suffix(); Optional<String> name(); ContentConfig content(); interface ContentConfig { Integer prizeAmount(); List<String> recipients(); } }",
"greeting.message = hello greeting.name = quarkus greeting.content.prize-amount=10 greeting.content.recipients=Jane,John",
"import java.util.Optional; import jakarta.ws.rs.GET; import jakarta.ws.rs.Path; import jakarta.ws.rs.Produces; import jakarta.ws.rs.core.MediaType; import jakarta.inject.Inject; @Path(\"/greeting\") public class GreetingResource { @Inject GreetingConfiguration config; @GET @Produces(MediaType.TEXT_PLAIN) public String hello() { return config.message() + \" \" + config.name().orElse(\"world\") + config.suffix() + \"\\n\" + config.content().recipients() + \" receive total of candies: \" + config.content().prizeAmount(); } }",
"./mvnw quarkus:dev",
"curl http://localhost:8080/greeting",
"hello quarkus! Jane,John receive total of candies: 10",
"@ConfigMapping(prefix = \"greeting\") public class GreetingConfiguration { @Size(min = 20) public String message; public String suffix = \"!\"; }",
"String <variable-name> = ConfigProvider.getConfig().getValue(\" <property-name> \", <data-type-class-name> .class);",
"String message = config.getValue(\"greeting.message\", String.class);",
"String <variable-name> = ConfigProvider.getConfig().getOptionalValue(\"<property-name>\", <data-type-class-name>.class);",
"Optional<String> name = config.getOptionalValue(\"greeting.name\", String.class);",
"package org.acme.config; import java.util.Optional; import jakarta.ws.rs.GET; import jakarta.ws.rs.Path; import jakarta.ws.rs.Produces; import jakarta.ws.rs.core.MediaType; import org.eclipse.microprofile.config.Config; import org.eclipse.microprofile.config.ConfigProvider; import org.eclipse.microprofile.config.inject.ConfigProperty; @Path(\"/greeting\") public class GreetingResource { @GET @Produces(MediaType.TEXT_PLAIN) public String hello() { Config config = ConfigProvider.getConfig(); String message = config.getValue(\"greeting.message\", String.class); String suffix = config.getOptionalValue(\"greeting.suffix\", String.class).orElse(\"!\"); Optional<String> name = config.getOptionalValue(\"greeting.name\", String.class); return message + \" \" + name.orElse(\"world\") + suffix; } }",
"remote.host=quarkus.io callable.url=https://USD{remote.host}/",
"%dev.quarkus.datasource.jdbc.url=jdbc:mysql://localhost:3306/mydatabase?useSSL=false quarkus.datasource.jdbc.url=jdbc:mysql://remotehost:3306/mydatabase?useSSL=false",
"%dev.application.server=localhost application.server=remotehost quarkus.datasource.jdbc.url=jdbc:mysql://USD{application.server}:3306/mydatabase?useSSL=false",
"USD{<property_name>:<default_value>}",
"quarkus.datasource.jdbc.url=jdbc:mysql://USD{application.server:mysql.db.server}:3306/mydatabase?useSSL=false",
"USD{<outer_property_name>USD{<inner_property_name>}}",
"USD{<first_property_name>}USD{<second_property_name>}",
"remote.host=quarkus.io application.host=USD{HOST:USD{remote.host}}",
"import io.quarkus.runtime.configuration.ProfileManager;",
"LOGGER.infof(\"The application is starting with profile `%s`\", ProfileManager.getActiveProfile());",
"%<profile>.<property_name>=<value>",
"quarkus.http.port=9090 %dev.quarkus.http.port=8181",
"mvn -Dquarkus.profile=<value> quarkus:dev",
"export QUARKUS_PROFILE=<profile>",
"./mvnw package -Dquarkus.profile=<profile> java -jar target/myapp-runner.jar",
"./mvnw package -Dquarkus.profile=prod-aws java -jar target/myapp-runner.jar",
"package org.acme.config; import org.eclipse.microprofile.config.spi.ConfigSource; import org.eclipse.microprofile.config.spi.ConfigSourceProvider; import java.util.List; public class InMemoryConfigSourceProvider implements ConfigSourceProvider { @Override public Iterable<ConfigSource> getConfigSources(ClassLoader classLoader) { return List.of(new InMemoryConfigSource()); } }",
"package org.acme.config; import org.eclipse.microprofile.config.spi.ConfigSource; import java.util.HashMap; import java.util.Map; import java.util.Set; public class InMemoryConfigSource implements ConfigSource { private static final Map<String, String> configuration = new HashMap<>(); static { configuration.put(\"my.prop\", \"1234\"); } @Override public int getOrdinal() { 1 return 275; } @Override public Set<String> getPropertyNames() { return configuration.keySet(); } @Override public String getValue(final String propertyName) { return configuration.get(propertyName); } @Override public String getName() { return InMemoryConfigSource.class.getSimpleName(); } }",
"org.acme.config.InMemoryConfigSourceProvider",
"@ConfigProperty(name=\"my.prop\") int value;",
"@GET @Produces(MediaType.TEXT_PLAIN) public String hello() { return message + \" \" + name.orElse(\"world\") + \" \" + value; }",
"./mvnw quarkus:dev",
"curl http://localhost:8080/greeting",
"hello world 1234",
"package org.acme.config; public class MyCustomValue { private final int value; public MyCustomValue(Integer value) { this.value = value; } public int value() { return value; } }",
"package org.acme.config; import org.eclipse.microprofile.config.spi.Converter; public class MyCustomValueConverter implements Converter<MyCustomValue> { @Override public MyCustomValue convert(String value) { return new MyCustomValue(Integer.valueOf(value)); } }",
"org.acme.config.MyCustomValueConverter org.acme.config.SomeOtherConverter org.acme.config.YetAnotherConverter",
"@ConfigProperty(name=\"custom\") MyCustomValue value;",
"@GET @Produces(MediaType.TEXT_PLAIN) public String hello() { return message + \" \" + name.orElse(\"world\") + \" - \" + value.value(); }",
"custom=1234",
"./mvnw quarkus:dev",
"curl http://localhost:8080/greeting",
"hello world - 1234",
"package org.acme.config; import jakarta.annotation.Priority; import org.eclipse.microprofile.config.spi.Converter; @Priority(150) public class AnotherCustomValueConverter implements Converter<MyCustomValue> { @Override public MyCustomValue convert(String value) { return new MyCustomValue(Integer.valueOf(value)); } }",
"org.acme.config.AnotherCustomValueConverter"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_quarkus/3.2/html/configuring_your_red_hat_build_of_quarkus_applications_by_using_a_yaml_file/assembly_quarkus-configuration-guide_quarkus-configuration-guide |
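The registration steps above rely on standard Quarkus configuration properties. As an illustration only (not taken from the original guide), the following sketch shows how the property values used throughout this chapter could be written in an application.yaml file. It assumes the quarkus-config-yaml extension is on the classpath, and the custom entry is the value parsed by the MyCustomValue converters, with AnotherCustomValueConverter expected to take precedence over the default-priority converter because of its @Priority(150) annotation.

```yaml
# application.yaml - illustrative sketch; assumes the quarkus-config-yaml extension is present
greeting:
  message: hello
  name: quarkus
# Parsed into a MyCustomValue instance by the highest-priority registered converter
custom: 1234
```

With this file in src/main/resources, the curl http://localhost:8080/greeting check shown in the command listing should produce the same output as the application.properties version.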
Chapter 17. Creating caches with Data Grid Operator | Chapter 17. Creating caches with Data Grid Operator Use Cache CRs to add cache configuration with Data Grid Operator and control how Data Grid stores your data. 17.1. Data Grid caches Cache configuration defines the characteristics and features of the data store and must be valid with the Data Grid schema. Data Grid recommends creating standalone files in XML or JSON format that define your cache configuration. You should separate Data Grid configuration from application code for easier validation and to avoid the situation where you need to maintain XML snippets in Java or some other client language. To create caches with Data Grid clusters running on OpenShift, you should: Use Cache CR as the mechanism for creating caches through the OpenShift front end. Use Batch CR to create multiple caches at a time from standalone configuration files. Access Data Grid Console and create caches in XML or JSON format. You can use Hot Rod or HTTP clients but Data Grid recommends Cache CR or Batch CR unless your specific use case requires programmatic remote cache creation. Cache CRs Cache CRs apply to Data Grid service pods only. Each Cache CR corresponds to a single cache on the Data Grid cluster. 17.2. Creating caches with the Cache CR Complete the following steps to create caches on Data Grid service clusters using valid configuration in XML or YAML format. Procedure Create a Cache CR with a unique value in the metadata.name field. Specify the target Data Grid cluster with the spec.clusterName field. Name your cache with the spec.name field. Note The name attribute in the cache configuration does not take effect. If you do not specify a name with the spec.name field then the cache uses the value of the metadata.name field. Add a cache configuration with the spec.template field. Apply the Cache CR, for example: Cache CR examples XML YAML 17.3. Updating caches with the Cache CR You can control how Data Grid Operator handles modifications to the cache configuration in the Cache CR. Data Grid Operator attempts to update the cache configuration on the Data Grid Server at runtime. If the update fails, Data Grid Operator uses one of the following strategies: retain strategy The Operator updates the status of the Cache CR to Ready=False . You can manually delete the Cache CR and create a new cache configuration. This is the default strategy. recreate strategy The Operator deletes the cache from the Data Grid cluster and creates a new cache with the latest spec.template value from the Cache CR. Important Configure the recreate strategy only if your deployment can tolerate data loss. Prerequisites Have a valid Cache CR. Procedure Use the spec.updates.strategy field to set the Cache CR strategy. mycache.yaml spec: updates: strategy: recreate Apply changes to the Cache CR, for example: 17.4. Adding persistent cache stores You can add persistent cache stores to Data Grid service pods to save data to the persistent volume. Data Grid creates a Single File cache store, .dat file, in the /opt/infinispan/server/data directory. Procedure Add the <file-store/> element to the persistence configuration in your Data Grid cache, as in the following example: <distributed-cache name="persistent-cache" mode="SYNC"> <encoding media-type="application/x-protostream"/> <persistence> <file-store/> </persistence> </distributed-cache> | [
"apply -f mycache.yaml cache.infinispan.org/mycachedefinition created",
"apiVersion: infinispan.org/v2alpha1 kind: Cache metadata: name: mycachedefinition spec: clusterName: infinispan name: myXMLcache template: <distributed-cache mode=\"SYNC\" statistics=\"true\"><encoding media-type=\"application/x-protostream\"/><persistence><file-store/></persistence></distributed-cache>",
"apiVersion: infinispan.org/v2alpha1 kind: Cache metadata: name: mycachedefinition spec: clusterName: infinispan name: myYAMLcache template: |- distributedCache: mode: \"SYNC\" owners: \"2\" statistics: \"true\" encoding: mediaType: \"application/x-protostream\" persistence: fileStore: ~",
"spec: updates: strategy: recreate",
"apply -f mycache.yaml",
"<distributed-cache name=\"persistent-cache\" mode=\"SYNC\"> <encoding media-type=\"application/x-protostream\"/> <persistence> <file-store/> </persistence> </distributed-cache>"
] | https://docs.redhat.com/en/documentation/red_hat_data_grid/8.5/html/data_grid_operator_guide/creating-caches |
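As a minimal sketch that is not part of the original chapter, the following Cache CR combines the pieces shown above: the recreate update strategy from section 17.3 and a persistent file store from section 17.4, reusing the infinispan cluster name from the earlier examples. The field names follow the examples in this chapter; verify them against your Data Grid Operator version before use.

```yaml
# mycache-persistent.yaml - illustrative sketch combining sections 17.2, 17.3, and 17.4
apiVersion: infinispan.org/v2alpha1
kind: Cache
metadata:
  name: mypersistentcache
spec:
  clusterName: infinispan
  name: persistent-cache
  updates:
    strategy: recreate   # cache is deleted and recreated on failed runtime updates; use only if data loss is acceptable
  template: |-
    distributedCache:
      mode: "SYNC"
      encoding:
        mediaType: "application/x-protostream"
      persistence:
        fileStore: ~
```

Applying the file with the same apply -f command shown in the earlier example should report the Cache resource as created.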
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_amq/2021.q3/html/amq_clients_2.10_release_notes/making-open-source-more-inclusive |
Chapter 1. Support policy for Eclipse Temurin | Chapter 1. Support policy for Eclipse Temurin Red Hat will support select major versions of Eclipse Temurin in its products. For consistency, these are the same versions that Oracle designates as long-term support (LTS) for the Oracle JDK. A major version of Eclipse Temurin will be supported for a minimum of six years from the time that version is first introduced. For more information, see the Eclipse Temurin Life Cycle and Support Policy . Note RHEL 6 reached the end of life in November 2020. Because of this, Eclipse Temurin does not support RHEL 6 as a supported configuration. | null | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/8/html/eclipse_temurin_8.0.342_and_8.0.345_release_notes/openjdk8-temurin-support-policy |
34.4. Setting up a Kerberos-aware NFS Client | 34.4. Setting up a Kerberos-aware NFS Client If the NFS client supports only weak cryptography, such as a Red Hat Enterprise Linux 5 client, set the following entry in the /etc/krb5.conf file of the server to allow weak cryptography: If the NFS client is not enrolled as a client in the IdM domain, set up the required host entries, as described in Section 12.3, "Adding Host Entries" . Install the nfs-utils package: Obtain a Kerberos ticket before running IdM tools. Run the ipa-client-automount utility to configure the NFS settings: By default, this enables secure NFS in the /etc/sysconfig/nfs file and sets the IdM DNS domain in the Domain parameter in the /etc/idmapd.conf file. Configure the services to start automatically when the system boots: Add the following entries to the /etc/fstab file to mount the NFS shares from the nfs-server.example.com host when the system boots: These settings configure Red Hat Enterprise Linux to mount the /export share to the /mnt directory and the /home share to the /home directory. Create the mount points if they do not exist: Mount the NFS shares: The command uses the information from the /etc/fstab entry. Configure SSSD to renew Kerberos tickets: Set the following parameters in the IdM domain section of the /etc/sssd/sssd.conf file to configure SSSD to automatically renew tickets: Restart SSSD: Important The pam_oddjob_mkhomedir module does not support automatic creation of home directories on an NFS share. Therefore, you must manually create the home directories on the server in the root of the share that contains the home directories. | [
"allow_weak_crypto = true",
"yum install nfs-utils",
"kinit admin",
"[root@nfs-client ~] ipa-client-automount Searching for IPA server IPA server: DNS discovery Location: default Continue to configure the system with these values? [no]: yes Configured /etc/sysconfig/nfs Configured /etc/idmapd.conf Started rpcidmapd Started rpcgssd Restarting sssd, waiting for it to become available. Started autofs",
"systemctl enable rpc-gssd.service systemctl enable rpcbind.service",
"nfs-server.example.com:/export /mnt nfs4 sec=krb5p,rw nfs-server.example.com:/home /home nfs4 sec=krb5p,rw",
"mkdir -p /mnt/ mkdir -p /home",
"mount /mnt/ mount /home",
"[domain/EXAMPLE.COM] krb5_renewable_lifetime = 50d krb5_renew_interval = 3600",
"systemctl restart sssd"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/linux_domain_identity_authentication_and_policy_guide/krb-nfs-client |
Chapter 1. About two-factor authentication (2FA) for Red Hat user accounts | Chapter 1. About two-factor authentication (2FA) for Red Hat user accounts Red Hat allows users to enable two-factor authentication as an additional layer of security for logging in to their Red Hat user accounts. When two-factor authentication is enabled, you use your password plus a one-time code to log in to your account. The one-time code is the second authentication factor. The two-factor authentication feature is available to customers in one of two ways: Organizational two-factor authentication. When your organization enables two-factor authentication, all users who belong to a specific organization account will be required to use a second factor each time they authenticate. Users will be prompted to enable two-factor authentication upon the first log in attempt after the organization account is enrolled. Individual opt-in two-factor authentication. Individual users can enable or disable two-factor authentication for their Red Hat account. When organizational two-factor authentication is turned on, individual users cannot disable it. Note The current implementation of two-factor authentication only applies to applications using a browser-based authentication flow. It does not apply to command line authentication flows. 1.1. Organizational two-factor authentication The Organization Administrator for an account can enable organization-wide two-factor authentication. When enabled, all users on that account must use two-factor authentication for authentication when they log in. See Section 2.1, "Configuring organization-wide authentication factors" . 1.2. Two-factor authentication and token support Token support for two-factor authentication is limited to smartphones or other devices that can install either of the following applications from Apple App Store or Google Play . Google Authenticator FreeOTP Authenticator Google Authenticator and FreeOTP Authenticator are the only supported token generators. Hardware tokens, SMS (text-message) tokens, and other apps are not supported. | null | https://docs.redhat.com/en/documentation/red_hat_customer_portal/1/html/using_two-factor_authentication/con-ciam-2fa-about_two-factor-authentication |
Chapter 8. Important links | Chapter 8. Important links Red Hat AMQ 7 Supported Configurations Red Hat AMQ 7 Component Details Revised on 2021-04-23 11:07:11 UTC | null | https://docs.redhat.com/en/documentation/red_hat_amq/2021.q2/html/release_notes_for_amq_streams_1.7_on_openshift/important-links-str |
Chapter 3. Creating build inputs | Chapter 3. Creating build inputs Use the following sections for an overview of build inputs, instructions on how to use inputs to provide source content for builds to operate on, and how to use build environments and create secrets. 3.1. Build inputs A build input provides source content for builds to operate on. You can use the following build inputs to provide sources in OpenShift Dedicated, listed in order of precedence: Inline Dockerfile definitions Content extracted from existing images Git repositories Binary (Local) inputs Input secrets External artifacts You can combine multiple inputs in a single build. However, as the inline Dockerfile takes precedence, it can overwrite any other file named Dockerfile provided by another input. Binary (local) input and Git repositories are mutually exclusive inputs. You can use input secrets when you do not want certain resources or credentials used during a build to be available in the final application image produced by the build, or want to consume a value that is defined in a secret resource. External artifacts can be used to pull in additional files that are not available as one of the other build input types. When you run a build: A working directory is constructed and all input content is placed in the working directory. For example, the input Git repository is cloned into the working directory, and files specified from input images are copied into the working directory using the target path. The build process changes directories into the contextDir , if one is defined. The inline Dockerfile, if any, is written to the current directory. The content from the current directory is provided to the build process for reference by the Dockerfile, custom builder logic, or assemble script. This means any input content that resides outside the contextDir is ignored by the build. The following example of a source definition includes multiple input types and an explanation of how they are combined. For more details on how each input type is defined, see the specific sections for each input type. source: git: uri: https://github.com/openshift/ruby-hello-world.git 1 ref: "master" images: - from: kind: ImageStreamTag name: myinputimage:latest namespace: mynamespace paths: - destinationDir: app/dir/injected/dir 2 sourcePath: /usr/lib/somefile.jar contextDir: "app/dir" 3 dockerfile: "FROM centos:7\nRUN yum install -y httpd" 4 1 The repository to be cloned into the working directory for the build. 2 /usr/lib/somefile.jar from myinputimage is stored in <workingdir>/app/dir/injected/dir . 3 The working directory for the build becomes <original_workingdir>/app/dir . 4 A Dockerfile with this content is created in <original_workingdir>/app/dir , overwriting any existing file with that name. 3.2. Dockerfile source When you supply a dockerfile value, the content of this field is written to disk as a file named dockerfile . This is done after other input sources are processed, so if the input source repository contains a Dockerfile in the root directory, it is overwritten with this content. The source definition is part of the spec section in the BuildConfig : source: dockerfile: "FROM centos:7\nRUN yum install -y httpd" 1 1 The dockerfile field contains an inline Dockerfile that is built. Additional resources The typical use for this field is to provide a Dockerfile to a docker strategy build. 3.3. Image source You can add additional files to the build process with images. 
Input images are referenced in the same way the From and To image targets are defined. This means both container images and image stream tags can be referenced. In conjunction with the image, you must provide one or more path pairs to indicate the path of the files or directories to copy the image and the destination to place them in the build context. The source path can be any absolute path within the image specified. The destination must be a relative directory path. At build time, the image is loaded and the indicated files and directories are copied into the context directory of the build process. This is the same directory into which the source repository content is cloned. If the source path ends in /. then the content of the directory is copied, but the directory itself is not created at the destination. Image inputs are specified in the source definition of the BuildConfig : source: git: uri: https://github.com/openshift/ruby-hello-world.git ref: "master" images: 1 - from: 2 kind: ImageStreamTag name: myinputimage:latest namespace: mynamespace paths: 3 - destinationDir: injected/dir 4 sourcePath: /usr/lib/somefile.jar 5 - from: kind: ImageStreamTag name: myotherinputimage:latest namespace: myothernamespace pullSecret: mysecret 6 paths: - destinationDir: injected/dir sourcePath: /usr/lib/somefile.jar 1 An array of one or more input images and files. 2 A reference to the image containing the files to be copied. 3 An array of source/destination paths. 4 The directory relative to the build root where the build process can access the file. 5 The location of the file to be copied out of the referenced image. 6 An optional secret provided if credentials are needed to access the input image. Note If your cluster uses an ImageDigestMirrorSet , ImageTagMirrorSet , or ImageContentSourcePolicy object to configure repository mirroring, you can use only global pull secrets for mirrored registries. You cannot add a pull secret to a project. Images that require pull secrets When using an input image that requires a pull secret, you can link the pull secret to the service account used by the build. By default, builds use the builder service account. The pull secret is automatically added to the build if the secret contains a credential that matches the repository hosting the input image. To link a pull secret to the service account used by the build, run: USD oc secrets link builder dockerhub Note This feature is not supported for builds using the custom strategy. Images on mirrored registries that require pull secrets When using an input image from a mirrored registry, if you get a build error: failed to pull image message, you can resolve the error by using either of the following methods: Create an input secret that contains the authentication credentials for the builder image's repository and all known mirrors. In this case, create a pull secret for credentials to the image registry and its mirrors. Use the input secret as the pull secret on the BuildConfig object. 3.4. Git source When specified, source code is fetched from the supplied location. If you supply an inline Dockerfile, it overwrites the Dockerfile in the contextDir of the Git repository. The source definition is part of the spec section in the BuildConfig : source: git: 1 uri: "https://github.com/openshift/ruby-hello-world" ref: "master" contextDir: "app/dir" 2 dockerfile: "FROM openshift/ruby-22-centos7\nUSER example" 3 1 The git field contains the Uniform Resource Identifier (URI) to the remote Git repository of the source code. 
You must specify the value of the ref field to check out a specific Git reference. A valid ref can be a SHA1 tag or a branch name. The default value of the ref field is master . 2 The contextDir field allows you to override the default location inside the source code repository where the build looks for the application source code. If your application exists inside a sub-directory, you can override the default location (the root folder) using this field. 3 If the optional dockerfile field is provided, it should be a string containing a Dockerfile that overwrites any Dockerfile that may exist in the source repository. If the ref field denotes a pull request, the system uses a git fetch operation and then checkout FETCH_HEAD . When no ref value is provided, OpenShift Dedicated performs a shallow clone ( --depth=1 ). In this case, only the files associated with the most recent commit on the default branch (typically master ) are downloaded. This results in repositories downloading faster, but without the full commit history. To perform a full git clone of the default branch of a specified repository, set ref to the name of the default branch (for example main ). Warning Git clone operations that go through a proxy that is performing man in the middle (MITM) TLS hijacking or reencrypting of the proxied connection do not work. 3.4.1. Using a proxy If your Git repository can only be accessed using a proxy, you can define the proxy to use in the source section of the build configuration. You can configure both an HTTP and HTTPS proxy to use. Both fields are optional. Domains for which no proxying should be performed can also be specified in the NoProxy field. Note Your source URI must use the HTTP or HTTPS protocol for this to work. source: git: uri: "https://github.com/openshift/ruby-hello-world" ref: "master" httpProxy: http://proxy.example.com httpsProxy: https://proxy.example.com noProxy: somedomain.com, otherdomain.com Note For Pipeline strategy builds, given the current restrictions with the Git plugin for Jenkins, any Git operations through the Git plugin do not leverage the HTTP or HTTPS proxy defined in the BuildConfig . The Git plugin only uses the proxy configured in the Jenkins UI at the Plugin Manager panel. This proxy is then used for all git interactions within Jenkins, across all jobs. Additional resources You can find instructions on how to configure proxies through the Jenkins UI at JenkinsBehindProxy . 3.4.2. Source Clone Secrets Builder pods require access to any Git repositories defined as source for a build. Source clone secrets are used to provide the builder pod with access it would not normally have access to, such as private repositories or repositories with self-signed or untrusted SSL certificates. The following source clone secret configurations are supported: A .gitconfig file Basic authentication SSH key authentication Trusted certificate authorities Note You can also use combinations of these configurations to meet your specific needs. 3.4.2.1. Automatically adding a source clone secret to a build configuration When a BuildConfig is created, OpenShift Dedicated can automatically populate its source clone secret reference. This behavior allows the resulting builds to automatically use the credentials stored in the referenced secret to authenticate to a remote Git repository, without requiring further configuration. To use this functionality, a secret containing the Git repository credentials must exist in the namespace in which the BuildConfig is later created. 
This secret must include one or more annotations prefixed with build.openshift.io/source-secret-match-uri- . The value of each of these annotations is a Uniform Resource Identifier (URI) pattern, which is defined as follows. When a BuildConfig is created without a source clone secret reference and its Git source URI matches a URI pattern in a secret annotation, OpenShift Dedicated automatically inserts a reference to that secret in the BuildConfig . Prerequisites A URI pattern must consist of: A valid scheme: *:// , git:// , http:// , https:// or ssh:// A host: * or a valid hostname or IP address optionally preceded by *. A path: /* or / followed by any characters optionally including * characters In all of the above, a * character is interpreted as a wildcard. Important URI patterns must match Git source URIs which are conformant to RFC3986 . Do not include a username (or password) component in a URI pattern. For example, if you use ssh://[email protected]:7999/ATLASSIAN jira.git for a git repository URL, the source secret must be specified as ssh://bitbucket.atlassian.com:7999/* (and not ssh://[email protected]:7999/* ). USD oc annotate secret mysecret \ 'build.openshift.io/source-secret-match-uri-1=ssh://bitbucket.atlassian.com:7999/*' Procedure If multiple secrets match the Git URI of a particular BuildConfig , OpenShift Dedicated selects the secret with the longest match. This allows for basic overriding, as in the following example. The following fragment shows two partial source clone secrets, the first matching any server in the domain mycorp.com accessed by HTTPS, and the second overriding access to servers mydev1.mycorp.com and mydev2.mycorp.com : kind: Secret apiVersion: v1 metadata: name: matches-all-corporate-servers-https-only annotations: build.openshift.io/source-secret-match-uri-1: https://*.mycorp.com/* data: ... --- kind: Secret apiVersion: v1 metadata: name: override-for-my-dev-servers-https-only annotations: build.openshift.io/source-secret-match-uri-1: https://mydev1.mycorp.com/* build.openshift.io/source-secret-match-uri-2: https://mydev2.mycorp.com/* data: ... Add a build.openshift.io/source-secret-match-uri- annotation to a pre-existing secret using: USD oc annotate secret mysecret \ 'build.openshift.io/source-secret-match-uri-1=https://*.mycorp.com/*' 3.4.2.2. Manually adding a source clone secret Source clone secrets can be added manually to a build configuration by adding a sourceSecret field to the source section inside the BuildConfig and setting it to the name of the secret that you created. In this example, it is the basicsecret . apiVersion: "build.openshift.io/v1" kind: "BuildConfig" metadata: name: "sample-build" spec: output: to: kind: "ImageStreamTag" name: "sample-image:latest" source: git: uri: "https://github.com/user/app.git" sourceSecret: name: "basicsecret" strategy: sourceStrategy: from: kind: "ImageStreamTag" name: "python-33-centos7:latest" Procedure You can also use the oc set build-secret command to set the source clone secret on an existing build configuration. To set the source clone secret on an existing build configuration, enter the following command: USD oc set build-secret --source bc/sample-build basicsecret 3.4.2.3. Creating a secret from a .gitconfig file If the cloning of your application is dependent on a .gitconfig file, then you can create a secret that contains it. Add it to the builder service account and then your BuildConfig .
Procedure To create a secret from a .gitconfig file: USD oc create secret generic <secret_name> --from-file=<path/to/.gitconfig> Note SSL verification can be turned off if sslVerify=false is set for the http section in your .gitconfig file: [http] sslVerify=false 3.4.2.4. Creating a secret from a .gitconfig file for secured Git If your Git server is secured with two-way SSL and user name with password, you must add the certificate files to your source build and add references to the certificate files in the .gitconfig file. Prerequisites You must have Git credentials. Procedure Add the certificate files to your source build and add references to the certificate files in the .gitconfig file. Add the client.crt , cacert.crt , and client.key files to the /var/run/secrets/openshift.io/source/ folder in the application source code. In the .gitconfig file for the server, add the [http] section shown in the following example: # cat .gitconfig Example output [user] name = <name> email = <email> [http] sslVerify = false sslCert = /var/run/secrets/openshift.io/source/client.crt sslKey = /var/run/secrets/openshift.io/source/client.key sslCaInfo = /var/run/secrets/openshift.io/source/cacert.crt Create the secret: USD oc create secret generic <secret_name> \ --from-literal=username=<user_name> \ 1 --from-literal=password=<password> \ 2 --from-file=.gitconfig=.gitconfig \ --from-file=client.crt=/var/run/secrets/openshift.io/source/client.crt \ --from-file=cacert.crt=/var/run/secrets/openshift.io/source/cacert.crt \ --from-file=client.key=/var/run/secrets/openshift.io/source/client.key 1 The user's Git user name. 2 The password for this user. Important To avoid having to enter your password again, be sure to specify the source-to-image (S2I) image in your builds. However, if you cannot clone the repository, you must still specify your user name and password to promote the build. Additional resources /var/run/secrets/openshift.io/source/ folder in the application source code. 3.4.2.5. Creating a secret from source code basic authentication Basic authentication requires either a combination of --username and --password , or a token to authenticate against the software configuration management (SCM) server. Prerequisites User name and password to access the private repository. Procedure Create the secret first before using the --username and --password to access the private repository: USD oc create secret generic <secret_name> \ --from-literal=username=<user_name> \ --from-literal=password=<password> \ --type=kubernetes.io/basic-auth Create a basic authentication secret with a token: USD oc create secret generic <secret_name> \ --from-literal=password=<token> \ --type=kubernetes.io/basic-auth 3.4.2.6. Creating a secret from source code SSH key authentication SSH key based authentication requires a private SSH key. The repository keys are usually located in the USDHOME/.ssh/ directory, and are named id_dsa.pub , id_ecdsa.pub , id_ed25519.pub , or id_rsa.pub by default. Procedure Generate SSH key credentials: USD ssh-keygen -t ed25519 -C "[email protected]" Note Creating a passphrase for the SSH key prevents OpenShift Dedicated from building. When prompted for a passphrase, leave it blank. Two files are created: the public key and a corresponding private key (one of id_dsa , id_ecdsa , id_ed25519 , or id_rsa ). With both of these in place, consult your source control management (SCM) system's manual on how to upload the public key. The private key is used to access your private repository. 
Before using the SSH key to access the private repository, create the secret: USD oc create secret generic <secret_name> \ --from-file=ssh-privatekey=<path/to/ssh/private/key> \ --from-file=<path/to/known_hosts> \ 1 --type=kubernetes.io/ssh-auth 1 Optional: Adding this field enables strict server host key check. Warning Skipping the known_hosts file while creating the secret makes the build vulnerable to a potential man-in-the-middle (MITM) attack. Note Ensure that the known_hosts file includes an entry for the host of your source code. 3.4.2.7. Creating a secret from source code trusted certificate authorities The set of Transport Layer Security (TLS) certificate authorities (CA) that are trusted during a Git clone operation are built into the OpenShift Dedicated infrastructure images. If your Git server uses a self-signed certificate or one signed by an authority not trusted by the image, you can create a secret that contains the certificate or disable TLS verification. If you create a secret for the CA certificate, OpenShift Dedicated uses it to access your Git server during the Git clone operation. Using this method is significantly more secure than disabling Git SSL verification, which accepts any TLS certificate that is presented. Procedure Create a secret with a CA certificate file. If your CA uses Intermediate Certificate Authorities, combine the certificates for all CAs in a ca.crt file. Enter the following command: USD cat intermediateCA.crt intermediateCA.crt rootCA.crt > ca.crt Create the secret by entering the following command: USD oc create secret generic mycert --from-file=ca.crt=</path/to/file> 1 1 You must use the key name ca.crt . 3.4.2.8. Source secret combinations You can combine the different methods for creating source clone secrets for your specific needs. 3.4.2.8.1. Creating a SSH-based authentication secret with a .gitconfig file You can combine the different methods for creating source clone secrets for your specific needs, such as a SSH-based authentication secret with a .gitconfig file. Prerequisites SSH authentication A .gitconfig file Procedure To create a SSH-based authentication secret with a .gitconfig file, enter the following command: USD oc create secret generic <secret_name> \ --from-file=ssh-privatekey=<path/to/ssh/private/key> \ --from-file=<path/to/.gitconfig> \ --type=kubernetes.io/ssh-auth 3.4.2.8.2. Creating a secret that combines a .gitconfig file and CA certificate You can combine the different methods for creating source clone secrets for your specific needs, such as a secret that combines a .gitconfig file and certificate authority (CA) certificate. Prerequisites A .gitconfig file CA certificate Procedure To create a secret that combines a .gitconfig file and CA certificate, enter the following command: USD oc create secret generic <secret_name> \ --from-file=ca.crt=<path/to/certificate> \ --from-file=<path/to/.gitconfig> 3.4.2.8.3. Creating a basic authentication secret with a CA certificate You can combine the different methods for creating source clone secrets for your specific needs, such as a secret that combines a basic authentication and certificate authority (CA) certificate. Prerequisites Basic authentication credentials CA certificate Procedure To create a basic authentication secret with a CA certificate, enter the following command: USD oc create secret generic <secret_name> \ --from-literal=username=<user_name> \ --from-literal=password=<password> \ --from-file=ca-cert=</path/to/file> \ --type=kubernetes.io/basic-auth 3.4.2.8.4. 
Creating a basic authentication secret with a Git configuration file You can combine the different methods for creating source clone secrets for your specific needs, such as a secret that combines a basic authentication and a .gitconfig file. Prerequisites Basic authentication credentials A .gitconfig file Procedure To create a basic authentication secret with a .gitconfig file, enter the following command: USD oc create secret generic <secret_name> \ --from-literal=username=<user_name> \ --from-literal=password=<password> \ --from-file=</path/to/.gitconfig> \ --type=kubernetes.io/basic-auth 3.4.2.8.5. Creating a basic authentication secret with a .gitconfig file and CA certificate You can combine the different methods for creating source clone secrets for your specific needs, such as a secret that combines a basic authentication, .gitconfig file, and certificate authority (CA) certificate. Prerequisites Basic authentication credentials A .gitconfig file CA certificate Procedure To create a basic authentication secret with a .gitconfig file and CA certificate, enter the following command: USD oc create secret generic <secret_name> \ --from-literal=username=<user_name> \ --from-literal=password=<password> \ --from-file=</path/to/.gitconfig> \ --from-file=ca-cert=</path/to/file> \ --type=kubernetes.io/basic-auth 3.5. Binary (local) source Streaming content from a local file system to the builder is called a Binary type build. The corresponding value of BuildConfig.spec.source.type is Binary for these builds. This source type is unique in that it is leveraged solely based on your use of the oc start-build . Note Binary type builds require content to be streamed from the local file system, so automatically triggering a binary type build, like an image change trigger, is not possible. This is because the binary files cannot be provided. Similarly, you cannot launch binary type builds from the web console. To utilize binary builds, invoke oc start-build with one of these options: --from-file : The contents of the file you specify are sent as a binary stream to the builder. You can also specify a URL to a file. Then, the builder stores the data in a file with the same name at the top of the build context. --from-dir and --from-repo : The contents are archived and sent as a binary stream to the builder. Then, the builder extracts the contents of the archive within the build context directory. With --from-dir , you can also specify a URL to an archive, which is extracted. --from-archive : The archive you specify is sent to the builder, where it is extracted within the build context directory. This option behaves the same as --from-dir ; an archive is created on your host first, whenever the argument to these options is a directory. In each of the previously listed cases: If your BuildConfig already has a Binary source type defined, it is effectively ignored and replaced by what the client sends. If your BuildConfig has a Git source type defined, it is dynamically disabled, since Binary and Git are mutually exclusive, and the data in the binary stream provided to the builder takes precedence. Instead of a file name, you can pass a URL with HTTP or HTTPS schema to --from-file and --from-archive . When using --from-file with a URL, the name of the file in the builder image is determined by the Content-Disposition header sent by the web server, or the last component of the URL path if the header is not present. 
No form of authentication is supported and it is not possible to use custom TLS certificate or disable certificate validation. When using oc new-build --binary=true , the command ensures that the restrictions associated with binary builds are enforced. The resulting BuildConfig has a source type of Binary , meaning that the only valid way to run a build for this BuildConfig is to use oc start-build with one of the --from options to provide the requisite binary data. The Dockerfile and contextDir source options have special meaning with binary builds. Dockerfile can be used with any binary build source. If Dockerfile is used and the binary stream is an archive, its contents serve as a replacement Dockerfile to any Dockerfile in the archive. If Dockerfile is used with the --from-file argument, and the file argument is named Dockerfile, the value from Dockerfile replaces the value from the binary stream. In the case of the binary stream encapsulating extracted archive content, the value of the contextDir field is interpreted as a subdirectory within the archive, and, if valid, the builder changes into that subdirectory before executing the build. 3.6. Input secrets and config maps Important To prevent the contents of input secrets and config maps from appearing in build output container images, use build volumes in your Docker build and source-to-image build strategies. In some scenarios, build operations require credentials or other configuration data to access dependent resources, but it is undesirable for that information to be placed in source control. You can define input secrets and input config maps for this purpose. For example, when building a Java application with Maven, you can set up a private mirror of Maven Central or JCenter that is accessed by private keys. To download libraries from that private mirror, you have to supply the following: A settings.xml file configured with the mirror's URL and connection settings. A private key referenced in the settings file, such as ~/.ssh/id_rsa . For security reasons, you do not want to expose your credentials in the application image. This example describes a Java application, but you can use the same approach for adding SSL certificates into the /etc/ssl/certs directory, API keys or tokens, license files, and more. 3.6.1. What is a secret? The Secret object type provides a mechanism to hold sensitive information such as passwords, OpenShift Dedicated client configuration files, dockercfg files, private source repository credentials, and so on. Secrets decouple sensitive content from the pods. You can mount secrets into containers using a volume plugin or the system can use secrets to perform actions on behalf of a pod. YAML Secret Object Definition apiVersion: v1 kind: Secret metadata: name: test-secret namespace: my-namespace type: Opaque 1 data: 2 username: <username> 3 password: <password> stringData: 4 hostname: myapp.mydomain.com 5 1 Indicates the structure of the secret's key names and values. 2 The allowable format for the keys in the data field must meet the guidelines in the DNS_SUBDOMAIN value in the Kubernetes identifiers glossary. 3 The value associated with keys in the data map must be base64 encoded. 4 Entries in the stringData map are converted to base64 and the entry are then moved to the data map automatically. This field is write-only. The value is only be returned by the data field. 5 The value associated with keys in the stringData map is made up of plain text strings. 3.6.1.1. 
Properties of secrets Key properties include: Secret data can be referenced independently from its definition. Secret data volumes are backed by temporary file-storage facilities (tmpfs) and never come to rest on a node. Secret data can be shared within a namespace. 3.6.1.2. Types of Secrets The value in the type field indicates the structure of the secret's key names and values. The type can be used to enforce the presence of user names and keys in the secret object. If you do not want validation, use the opaque type, which is the default. Specify one of the following types to trigger minimal server-side validation to ensure the presence of specific key names in the secret data: kubernetes.io/service-account-token . Uses a service account token. kubernetes.io/dockercfg . Uses the .dockercfg file for required Docker credentials. kubernetes.io/dockerconfigjson . Uses the .docker/config.json file for required Docker credentials. kubernetes.io/basic-auth . Use with basic authentication. kubernetes.io/ssh-auth . Use with SSH key authentication. kubernetes.io/tls . Use with TLS certificate authorities. Specify type= Opaque if you do not want validation, which means the secret does not claim to conform to any convention for key names or values. An opaque secret, allows for unstructured key:value pairs that can contain arbitrary values. Note You can specify other arbitrary types, such as example.com/my-secret-type . These types are not enforced server-side, but indicate that the creator of the secret intended to conform to the key/value requirements of that type. 3.6.1.3. Updates to secrets When you modify the value of a secret, the value used by an already running pod does not dynamically change. To change a secret, you must delete the original pod and create a new pod, in some cases with an identical PodSpec . Updating a secret follows the same workflow as deploying a new container image. You can use the kubectl rolling-update command. The resourceVersion value in a secret is not specified when it is referenced. Therefore, if a secret is updated at the same time as pods are starting, the version of the secret that is used for the pod is not defined. Note Currently, it is not possible to check the resource version of a secret object that was used when a pod was created. It is planned that pods report this information, so that a controller could restart ones using an old resourceVersion . In the interim, do not update the data of existing secrets, but create new ones with distinct names. 3.6.2. Creating secrets You must create a secret before creating the pods that depend on that secret. When creating secrets: Create a secret object with secret data. Update the pod service account to allow the reference to the secret. Create a pod, which consumes the secret as an environment variable or as a file using a secret volume. Procedure To create a secret object from a JSON or YAML file, enter the following command: USD oc create -f <filename> For example, you can create a secret from your local .docker/config.json file: USD oc create secret generic dockerhub \ --from-file=.dockerconfigjson=<path/to/.docker/config.json> \ --type=kubernetes.io/dockerconfigjson This command generates a JSON specification of the secret named dockerhub and creates the object. YAML Opaque Secret Object Definition apiVersion: v1 kind: Secret metadata: name: mysecret type: Opaque 1 data: username: <username> password: <password> 1 Specifies an opaque secret. 
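As a small illustration that is not part of the original procedure, the same opaque secret can also be written with the stringData field described in section 3.6.1, which accepts plain-text values that the API server converts into base64-encoded entries under data; the user name and password values below are placeholders.

```yaml
# Sketch only: opaque secret expressed with stringData instead of pre-encoded data
apiVersion: v1
kind: Secret
metadata:
  name: mysecret
type: Opaque
stringData:
  username: myuser       # plain text; stored base64-encoded under data.username
  password: mypassword   # plain text; stored base64-encoded under data.password
```

Creating it with oc create -f, as shown earlier in this section, produces the same kind of opaque secret as the data-based definition above.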
Docker Configuration JSON File Secret Object Definition apiVersion: v1 kind: Secret metadata: name: aregistrykey namespace: myapps type: kubernetes.io/dockerconfigjson 1 data: .dockerconfigjson:bm5ubm5ubm5ubm5ubm5ubm5ubm5ubmdnZ2dnZ2dnZ2dnZ2dnZ2dnZ2cgYXV0aCBrZXlzCg== 2 1 Specifies that the secret is using a docker configuration JSON file. 2 The output of a base64-encoded docker configuration JSON file. 3.6.3. Using secrets After creating secrets, you can create a pod to reference your secret, get logs, and delete the pod. Procedure Create the pod to reference your secret by entering the following command: USD oc create -f <your_yaml_file>.yaml Get the logs by entering the following command: USD oc logs secret-example-pod Delete the pod by entering the following command: USD oc delete pod secret-example-pod Additional resources Example YAML files with secret data: YAML file of a secret that will create four files apiVersion: v1 kind: Secret metadata: name: test-secret data: username: <username> 1 password: <password> 2 stringData: hostname: myapp.mydomain.com 3 secret.properties: |- 4 property1=valueA property2=valueB 1 File contains decoded values. 2 File contains decoded values. 3 File contains the provided string. 4 File contains the provided data. YAML file of a pod populating files in a volume with secret data apiVersion: v1 kind: Pod metadata: name: secret-example-pod spec: containers: - name: secret-test-container image: busybox command: [ "/bin/sh", "-c", "cat /etc/secret-volume/*" ] volumeMounts: # name must match the volume name below - name: secret-volume mountPath: /etc/secret-volume readOnly: true volumes: - name: secret-volume secret: secretName: test-secret restartPolicy: Never YAML file of a pod populating environment variables with secret data apiVersion: v1 kind: Pod metadata: name: secret-example-pod spec: containers: - name: secret-test-container image: busybox command: [ "/bin/sh", "-c", "export" ] env: - name: TEST_SECRET_USERNAME_ENV_VAR valueFrom: secretKeyRef: name: test-secret key: username restartPolicy: Never YAML file of a BuildConfig object that populates environment variables with secret data apiVersion: build.openshift.io/v1 kind: BuildConfig metadata: name: secret-example-bc spec: strategy: sourceStrategy: env: - name: TEST_SECRET_USERNAME_ENV_VAR valueFrom: secretKeyRef: name: test-secret key: username 3.6.4. Adding input secrets and config maps To provide credentials and other configuration data to a build without placing them in source control, you can define input secrets and input config maps. In some scenarios, build operations require credentials or other configuration data to access dependent resources. To make that information available without placing it in source control, you can define input secrets and input config maps. Procedure To add an input secret, config maps, or both to an existing BuildConfig object: If the ConfigMap object does not exist, create it by entering the following command: USD oc create configmap settings-mvn \ --from-file=settings.xml=<path/to/settings.xml> This creates a new config map named settings-mvn , which contains the plain text content of the settings.xml file. Tip You can alternatively apply the following YAML to create the config map: apiVersion: core/v1 kind: ConfigMap metadata: name: settings-mvn data: settings.xml: | <settings> ... 
# Insert maven settings here </settings> If the Secret object does not exist, create it by entering the following command: USD oc create secret generic secret-mvn \ --from-file=ssh-privatekey=<path/to/.ssh/id_rsa> \ --type=kubernetes.io/ssh-auth This creates a new secret named secret-mvn , which contains the base64 encoded content of the id_rsa private key. Tip You can alternatively apply the following YAML to create the input secret: apiVersion: core/v1 kind: Secret metadata: name: secret-mvn type: kubernetes.io/ssh-auth data: ssh-privatekey: | # Insert ssh private key, base64 encoded Add the config map and secret to the source section in the existing BuildConfig object: source: git: uri: https://github.com/wildfly/quickstart.git contextDir: helloworld configMaps: - configMap: name: settings-mvn secrets: - secret: name: secret-mvn To include the secret and config map in a new BuildConfig object, enter the following command: USD oc new-build \ openshift/wildfly-101-centos7~https://github.com/wildfly/quickstart.git \ --context-dir helloworld --build-secret "secret-mvn" \ --build-config-map "settings-mvn" During the build, the build process copies the settings.xml and id_rsa files into the directory where the source code is located. In OpenShift Dedicated S2I builder images, this is the image working directory, which is set using the WORKDIR instruction in the Dockerfile . If you want to specify another directory, add a destinationDir to the definition: source: git: uri: https://github.com/wildfly/quickstart.git contextDir: helloworld configMaps: - configMap: name: settings-mvn destinationDir: ".m2" secrets: - secret: name: secret-mvn destinationDir: ".ssh" You can also specify the destination directory when creating a new BuildConfig object by entering the following command: USD oc new-build \ openshift/wildfly-101-centos7~https://github.com/wildfly/quickstart.git \ --context-dir helloworld --build-secret "secret-mvn:.ssh" \ --build-config-map "settings-mvn:.m2" In both cases, the settings.xml file is added to the ./.m2 directory of the build environment, and the id_rsa key is added to the ./.ssh directory. 3.6.5. Source-to-image strategy When using a Source strategy, all defined input secrets are copied to their respective destinationDir . If you left destinationDir empty, then the secrets are placed in the working directory of the builder image. The same rule is used when a destinationDir is a relative path. The secrets are placed in the paths that are relative to the working directory of the image. The final directory in the destinationDir path is created if it does not exist in the builder image. All preceding directories in the destinationDir must exist, or an error will occur. Note Input secrets are added as world-writable, have 0666 permissions, and are truncated to size zero after executing the assemble script. This means that the secret files exist in the resulting image, but they are empty for security reasons. Input config maps are not truncated after the assemble script completes. 3.7. External artifacts It is not recommended to store binary files in a source repository. Therefore, you must define a build which pulls additional files, such as Java .jar dependencies, during the build process. How this is done depends on the build strategy you are using. 
For a Source build strategy, you must put appropriate shell commands into the assemble script: .s2i/bin/assemble File #!/bin/sh APP_VERSION=1.0 wget http://repository.example.com/app/app-USDAPP_VERSION.jar -O app.jar .s2i/bin/run File #!/bin/sh exec java -jar app.jar For a Docker build strategy, you must modify the Dockerfile and invoke shell commands with the RUN instruction : Excerpt of Dockerfile FROM jboss/base-jdk:8 ENV APP_VERSION 1.0 RUN wget http://repository.example.com/app/app-USDAPP_VERSION.jar -O app.jar EXPOSE 8080 CMD [ "java", "-jar", "app.jar" ] In practice, you may want to use an environment variable for the file location so that the specific file to be downloaded can be customized using an environment variable defined on the BuildConfig , rather than updating the Dockerfile or assemble script. You can choose between different methods of defining environment variables: Using the .s2i/environment file (only for a Source build strategy) Setting the variables in the BuildConfig object Providing the variables explicitly using the oc start-build --env command (only for builds that are triggered manually) 3.8. Using docker credentials for private registries You can supply builds with a . docker/config.json file with valid credentials for private container registries. This allows you to push the output image into a private container image registry or pull a builder image from the private container image registry that requires authentication. You can supply credentials for multiple repositories within the same registry, each with credentials specific to that registry path. Note For the OpenShift Dedicated container image registry, this is not required because secrets are generated automatically for you by OpenShift Dedicated. The .docker/config.json file is found in your home directory by default and has the following format: auths: index.docker.io/v1/: 1 auth: "YWRfbGzhcGU6R2labnRib21ifTE=" 2 email: "[email protected]" 3 docker.io/my-namespace/my-user/my-image: 4 auth: "GzhYWRGU6R2fbclabnRgbkSp="" email: "[email protected]" docker.io/my-namespace: 5 auth: "GzhYWRGU6R2deesfrRgbkSp="" email: "[email protected]" 1 URL of the registry. 2 Encrypted password. 3 Email address for the login. 4 URL and credentials for a specific image in a namespace. 5 URL and credentials for a registry namespace. You can define multiple container image registries or define multiple repositories in the same registry. Alternatively, you can also add authentication entries to this file by running the docker login command. The file will be created if it does not exist. Kubernetes provides Secret objects, which can be used to store configuration and passwords. Prerequisites You must have a .docker/config.json file. Procedure Create the secret from your local .docker/config.json file by entering the following command: USD oc create secret generic dockerhub \ --from-file=.dockerconfigjson=<path/to/.docker/config.json> \ --type=kubernetes.io/dockerconfigjson This generates a JSON specification of the secret named dockerhub and creates the object. 
Add a pushSecret field into the output section of the BuildConfig and set it to the name of the secret that you created, which in the example is dockerhub : spec: output: to: kind: "DockerImage" name: "private.registry.com/org/private-image:latest" pushSecret: name: "dockerhub" You can use the oc set build-secret command to set the push secret on the build configuration: USD oc set build-secret --push bc/sample-build dockerhub You can also link the push secret to the service account used by the build instead of specifying the pushSecret field. By default, builds use the builder service account. The push secret is automatically added to the build if the secret contains a credential that matches the repository hosting the build's output image. USD oc secrets link builder dockerhub Pull the builder container image from a private container image registry by specifying the pullSecret field, which is part of the build strategy definition: strategy: sourceStrategy: from: kind: "DockerImage" name: "docker.io/user/private_repository" pullSecret: name: "dockerhub" You can use the oc set build-secret command to set the pull secret on the build configuration: USD oc set build-secret --pull bc/sample-build dockerhub Note This example uses pullSecret in a Source build, but it is also applicable in Docker and Custom builds. You can also link the pull secret to the service account used by the build instead of specifying the pullSecret field. By default, builds use the builder service account. The pull secret is automatically added to the build if the secret contains a credential that matches the repository hosting the build's input image. To link the pull secret to the service account used by the build instead of specifying the pullSecret field, enter the following command: USD oc secrets link builder dockerhub Note You must specify a from image in the BuildConfig spec to take advantage of this feature. Docker strategy builds generated by oc new-build or oc new-app may not do this in some situations. 3.9. Build environments As with pod environment variables, build environment variables can be defined in terms of references to other resources or variables using the Downward API. There are some exceptions, which are noted. You can also manage environment variables defined in the BuildConfig with the oc set env command. Note Referencing container resources using valueFrom in build environment variables is not supported as the references are resolved before the container is created. 3.9.1. Using build fields as environment variables You can inject information about the build object by setting the fieldPath environment variable source to the JsonPath of the field from which you are interested in obtaining the value. Note Jenkins Pipeline strategy does not support valueFrom syntax for environment variables. Procedure Set the fieldPath environment variable source to the JsonPath of the field from which you are interested in obtaining the value: env: - name: FIELDREF_ENV valueFrom: fieldRef: fieldPath: metadata.name 3.9.2. Using secrets as environment variables You can make key values from secrets available as environment variables using the valueFrom syntax. Important This method shows the secrets as plain text in the output of the build pod console. To avoid this, use input secrets and config maps instead. 
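As an aside on the oc set env command mentioned above, plain (non-secret) environment variables on a BuildConfig can be added and inspected directly from the command line. The build configuration name and variable below are illustrative only; the procedure that follows shows the valueFrom syntax for consuming a secret instead:
oc set env bc/sample-build APP_VERSION=1.0
oc set env bc/sample-build --list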
Procedure To use a secret as an environment variable, set the valueFrom syntax: apiVersion: build.openshift.io/v1 kind: BuildConfig metadata: name: secret-example-bc spec: strategy: sourceStrategy: env: - name: MYVAL valueFrom: secretKeyRef: key: myval name: mysecret Additional resources Input secrets and config maps 3.10. Service serving certificate secrets Service serving certificate secrets are intended to support complex middleware applications that need out-of-the-box certificates. It has the same settings as the server certificates generated by the administrator tooling for nodes and masters. Procedure To secure communication to your service, have the cluster generate a signed serving certificate/key pair into a secret in your namespace. Set the service.beta.openshift.io/serving-cert-secret-name annotation on your service with the value set to the name you want to use for your secret. Then, your PodSpec can mount that secret. When it is available, your pod runs. The certificate is good for the internal service DNS name, <service.name>.<service.namespace>.svc . The certificate and key are in PEM format, stored in tls.crt and tls.key respectively. The certificate/key pair is automatically replaced when it gets close to expiration. View the expiration date in the service.beta.openshift.io/expiry annotation on the secret, which is in RFC3339 format. Note In most cases, the service DNS name <service.name>.<service.namespace>.svc is not externally routable. The primary use of <service.name>.<service.namespace>.svc is for intracluster or intraservice communication, and with re-encrypt routes. Other pods can trust cluster-created certificates, which are only signed for internal DNS names, by using the certificate authority (CA) bundle in the /var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt file that is automatically mounted in their pod. The signature algorithm for this feature is x509.SHA256WithRSA . To manually rotate, delete the generated secret. A new certificate is created. 3.11. Secrets restrictions To use a secret, a pod needs to reference the secret. A secret can be used with a pod in three ways: To populate environment variables for containers. As files in a volume mounted on one or more of its containers. By kubelet when pulling images for the pod. Volume type secrets write data into the container as a file using the volume mechanism. imagePullSecrets use service accounts for the automatic injection of the secret into all pods in a namespaces. When a template contains a secret definition, the only way for the template to use the provided secret is to ensure that the secret volume sources are validated and that the specified object reference actually points to an object of type Secret . Therefore, a secret needs to be created before any pods that depend on it. The most effective way to ensure this is to have it get injected automatically through the use of a service account. Secret API objects reside in a namespace. They can only be referenced by pods in that same namespace. Individual secrets are limited to 1MB in size. This is to discourage the creation of large secrets that would exhaust apiserver and kubelet memory. However, creation of a number of smaller secrets could also exhaust memory. | [
"source: git: uri: https://github.com/openshift/ruby-hello-world.git 1 ref: \"master\" images: - from: kind: ImageStreamTag name: myinputimage:latest namespace: mynamespace paths: - destinationDir: app/dir/injected/dir 2 sourcePath: /usr/lib/somefile.jar contextDir: \"app/dir\" 3 dockerfile: \"FROM centos:7\\nRUN yum install -y httpd\" 4",
"source: dockerfile: \"FROM centos:7\\nRUN yum install -y httpd\" 1",
"source: git: uri: https://github.com/openshift/ruby-hello-world.git ref: \"master\" images: 1 - from: 2 kind: ImageStreamTag name: myinputimage:latest namespace: mynamespace paths: 3 - destinationDir: injected/dir 4 sourcePath: /usr/lib/somefile.jar 5 - from: kind: ImageStreamTag name: myotherinputimage:latest namespace: myothernamespace pullSecret: mysecret 6 paths: - destinationDir: injected/dir sourcePath: /usr/lib/somefile.jar",
"oc secrets link builder dockerhub",
"source: git: 1 uri: \"https://github.com/openshift/ruby-hello-world\" ref: \"master\" contextDir: \"app/dir\" 2 dockerfile: \"FROM openshift/ruby-22-centos7\\nUSER example\" 3",
"source: git: uri: \"https://github.com/openshift/ruby-hello-world\" ref: \"master\" httpProxy: http://proxy.example.com httpsProxy: https://proxy.example.com noProxy: somedomain.com, otherdomain.com",
"oc annotate secret mysecret 'build.openshift.io/source-secret-match-uri-1=ssh://bitbucket.atlassian.com:7999/*'",
"kind: Secret apiVersion: v1 metadata: name: matches-all-corporate-servers-https-only annotations: build.openshift.io/source-secret-match-uri-1: https://*.mycorp.com/* data: --- kind: Secret apiVersion: v1 metadata: name: override-for-my-dev-servers-https-only annotations: build.openshift.io/source-secret-match-uri-1: https://mydev1.mycorp.com/* build.openshift.io/source-secret-match-uri-2: https://mydev2.mycorp.com/* data:",
"oc annotate secret mysecret 'build.openshift.io/source-secret-match-uri-1=https://*.mycorp.com/*'",
"apiVersion: \"build.openshift.io/v1\" kind: \"BuildConfig\" metadata: name: \"sample-build\" spec: output: to: kind: \"ImageStreamTag\" name: \"sample-image:latest\" source: git: uri: \"https://github.com/user/app.git\" sourceSecret: name: \"basicsecret\" strategy: sourceStrategy: from: kind: \"ImageStreamTag\" name: \"python-33-centos7:latest\"",
"oc set build-secret --source bc/sample-build basicsecret",
"oc create secret generic <secret_name> --from-file=<path/to/.gitconfig>",
"[http] sslVerify=false",
"cat .gitconfig",
"[user] name = <name> email = <email> [http] sslVerify = false sslCert = /var/run/secrets/openshift.io/source/client.crt sslKey = /var/run/secrets/openshift.io/source/client.key sslCaInfo = /var/run/secrets/openshift.io/source/cacert.crt",
"oc create secret generic <secret_name> --from-literal=username=<user_name> \\ 1 --from-literal=password=<password> \\ 2 --from-file=.gitconfig=.gitconfig --from-file=client.crt=/var/run/secrets/openshift.io/source/client.crt --from-file=cacert.crt=/var/run/secrets/openshift.io/source/cacert.crt --from-file=client.key=/var/run/secrets/openshift.io/source/client.key",
"oc create secret generic <secret_name> --from-literal=username=<user_name> --from-literal=password=<password> --type=kubernetes.io/basic-auth",
"oc create secret generic <secret_name> --from-literal=password=<token> --type=kubernetes.io/basic-auth",
"ssh-keygen -t ed25519 -C \"[email protected]\"",
"oc create secret generic <secret_name> --from-file=ssh-privatekey=<path/to/ssh/private/key> --from-file=<path/to/known_hosts> \\ 1 --type=kubernetes.io/ssh-auth",
"cat intermediateCA.crt intermediateCA.crt rootCA.crt > ca.crt",
"oc create secret generic mycert --from-file=ca.crt=</path/to/file> 1",
"oc create secret generic <secret_name> --from-file=ssh-privatekey=<path/to/ssh/private/key> --from-file=<path/to/.gitconfig> --type=kubernetes.io/ssh-auth",
"oc create secret generic <secret_name> --from-file=ca.crt=<path/to/certificate> --from-file=<path/to/.gitconfig>",
"oc create secret generic <secret_name> --from-literal=username=<user_name> --from-literal=password=<password> --from-file=ca-cert=</path/to/file> --type=kubernetes.io/basic-auth",
"oc create secret generic <secret_name> --from-literal=username=<user_name> --from-literal=password=<password> --from-file=</path/to/.gitconfig> --type=kubernetes.io/basic-auth",
"oc create secret generic <secret_name> --from-literal=username=<user_name> --from-literal=password=<password> --from-file=</path/to/.gitconfig> --from-file=ca-cert=</path/to/file> --type=kubernetes.io/basic-auth",
"apiVersion: v1 kind: Secret metadata: name: test-secret namespace: my-namespace type: Opaque 1 data: 2 username: <username> 3 password: <password> stringData: 4 hostname: myapp.mydomain.com 5",
"oc create -f <filename>",
"oc create secret generic dockerhub --from-file=.dockerconfigjson=<path/to/.docker/config.json> --type=kubernetes.io/dockerconfigjson",
"apiVersion: v1 kind: Secret metadata: name: mysecret type: Opaque 1 data: username: <username> password: <password>",
"apiVersion: v1 kind: Secret metadata: name: aregistrykey namespace: myapps type: kubernetes.io/dockerconfigjson 1 data: .dockerconfigjson:bm5ubm5ubm5ubm5ubm5ubm5ubm5ubmdnZ2dnZ2dnZ2dnZ2dnZ2dnZ2cgYXV0aCBrZXlzCg== 2",
"oc create -f <your_yaml_file>.yaml",
"oc logs secret-example-pod",
"oc delete pod secret-example-pod",
"apiVersion: v1 kind: Secret metadata: name: test-secret data: username: <username> 1 password: <password> 2 stringData: hostname: myapp.mydomain.com 3 secret.properties: |- 4 property1=valueA property2=valueB",
"apiVersion: v1 kind: Pod metadata: name: secret-example-pod spec: containers: - name: secret-test-container image: busybox command: [ \"/bin/sh\", \"-c\", \"cat /etc/secret-volume/*\" ] volumeMounts: # name must match the volume name below - name: secret-volume mountPath: /etc/secret-volume readOnly: true volumes: - name: secret-volume secret: secretName: test-secret restartPolicy: Never",
"apiVersion: v1 kind: Pod metadata: name: secret-example-pod spec: containers: - name: secret-test-container image: busybox command: [ \"/bin/sh\", \"-c\", \"export\" ] env: - name: TEST_SECRET_USERNAME_ENV_VAR valueFrom: secretKeyRef: name: test-secret key: username restartPolicy: Never",
"apiVersion: build.openshift.io/v1 kind: BuildConfig metadata: name: secret-example-bc spec: strategy: sourceStrategy: env: - name: TEST_SECRET_USERNAME_ENV_VAR valueFrom: secretKeyRef: name: test-secret key: username",
"oc create configmap settings-mvn --from-file=settings.xml=<path/to/settings.xml>",
"apiVersion: core/v1 kind: ConfigMap metadata: name: settings-mvn data: settings.xml: | <settings> ... # Insert maven settings here </settings>",
"oc create secret generic secret-mvn --from-file=ssh-privatekey=<path/to/.ssh/id_rsa> --type=kubernetes.io/ssh-auth",
"apiVersion: core/v1 kind: Secret metadata: name: secret-mvn type: kubernetes.io/ssh-auth data: ssh-privatekey: | # Insert ssh private key, base64 encoded",
"source: git: uri: https://github.com/wildfly/quickstart.git contextDir: helloworld configMaps: - configMap: name: settings-mvn secrets: - secret: name: secret-mvn",
"oc new-build openshift/wildfly-101-centos7~https://github.com/wildfly/quickstart.git --context-dir helloworld --build-secret \"secret-mvn\" --build-config-map \"settings-mvn\"",
"source: git: uri: https://github.com/wildfly/quickstart.git contextDir: helloworld configMaps: - configMap: name: settings-mvn destinationDir: \".m2\" secrets: - secret: name: secret-mvn destinationDir: \".ssh\"",
"oc new-build openshift/wildfly-101-centos7~https://github.com/wildfly/quickstart.git --context-dir helloworld --build-secret \"secret-mvn:.ssh\" --build-config-map \"settings-mvn:.m2\"",
"#!/bin/sh APP_VERSION=1.0 wget http://repository.example.com/app/app-USDAPP_VERSION.jar -O app.jar",
"#!/bin/sh exec java -jar app.jar",
"FROM jboss/base-jdk:8 ENV APP_VERSION 1.0 RUN wget http://repository.example.com/app/app-USDAPP_VERSION.jar -O app.jar EXPOSE 8080 CMD [ \"java\", \"-jar\", \"app.jar\" ]",
"auths: index.docker.io/v1/: 1 auth: \"YWRfbGzhcGU6R2labnRib21ifTE=\" 2 email: \"[email protected]\" 3 docker.io/my-namespace/my-user/my-image: 4 auth: \"GzhYWRGU6R2fbclabnRgbkSp=\"\" email: \"[email protected]\" docker.io/my-namespace: 5 auth: \"GzhYWRGU6R2deesfrRgbkSp=\"\" email: \"[email protected]\"",
"oc create secret generic dockerhub --from-file=.dockerconfigjson=<path/to/.docker/config.json> --type=kubernetes.io/dockerconfigjson",
"spec: output: to: kind: \"DockerImage\" name: \"private.registry.com/org/private-image:latest\" pushSecret: name: \"dockerhub\"",
"oc set build-secret --push bc/sample-build dockerhub",
"oc secrets link builder dockerhub",
"strategy: sourceStrategy: from: kind: \"DockerImage\" name: \"docker.io/user/private_repository\" pullSecret: name: \"dockerhub\"",
"oc set build-secret --pull bc/sample-build dockerhub",
"oc secrets link builder dockerhub",
"env: - name: FIELDREF_ENV valueFrom: fieldRef: fieldPath: metadata.name",
"apiVersion: build.openshift.io/v1 kind: BuildConfig metadata: name: secret-example-bc spec: strategy: sourceStrategy: env: - name: MYVAL valueFrom: secretKeyRef: key: myval name: mysecret"
] | https://docs.redhat.com/en/documentation/openshift_dedicated/4/html/builds_using_buildconfig/creating-build-inputs |
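As a brief illustration of the service serving certificate flow described in section 3.10 of the document above, the annotation can be applied with oc annotate; the service and secret names here are placeholders, not values from the documentation:
oc annotate service my-service service.beta.openshift.io/serving-cert-secret-name=my-service-tls
oc get secret my-service-tls
Once the cluster has generated the certificate, the second command should show a secret of type kubernetes.io/tls that contains the tls.crt and tls.key entries mentioned above.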
Chapter 11. GNOME Shell Extensions | Chapter 11. GNOME Shell Extensions This chapter introduces system-wide configuration of GNOME Shell Extensions. You will learn how to view the extensions, how to enable them, how to lock a list of enabled extensions or how to set up several extensions as mandatory for the users of the system. You will be using dconf when configuring GNOME Shell Extensions, setting the following two GSettings keys: org.gnome.shell.enabled-extensions org.gnome.shell.development-tools For more information on dconf and GSettings , see Chapter 9, Configuring Desktop with GSettings and dconf . 11.1. What Are GNOME Shell Extensions? GNOME Shell extensions allow for the customization of the default GNOME Shell interface and its parts, such as window management and application launching. Each GNOME Shell extension is identified by a unique identifier, the uuid. The uuid is also used for the name of the directory where an extension is installed. You can either install the extension per-user in ~/.local/share/gnome-shell/extensions/ uuid , or machine-wide in /usr/share/gnome-shell/extensions/ uuid . The uuid identifier is globally-unique. When choosing it, remember that the uuid must possess the following properties to prevent certain attacks: Your uuid must not contain Unicode characters. Your uuid must not contain the gnome.org ending as it must not appear to be affiliated with the GNOME Project. Your uuid must contain only alphanumerical characters, the period (.), the at symbol (@), and the underscore (_). Important Before deploying third-party GNOME Shell extensions on Red Hat Enterprise Linux, make sure to read the following document to learn about the Red Hat support policy for third-party software: How does Red Hat Global Support Services handle third-party software, drivers, and/or uncertified hardware/hypervisors? To view installed extensions, you can use Looking Glass , GNOME Shell's integrated debugger and inspector tool. Procedure 11.1. View installed extensions Press Alt + F2 . Type in lg and press Enter to open Looking Glass . On the top bar of Looking Glass , click Extensions to open the list of installed extensions. Figure 11.1. Viewing Installed extensions with Looking Glass | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/desktop_migration_and_administration_guide/GNOME-shell-extensions |
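To complement the GSettings keys listed in the GNOME Shell Extensions chapter above, the enabled-extensions key can also be read and written from a shell with gsettings; the extension uuid in the second command is a placeholder:
gsettings get org.gnome.shell enabled-extensions
gsettings set org.gnome.shell enabled-extensions "['[email protected]']"
Note that gsettings changes only the current user's setting; system-wide defaults and locks still require the dconf machinery referenced in the chapter.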
6.3. Removing Subscriptions | 6.3. Removing Subscriptions To remove a particular subscription, complete the following steps. Determine the serial number of the subscription you want to remove by listing information about already attached subscriptions: subscription-manager list --consumed The serial number is the number listed as serial. For instance, 744993814251016831 in the example below: Enter a command as follows to remove the selected subscription: subscription-manager remove --serial=serial_number Replace serial_number with the serial number you determined in the previous step. To remove all subscriptions attached to the system, run the following command: subscription-manager remove --all | [
"SKU: ES0113909 Contract: 01234567 Account: 1234567 Serial: 744993814251016831 Pool ID: 8a85f9894bba16dc014bccdd905a5e23 Active: False Quantity Used: 1 Service Level: SELF-SUPPORT Service Type: L1-L3 Status Details: Subscription Type: Standard Starts: 02/27/2015 Ends: 02/27/2016 System Type: Virtual"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/sect-subscription_and_support-registering_a_system_and_managing_subscriptions-removing_subscriptions |
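Putting the pieces of the section above together, removing the single subscription shown in the sample listing would use its Serial value; the serial number here is the sample value from that output, not a real one:
subscription-manager remove --serial=744993814251016831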
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.17/html/deploying_openshift_data_foundation_on_any_platform/making-open-source-more-inclusive |
Providing feedback on Red Hat build of Quarkus documentation | Providing feedback on Red Hat build of Quarkus documentation To report an error or to improve our documentation, log in to your Red Hat Jira account and submit an issue. If you do not have a Red Hat Jira account, then you will be prompted to create an account. Procedure Click the following link to create a ticket. Enter a brief description of the issue in the Summary. Provide a detailed description of the issue or enhancement in the Description. Include a URL to where the issue occurs in the documentation. Clicking Submit creates and routes the issue to the appropriate documentation team. | null | https://docs.redhat.com/en/documentation/red_hat_build_of_quarkus/3.15/html/configure_data_sources/proc_providing-feedback-on-red-hat-documentation_configure-data-sources
Chapter 5. Building functions | Chapter 5. Building functions To run a function, you first must build the function project. This happens automatically when using the kn func run command, but you can also build a function without running it. 5.1. Building a function Before you can run a function, you must build the function project. If you are using the kn func run command, the function is built automatically. However, you can use the kn func build command to build a function without running it, which can be useful for advanced users or debugging scenarios. The kn func build command creates an OCI container image that can be run locally on your computer or on an OpenShift Container Platform cluster. This command uses the function project name and the image registry name to construct a fully qualified image name for your function. 5.1.1. Image container types By default, kn func build creates a container image by using Red Hat Source-to-Image (S2I) technology. Example build command using Red Hat Source-to-Image (S2I) USD kn func build 5.1.2. Image registry types The OpenShift Container Registry is used by default as the image registry for storing function images. Example build command using OpenShift Container Registry USD kn func build Example output Building function image Function image has been built, image: registry.redhat.io/example/example-function:latest You can override using OpenShift Container Registry as the default image registry by using the --registry flag: Example build command overriding OpenShift Container Registry to use quay.io USD kn func build --registry quay.io/username Example output Building function image Function image has been built, image: quay.io/username/example-function:latest 5.1.3. Push flag You can add the --push flag to a kn func build command to automatically push the function image after it is successfully built: Example build command using OpenShift Container Registry USD kn func build --push 5.1.4. Help command You can use the help command to learn more about kn func build command options: Build help command USD kn func help build | [
"kn func build",
"kn func build",
"Building function image Function image has been built, image: registry.redhat.io/example/example-function:latest",
"kn func build --registry quay.io/username",
"Building function image Function image has been built, image: quay.io/username/example-function:latest",
"kn func build --push",
"kn func help build"
] | https://docs.redhat.com/en/documentation/red_hat_openshift_serverless/1.33/html/functions/serverless-functions-building |
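The flags documented in the chapter above can be combined. For example, building against a personal quay.io namespace and pushing the resulting image in a single step might look like the following, where the registry path is a placeholder:
kn func build --registry quay.io/username --push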
3.6. Moving a Process to a Control Group | 3.6. Moving a Process to a Control Group Move a process into a cgroup by running the cgclassify command: where: controllers is a comma-separated list of resource controllers, or /* to launch the process in the hierarchies associated with all available subsystems. Note that if there are multiple cgroups of the same name, the -g option moves the processes in each of those groups. path_to_cgroup is the path to the cgroup within the hierarchy; pidlist is a space-separated list of process identifiers (PIDs). If the -g option is not specified, cgclassify automatically searches /etc/cgrules.conf and uses the first applicable configuration line. According to this line, cgclassify determines the hierarchies and cgroups to move the process under. Note that for the move to be successful, the destination hierarchies have to exist. The subsystems specified in /etc/cgrules.conf also have to be properly configured for the corresponding hierarchy in /etc/cgconfig.conf. You can also add the --sticky option before the pid to keep any child processes in the same cgroup. If you do not set this option and the cgred service is running, child processes will be allocated to cgroups based on the settings found in /etc/cgrules.conf. The process itself, however, will remain in the cgroup in which you started it. It is also possible to use the cgred service (which starts the cgrulesengd service) to move tasks into cgroups according to parameters set in the /etc/cgrules.conf file. Use cgred only to manage manually attached controllers. Entries in the /etc/cgrules.conf file can take one of two forms: user subsystems control_group; user : command subsystems control_group. For example: This entry specifies that any processes that belong to the user named maria access the net_prio subsystem according to the parameters specified in the /usergroup/staff cgroup. To associate particular commands with particular cgroups, add the command parameter, as follows: The entry now specifies that when the user named maria uses the ftp command, the process is automatically moved to the /usergroup/staff/ftp cgroup in the hierarchy that contains the devices subsystem. Note, however, that the daemon moves the process to the cgroup only after the appropriate condition is fulfilled. Therefore, the ftp process can run for a short time in an incorrect group. Furthermore, if the process quickly spawns children while in the incorrect group, these children might not be moved. Entries in the /etc/cgrules.conf file can include the following extra notation: @ - when prefixed to user, indicates a group instead of an individual user. For example, @admins are all users in the admins group. \* - represents "all". For example, \* in the subsystem field represents all subsystems. % - represents an item the same as the item on the line above. For example: | [
"~]# cgclassify -g controllers : path_to_cgroup pidlist",
"maria net_prio /usergroup/staff",
"maria:ftp devices /usergroup/staff/ftp",
"@adminstaff net_prio /admingroup @labstaff % %"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/resource_management_guide/sec-moving_a_process_to_a_control_group |
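A concrete instance of the cgclassify syntax described in the section above, run as root and using a hypothetical PID of 1701; the second form adds --sticky before the pid so that child processes stay in the same cgroup:
cgclassify -g net_prio:/usergroup/staff 1701
cgclassify -g net_prio:/usergroup/staff --sticky 1701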
Console APIs | Console APIs OpenShift Container Platform 4.14 Reference guide for console APIs Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/console_apis/index |
Chapter 5. Installing a cluster on OpenStack on your own infrastructure | Chapter 5. Installing a cluster on OpenStack on your own infrastructure In OpenShift Container Platform version 4.14, you can install a cluster on Red Hat OpenStack Platform (RHOSP) that runs on user-provisioned infrastructure. Using your own infrastructure allows you to integrate your cluster with existing infrastructure and modifications. The process requires more labor on your part than installer-provisioned installations, because you must create all RHOSP resources, like Nova servers, Neutron ports, and security groups. However, Red Hat provides Ansible playbooks to help you in the deployment process. 5.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You verified that OpenShift Container Platform 4.14 is compatible with your RHOSP version by using the Supported platforms for OpenShift clusters section. You can also compare platform support across different versions by viewing the OpenShift Container Platform on RHOSP support matrix . You have an RHOSP account where you want to install OpenShift Container Platform. You understand performance and scalability practices for cluster scaling, control plane sizing, and etcd. For more information, see Recommended practices for scaling the cluster . On the machine from which you run the installation program, you have: A single directory in which you can keep the files you create during the installation process Python 3 5.2. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.14, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 5.3. Resource guidelines for installing OpenShift Container Platform on RHOSP To support an OpenShift Container Platform installation, your Red Hat OpenStack Platform (RHOSP) quota must meet the following requirements: Table 5.1. Recommended resources for a default OpenShift Container Platform cluster on RHOSP Resource Value Floating IP addresses 3 Ports 15 Routers 1 Subnets 1 RAM 88 GB vCPUs 22 Volume storage 275 GB Instances 7 Security groups 3 Security group rules 60 Server groups 2 - plus 1 for each additional availability zone in each machine pool A cluster might function with fewer than recommended resources, but its performance is not guaranteed. Important If RHOSP object storage (Swift) is available and operated by a user account with the swiftoperator role, it is used as the default backend for the OpenShift Container Platform image registry. 
In this case, the volume storage requirement is 175 GB. Swift space requirements vary depending on the size of the image registry. Note By default, your security group and security group rule quotas might be low. If you encounter problems, run openstack quota set --secgroups 3 --secgroup-rules 60 <project> as an administrator to increase them. An OpenShift Container Platform deployment comprises control plane machines, compute machines, and a bootstrap machine. 5.3.1. Control plane machines By default, the OpenShift Container Platform installation process creates three control plane machines. Each machine requires: An instance from the RHOSP quota A port from the RHOSP quota A flavor with at least 16 GB memory and 4 vCPUs At least 100 GB storage space from the RHOSP quota 5.3.2. Compute machines By default, the OpenShift Container Platform installation process creates three compute machines. Each machine requires: An instance from the RHOSP quota A port from the RHOSP quota A flavor with at least 8 GB memory and 2 vCPUs At least 100 GB storage space from the RHOSP quota Tip Compute machines host the applications that you run on OpenShift Container Platform; aim to run as many as you can. 5.3.3. Bootstrap machine During installation, a bootstrap machine is temporarily provisioned to stand up the control plane. After the production control plane is ready, the bootstrap machine is deprovisioned. The bootstrap machine requires: An instance from the RHOSP quota A port from the RHOSP quota A flavor with at least 16 GB memory and 4 vCPUs At least 100 GB storage space from the RHOSP quota 5.4. Downloading playbook dependencies The Ansible playbooks that simplify the installation process on user-provisioned infrastructure require several Python modules. On the machine where you will run the installer, add the modules' repositories and then download them. Note These instructions assume that you are using Red Hat Enterprise Linux (RHEL) 8. Prerequisites Python 3 is installed on your machine. Procedure On a command line, add the repositories: Register with Red Hat Subscription Manager: USD sudo subscription-manager register # If not done already Pull the latest subscription data: USD sudo subscription-manager attach --pool=USDYOUR_POOLID # If not done already Disable the current repositories: USD sudo subscription-manager repos --disable=* # If not done already Add the required repositories: USD sudo subscription-manager repos \ --enable=rhel-8-for-x86_64-baseos-rpms \ --enable=openstack-16-tools-for-rhel-8-x86_64-rpms \ --enable=ansible-2.9-for-rhel-8-x86_64-rpms \ --enable=rhel-8-for-x86_64-appstream-rpms Install the modules: USD sudo yum install python3-openstackclient ansible python3-openstacksdk python3-netaddr ansible-collections-openstack Ensure that the python command points to python3 : USD sudo alternatives --set python /usr/bin/python3 5.5. Downloading the installation playbooks Download Ansible playbooks that you can use to install OpenShift Container Platform on your own Red Hat OpenStack Platform (RHOSP) infrastructure. Prerequisites The curl command-line tool is available on your machine. 
Procedure To download the playbooks to your working directory, run the following script from a command line: USD xargs -n 1 curl -O <<< ' https://raw.githubusercontent.com/openshift/installer/release-4.14/upi/openstack/bootstrap.yaml https://raw.githubusercontent.com/openshift/installer/release-4.14/upi/openstack/common.yaml https://raw.githubusercontent.com/openshift/installer/release-4.14/upi/openstack/compute-nodes.yaml https://raw.githubusercontent.com/openshift/installer/release-4.14/upi/openstack/control-plane.yaml https://raw.githubusercontent.com/openshift/installer/release-4.14/upi/openstack/inventory.yaml https://raw.githubusercontent.com/openshift/installer/release-4.14/upi/openstack/network.yaml https://raw.githubusercontent.com/openshift/installer/release-4.14/upi/openstack/security-groups.yaml https://raw.githubusercontent.com/openshift/installer/release-4.14/upi/openstack/down-bootstrap.yaml https://raw.githubusercontent.com/openshift/installer/release-4.14/upi/openstack/down-compute-nodes.yaml https://raw.githubusercontent.com/openshift/installer/release-4.14/upi/openstack/down-control-plane.yaml https://raw.githubusercontent.com/openshift/installer/release-4.14/upi/openstack/down-load-balancers.yaml https://raw.githubusercontent.com/openshift/installer/release-4.14/upi/openstack/down-network.yaml https://raw.githubusercontent.com/openshift/installer/release-4.14/upi/openstack/down-security-groups.yaml https://raw.githubusercontent.com/openshift/installer/release-4.14/upi/openstack/down-containers.yaml' The playbooks are downloaded to your machine. Important During the installation process, you can modify the playbooks to configure your deployment. Retain all playbooks for the life of your cluster. You must have the playbooks to remove your OpenShift Container Platform cluster from RHOSP. Important You must match any edits you make in the bootstrap.yaml , compute-nodes.yaml , control-plane.yaml , network.yaml , and security-groups.yaml files to the corresponding playbooks that are prefixed with down- . For example, edits to the bootstrap.yaml file must be reflected in the down-bootstrap.yaml file, too. If you do not edit both files, the supported cluster removal process will fail. 5.6. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a computer that runs Linux or macOS, with at least 1.2 GB of local disk space. Procedure Go to the Cluster Type page on the Red Hat Hybrid Cloud Console. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select your infrastructure provider from the Run it yourself section of the page. Select your host operating system and architecture from the dropdown menus under OpenShift Installer and click Download Installer . Place the downloaded file in the directory where you want to store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both of the files are required to delete the cluster. Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. 
Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. Tip Alternatively, you can retrieve the installation program from the Red Hat Customer Portal , where you can specify a version of the installation program to download. However, you must have an active subscription to access this page. 5.7. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in the your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. 
Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 5.8. Creating the Red Hat Enterprise Linux CoreOS (RHCOS) image The OpenShift Container Platform installation program requires that a Red Hat Enterprise Linux CoreOS (RHCOS) image be present in the Red Hat OpenStack Platform (RHOSP) cluster. Retrieve the latest RHCOS image, then upload it using the RHOSP CLI. Prerequisites The RHOSP CLI is installed. Procedure Log in to the Red Hat Customer Portal's Product Downloads page . Under Version , select the most recent release of OpenShift Container Platform 4.14 for Red Hat Enterprise Linux (RHEL) 8. Important The RHCOS images might not change with every release of OpenShift Container Platform. You must download images with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Use the image versions that match your OpenShift Container Platform version if they are available. Download the Red Hat Enterprise Linux CoreOS (RHCOS) - OpenStack Image (QCOW) . Decompress the image. Note You must decompress the RHOSP image before the cluster can use it. The name of the downloaded file might not contain a compression extension, like .gz or .tgz . To find out if or how the file is compressed, in a command line, enter: USD file <name_of_downloaded_file> From the image that you downloaded, create an image that is named rhcos in your cluster by using the RHOSP CLI: USD openstack image create --container-format=bare --disk-format=qcow2 --file rhcos-USD{RHCOS_VERSION}-openstack.qcow2 rhcos Important Depending on your RHOSP environment, you might be able to upload the image in either .raw or .qcow2 formats . If you use Ceph, you must use the .raw format. Warning If the installation program finds multiple images with the same name, it chooses one of them at random. To avoid this behavior, create unique names for resources in RHOSP. After you upload the image to RHOSP, it is usable in the installation process. 5.9. Verifying external network access The OpenShift Container Platform installation process requires external network access. You must provide an external network value to it, or deployment fails. Before you begin the process, verify that a network with the external router type exists in Red Hat OpenStack Platform (RHOSP). Prerequisites Configure OpenStack's networking service to have DHCP agents forward instances' DNS queries Procedure Using the RHOSP CLI, verify the name and ID of the 'External' network: USD openstack network list --long -c ID -c Name -c "Router Type" Example output +--------------------------------------+----------------+-------------+ | ID | Name | Router Type | +--------------------------------------+----------------+-------------+ | 148a8023-62a7-4672-b018-003462f8d7dc | public_network | External | +--------------------------------------+----------------+-------------+ A network with an external router type appears in the network list. If at least one does not, see Creating a default floating IP network and Creating a default provider network . Note If the Neutron trunk service plugin is enabled, a trunk port is created by default. For more information, see Neutron trunk port . 5.10. 
Enabling access to the environment At deployment, all OpenShift Container Platform machines are created in a Red Hat OpenStack Platform (RHOSP)-tenant network. Therefore, they are not accessible directly in most RHOSP deployments. You can configure OpenShift Container Platform API and application access by using floating IP addresses (FIPs) during installation. You can also complete an installation without configuring FIPs, but the installer will not configure a way to reach the API or applications externally. 5.10.1. Enabling access with floating IP addresses Create floating IP (FIP) addresses for external access to the OpenShift Container Platform API, cluster applications, and the bootstrap process. Procedure Using the Red Hat OpenStack Platform (RHOSP) CLI, create the API FIP: USD openstack floating ip create --description "API <cluster_name>.<base_domain>" <external_network> Using the Red Hat OpenStack Platform (RHOSP) CLI, create the apps, or Ingress, FIP: USD openstack floating ip create --description "Ingress <cluster_name>.<base_domain>" <external_network> By using the Red Hat OpenStack Platform (RHOSP) CLI, create the bootstrap FIP: USD openstack floating ip create --description "bootstrap machine" <external_network> Add records that follow these patterns to your DNS server for the API and Ingress FIPs: api.<cluster_name>.<base_domain>. IN A <API_FIP> *.apps.<cluster_name>.<base_domain>. IN A <apps_FIP> Note If you do not control the DNS server, you can access the cluster by adding the cluster domain names such as the following to your /etc/hosts file: <api_floating_ip> api.<cluster_name>.<base_domain> <application_floating_ip> grafana-openshift-monitoring.apps.<cluster_name>.<base_domain> <application_floating_ip> prometheus-k8s-openshift-monitoring.apps.<cluster_name>.<base_domain> <application_floating_ip> oauth-openshift.apps.<cluster_name>.<base_domain> <application_floating_ip> console-openshift-console.apps.<cluster_name>.<base_domain> application_floating_ip integrated-oauth-server-openshift-authentication.apps.<cluster_name>.<base_domain> The cluster domain names in the /etc/hosts file grant access to the web console and the monitoring interface of your cluster locally. You can also use the kubectl or oc . You can access the user applications by using the additional entries pointing to the <application_floating_ip>. This action makes the API and applications accessible to only you, which is not suitable for production deployment, but does allow installation for development and testing. Add the FIPs to the inventory.yaml file as the values of the following variables: os_api_fip os_bootstrap_fip os_ingress_fip If you use these values, you must also enter an external network as the value of the os_external_network variable in the inventory.yaml file. Tip You can make OpenShift Container Platform resources available outside of the cluster by assigning a floating IP address and updating your firewall configuration. 5.10.2. Completing installation without floating IP addresses You can install OpenShift Container Platform on Red Hat OpenStack Platform (RHOSP) without providing floating IP addresses. In the inventory.yaml file, do not define the following variables: os_api_fip os_bootstrap_fip os_ingress_fip If you cannot provide an external network, you can also leave os_external_network blank. If you do not provide a value for os_external_network , a router is not created for you, and, without additional action, the installer will fail to retrieve an image from Glance. 
Later in the installation process, when you create network resources, you must configure external connectivity on your own. If you run the installer with the wait-for command from a system that cannot reach the cluster API due to a lack of floating IP addresses or name resolution, installation fails. To prevent installation failure in these cases, you can use a proxy network or run the installer from a system that is on the same network as your machines. Note You can enable name resolution by creating DNS records for the API and Ingress ports. For example: api.<cluster_name>.<base_domain>. IN A <api_port_IP> *.apps.<cluster_name>.<base_domain>. IN A <ingress_port_IP> If you do not control the DNS server, you can add the record to your /etc/hosts file. This action makes the API accessible to only you, which is not suitable for production deployment but does allow installation for development and testing. 5.11. Defining parameters for the installation program The OpenShift Container Platform installation program relies on a file that is called clouds.yaml . The file describes Red Hat OpenStack Platform (RHOSP) configuration parameters, including the project name, log in information, and authorization service URLs. Procedure Create the clouds.yaml file: If your RHOSP distribution includes the Horizon web UI, generate a clouds.yaml file in it. Important Remember to add a password to the auth field. You can also keep secrets in a separate file from clouds.yaml . If your RHOSP distribution does not include the Horizon web UI, or you do not want to use Horizon, create the file yourself. For detailed information about clouds.yaml , see Config files in the RHOSP documentation. clouds: shiftstack: auth: auth_url: http://10.10.14.42:5000/v3 project_name: shiftstack username: <username> password: <password> user_domain_name: Default project_domain_name: Default dev-env: region_name: RegionOne auth: username: <username> password: <password> project_name: 'devonly' auth_url: 'https://10.10.14.22:5001/v2.0' If your RHOSP installation uses self-signed certificate authority (CA) certificates for endpoint authentication: Copy the certificate authority file to your machine. Add the cacerts key to the clouds.yaml file. The value must be an absolute, non-root-accessible path to the CA certificate: clouds: shiftstack: ... cacert: "/etc/pki/ca-trust/source/anchors/ca.crt.pem" Tip After you run the installer with a custom CA certificate, you can update the certificate by editing the value of the ca-cert.pem key in the cloud-provider-config keymap. On a command line, run: USD oc edit configmap -n openshift-config cloud-provider-config Place the clouds.yaml file in one of the following locations: The value of the OS_CLIENT_CONFIG_FILE environment variable The current directory A Unix-specific user configuration directory, for example ~/.config/openstack/clouds.yaml A Unix-specific site configuration directory, for example /etc/openstack/clouds.yaml The installation program searches for clouds.yaml in that order. 5.12. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on Red Hat OpenStack Platform (RHOSP). Prerequisites You have the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Create the install-config.yaml file. 
Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. When specifying the directory: Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory. Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Note Always delete the ~/.powervs directory to avoid reusing a stale configuration. Run the following command: USD rm -rf ~/.powervs At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select openstack as the platform to target. Specify the Red Hat OpenStack Platform (RHOSP) external network name to use for installing the cluster. Specify the floating IP address to use for external access to the OpenShift API. Specify a RHOSP flavor with at least 16 GB RAM to use for control plane nodes and 8 GB RAM for compute nodes. Select the base domain to deploy the cluster to. All DNS records will be sub-domains of this base and will also include the cluster name. Enter a name for your cluster. The name must be 14 or fewer characters long. Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section. Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. You now have the file install-config.yaml in the directory that you specified. Additional resources Installation configuration parameters for OpenStack 5.12.1. Custom subnets in RHOSP deployments Optionally, you can deploy a cluster on a Red Hat OpenStack Platform (RHOSP) subnet of your choice. The subnet's GUID is passed as the value of platform.openstack.machinesSubnet in the install-config.yaml file. This subnet is used as the cluster's primary subnet. By default, nodes and ports are created on it. You can create nodes and ports on a different RHOSP subnet by setting the value of the platform.openstack.machinesSubnet property to the subnet's UUID. Before you run the OpenShift Container Platform installer with a custom subnet, verify that your configuration meets the following requirements: The subnet that is used by platform.openstack.machinesSubnet has DHCP enabled. The CIDR of platform.openstack.machinesSubnet matches the CIDR of networking.machineNetwork . The installation program user has permission to create ports on this network, including ports with fixed IP addresses. 
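Before moving on to the limitations below, one way to confirm that a candidate subnet satisfies the DHCP and CIDR requirements just listed is to query it with the RHOSP CLI; the subnet identifier is a placeholder:
openstack subnet show <subnet_name_or_uuid> -c id -c cidr -c enable_dhcp
The cidr value shown should match networking.machineNetwork in install-config.yaml, and enable_dhcp should be True.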
Clusters that use custom subnets have the following limitations: If you plan to install a cluster that uses floating IP addresses, the platform.openstack.machinesSubnet subnet must be attached to a router that is connected to the externalNetwork network. If the platform.openstack.machinesSubnet value is set in the install-config.yaml file, the installation program does not create a private network or subnet for your RHOSP machines. You cannot use the platform.openstack.externalDNS property at the same time as a custom subnet. To add DNS to a cluster that uses a custom subnet, configure DNS on the RHOSP network. Note By default, the API VIP takes x.x.x.5 and the Ingress VIP takes x.x.x.7 from your network's CIDR block. To override these default values, set values for platform.openstack.apiVIPs and platform.openstack.ingressVIPs that are outside of the DHCP allocation pool. Important The CIDR ranges for networks are not adjustable after cluster installation. Red Hat does not provide direct guidance on determining the range during cluster installation because it requires careful consideration of the number of created pods per namespace. 5.12.2. Sample customized install-config.yaml file for RHOSP This sample install-config.yaml demonstrates all of the possible Red Hat OpenStack Platform (RHOSP) customization options. Important This sample file is provided for reference only. You must obtain your install-config.yaml file by using the installation program. apiVersion: v1 baseDomain: example.com controlPlane: name: master platform: {} replicas: 3 compute: - name: worker platform: openstack: type: ml.large replicas: 3 metadata: name: example networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 serviceNetwork: - 172.30.0.0/16 networkType: OVNKubernetes platform: openstack: cloud: mycloud externalNetwork: external computeFlavor: m1.xlarge apiFloatingIP: 128.0.0.1 fips: false pullSecret: '{"auths": ...}' sshKey: ssh-ed25519 AAAA... 5.12.3. Setting a custom subnet for machines The IP range that the installation program uses by default might not match the Neutron subnet that you create when you install OpenShift Container Platform. If necessary, update the CIDR value for new machines by editing the installation configuration file. Prerequisites You have the install-config.yaml file that was generated by the OpenShift Container Platform installation program. Procedure On a command line, browse to the directory that contains install-config.yaml . From that directory, either run a script to edit the install-config.yaml file or update the file manually: To set the value by using a script, run: USD python -c ' import yaml; path = "install-config.yaml"; data = yaml.safe_load(open(path)); data["networking"]["machineNetwork"] = [{"cidr": "192.168.0.0/18"}]; 1 open(path, "w").write(yaml.dump(data, default_flow_style=False))' 1 Insert a value that matches your intended Neutron subnet, e.g. 192.0.2.0/24 . To set the value manually, open the file and set the value of networking.machineCIDR to something that matches your intended Neutron subnet. 5.12.4. Emptying compute machine pools To proceed with an installation that uses your own infrastructure, set the number of compute machines in the installation configuration file to zero. Later, you create these machines manually. Prerequisites You have the install-config.yaml file that was generated by the OpenShift Container Platform installation program. 
Procedure On a command line, browse to the directory that contains install-config.yaml . From that directory, either run a script to edit the install-config.yaml file or update the file manually: To set the value by using a script, run: USD python -c ' import yaml; path = "install-config.yaml"; data = yaml.safe_load(open(path)); data["compute"][0]["replicas"] = 0; open(path, "w").write(yaml.dump(data, default_flow_style=False))' To set the value manually, open the file and set the value of compute.<first entry>.replicas to 0 . 5.12.5. Cluster deployment on RHOSP provider networks You can deploy your OpenShift Container Platform clusters on Red Hat OpenStack Platform (RHOSP) with a primary network interface on a provider network. Provider networks are commonly used to give projects direct access to a public network that can be used to reach the internet. You can also share provider networks among projects as part of the network creation process. RHOSP provider networks map directly to an existing physical network in the data center. A RHOSP administrator must create them. In the following example, OpenShift Container Platform workloads are connected to a data center by using a provider network: OpenShift Container Platform clusters that are installed on provider networks do not require tenant networks or floating IP addresses. The installer does not create these resources during installation. Example provider network types include flat (untagged) and VLAN (802.1Q tagged). Note A cluster can support as many provider network connections as the network type allows. For example, VLAN networks typically support up to 4096 connections. You can learn more about provider and tenant networks in the RHOSP documentation . 5.12.5.1. RHOSP provider network requirements for cluster installation Before you install an OpenShift Container Platform cluster, your Red Hat OpenStack Platform (RHOSP) deployment and provider network must meet a number of conditions: The RHOSP networking service (Neutron) is enabled and accessible through the RHOSP networking API. The RHOSP networking service has the port security and allowed address pairs extensions enabled . The provider network can be shared with other tenants. Tip Use the openstack network create command with the --share flag to create a network that can be shared. The RHOSP project that you use to install the cluster must own the provider network, as well as an appropriate subnet. Tip To create a network for a project that is named "openshift," enter the following command USD openstack network create --project openshift To create a subnet for a project that is named "openshift," enter the following command USD openstack subnet create --project openshift To learn more about creating networks on RHOSP, read the provider networks documentation . If the cluster is owned by the admin user, you must run the installer as that user to create ports on the network. Important Provider networks must be owned by the RHOSP project that is used to create the cluster. If they are not, the RHOSP Compute service (Nova) cannot request a port from that network. Verify that the provider network can reach the RHOSP metadata service IP address, which is 169.254.169.254 by default. Depending on your RHOSP SDN and networking service configuration, you might need to provide the route when you create the subnet. For example: USD openstack subnet create --dhcp --host-route destination=169.254.169.254/32,gateway=192.0.2.2 ... 
Optional: To secure the network, create role-based access control (RBAC) rules that limit network access to a single project. 5.12.5.2. Deploying a cluster that has a primary interface on a provider network You can deploy an OpenShift Container Platform cluster that has its primary network interface on an Red Hat OpenStack Platform (RHOSP) provider network. Prerequisites Your Red Hat OpenStack Platform (RHOSP) deployment is configured as described by "RHOSP provider network requirements for cluster installation". Procedure In a text editor, open the install-config.yaml file. Set the value of the platform.openstack.apiVIPs property to the IP address for the API VIP. Set the value of the platform.openstack.ingressVIPs property to the IP address for the Ingress VIP. Set the value of the platform.openstack.machinesSubnet property to the UUID of the provider network subnet. Set the value of the networking.machineNetwork.cidr property to the CIDR block of the provider network subnet. Important The platform.openstack.apiVIPs and platform.openstack.ingressVIPs properties must both be unassigned IP addresses from the networking.machineNetwork.cidr block. Section of an installation configuration file for a cluster that relies on a RHOSP provider network ... platform: openstack: apiVIPs: 1 - 192.0.2.13 ingressVIPs: 2 - 192.0.2.23 machinesSubnet: fa806b2f-ac49-4bce-b9db-124bc64209bf # ... networking: machineNetwork: - cidr: 192.0.2.0/24 1 2 In OpenShift Container Platform 4.12 and later, the apiVIP and ingressVIP configuration settings are deprecated. Instead, use a list format to enter values in the apiVIPs and ingressVIPs configuration settings. Warning You cannot set the platform.openstack.externalNetwork or platform.openstack.externalDNS parameters while using a provider network for the primary network interface. When you deploy the cluster, the installer uses the install-config.yaml file to deploy the cluster on the provider network. Tip You can add additional networks, including provider networks, to the platform.openstack.additionalNetworkIDs list. After you deploy your cluster, you can attach pods to additional networks. For more information, see Understanding multiple networks . 5.13. Creating the Kubernetes manifest and Ignition config files Because you must modify some cluster definition files and manually start the cluster machines, you must generate the Kubernetes manifest and Ignition config files that the cluster needs to configure the machines. The installation configuration file transforms into the Kubernetes manifests. The manifests wrap into the Ignition configuration files, which are later used to configure the cluster machines. Important The Ignition config files that the OpenShift Container Platform installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. 
By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Prerequisites You obtained the OpenShift Container Platform installation program. You created the install-config.yaml installation configuration file. Procedure Change to the directory that contains the OpenShift Container Platform installation program and generate the Kubernetes manifests for the cluster: USD ./openshift-install create manifests --dir <installation_directory> 1 1 For <installation_directory> , specify the installation directory that contains the install-config.yaml file you created. Remove the Kubernetes manifest files that define the control plane machines, compute machine sets, and control plane machine sets: USD rm -f openshift/99_openshift-cluster-api_master-machines-*.yaml openshift/99_openshift-cluster-api_worker-machineset-*.yaml openshift/99_openshift-machine-api_master-control-plane-machine-set.yaml Because you create and manage these resources yourself, you do not have to initialize them. You can preserve the compute machine set files to create compute machines by using the machine API, but you must update references to them to match your environment. Check that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml Kubernetes manifest file is set to false . This setting prevents pods from being scheduled on the control plane machines: Open the <installation_directory>/manifests/cluster-scheduler-02-config.yml file. Locate the mastersSchedulable parameter and ensure that it is set to false . Save and exit the file. To create the Ignition configuration files, run the following command from the directory that contains the installation program: USD ./openshift-install create ignition-configs --dir <installation_directory> 1 1 For <installation_directory> , specify the same installation directory. Ignition config files are created for the bootstrap, control plane, and compute nodes in the installation directory. The kubeadmin-password and kubeconfig files are created in the ./<installation_directory>/auth directory: Export the metadata file's infraID key as an environment variable: USD export INFRA_ID=USD(jq -r .infraID metadata.json) Tip Extract the infraID key from metadata.json and use it as a prefix for all of the RHOSP resources that you create. By doing so, you avoid name conflicts when making multiple deployments in the same project. 5.14. Preparing the bootstrap Ignition files The OpenShift Container Platform installation process relies on bootstrap machines that are created from a bootstrap Ignition configuration file. Edit the file and upload it. Then, create a secondary bootstrap Ignition configuration file that Red Hat OpenStack Platform (RHOSP) uses to download the primary file. Prerequisites You have the bootstrap Ignition file that the installer program generates, bootstrap.ign . The infrastructure ID from the installer's metadata file is set as an environment variable ( USDINFRA_ID ). If the variable is not set, see Creating the Kubernetes manifest and Ignition config files . You have an HTTP(S)-accessible way to store the bootstrap Ignition file. The documented procedure uses the RHOSP image service (Glance), but you can also use the RHOSP storage service (Swift), Amazon S3, an internal HTTP server, or an ad hoc Nova server. Procedure Run the following Python script. 
The script modifies the bootstrap Ignition file to set the hostname and, if available, CA certificate file when it runs: import base64 import json import os with open('bootstrap.ign', 'r') as f: ignition = json.load(f) files = ignition['storage'].get('files', []) infra_id = os.environ.get('INFRA_ID', 'openshift').encode() hostname_b64 = base64.standard_b64encode(infra_id + b'-bootstrap\n').decode().strip() files.append( { 'path': '/etc/hostname', 'mode': 420, 'contents': { 'source': 'data:text/plain;charset=utf-8;base64,' + hostname_b64 } }) ca_cert_path = os.environ.get('OS_CACERT', '') if ca_cert_path: with open(ca_cert_path, 'r') as f: ca_cert = f.read().encode() ca_cert_b64 = base64.standard_b64encode(ca_cert).decode().strip() files.append( { 'path': '/opt/openshift/tls/cloud-ca-cert.pem', 'mode': 420, 'contents': { 'source': 'data:text/plain;charset=utf-8;base64,' + ca_cert_b64 } }) ignition['storage']['files'] = files; with open('bootstrap.ign', 'w') as f: json.dump(ignition, f) Using the RHOSP CLI, create an image that uses the bootstrap Ignition file: USD openstack image create --disk-format=raw --container-format=bare --file bootstrap.ign <image_name> Get the image's details: USD openstack image show <image_name> Make a note of the file value; it follows the pattern v2/images/<image_ID>/file . Note Verify that the image you created is active. Retrieve the image service's public address: USD openstack catalog show image Combine the public address with the image file value and save the result as the storage location. The location follows the pattern <image_service_public_URL>/v2/images/<image_ID>/file . Generate an auth token and save the token ID: USD openstack token issue -c id -f value Insert the following content into a file called USDINFRA_ID-bootstrap-ignition.json and edit the placeholders to match your own values: { "ignition": { "config": { "merge": [{ "source": "<storage_url>", 1 "httpHeaders": [{ "name": "X-Auth-Token", 2 "value": "<token_ID>" 3 }] }] }, "security": { "tls": { "certificateAuthorities": [{ "source": "data:text/plain;charset=utf-8;base64,<base64_encoded_certificate>" 4 }] } }, "version": "3.2.0" } } 1 Replace the value of ignition.config.merge.source with the bootstrap Ignition file storage URL. 2 Set name in httpHeaders to "X-Auth-Token" . 3 Set value in httpHeaders to your token's ID. 4 If the bootstrap Ignition file server uses a self-signed certificate, include the base64-encoded certificate. Save the secondary Ignition config file. The bootstrap Ignition data will be passed to RHOSP during installation. Warning The bootstrap Ignition file contains sensitive information, like clouds.yaml credentials. Ensure that you store it in a secure place, and delete it after you complete the installation process. 5.15. Creating control plane Ignition config files on RHOSP Installing OpenShift Container Platform on Red Hat OpenStack Platform (RHOSP) on your own infrastructure requires control plane Ignition config files. You must create multiple config files. Note As with the bootstrap Ignition configuration, you must explicitly define a hostname for each control plane machine. Prerequisites The infrastructure ID from the installation program's metadata file is set as an environment variable ( USDINFRA_ID ). If the variable is not set, see "Creating the Kubernetes manifest and Ignition config files". 
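If you need to set the variable again in a new shell, a minimal reminder is to run the same command from the installation directory that contains the metadata.json file: $ export INFRA_ID=$(jq -r .infraID metadata.json) This assumes that the jq utility is installed on your machine.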
Procedure On a command line, run the following Python script: USD for index in USD(seq 0 2); do MASTER_HOSTNAME="USDINFRA_ID-master-USDindex\n" python -c "import base64, json, sys; ignition = json.load(sys.stdin); storage = ignition.get('storage', {}); files = storage.get('files', []); files.append({'path': '/etc/hostname', 'mode': 420, 'contents': {'source': 'data:text/plain;charset=utf-8;base64,' + base64.standard_b64encode(b'USDMASTER_HOSTNAME').decode().strip(), 'verification': {}}, 'filesystem': 'root'}); storage['files'] = files; ignition['storage'] = storage json.dump(ignition, sys.stdout)" <master.ign >"USDINFRA_ID-master-USDindex-ignition.json" done You now have three control plane Ignition files: <INFRA_ID>-master-0-ignition.json , <INFRA_ID>-master-1-ignition.json , and <INFRA_ID>-master-2-ignition.json . 5.16. Creating network resources on RHOSP Create the network resources that an OpenShift Container Platform on Red Hat OpenStack Platform (RHOSP) installation on your own infrastructure requires. To save time, run supplied Ansible playbooks that generate security groups, networks, subnets, routers, and ports. Prerequisites Python 3 is installed on your machine. You downloaded the modules in "Downloading playbook dependencies". You downloaded the playbooks in "Downloading the installation playbooks". Procedure Optional: Add an external network value to the inventory.yaml playbook: Example external network value in the inventory.yaml Ansible playbook ... # The public network providing connectivity to the cluster. If not # provided, the cluster external connectivity must be provided in another # way. # Required for os_api_fip, os_ingress_fip, os_bootstrap_fip. os_external_network: 'external' ... Important If you did not provide a value for os_external_network in the inventory.yaml file, you must ensure that VMs can access Glance and an external connection yourself. Optional: Add external network and floating IP (FIP) address values to the inventory.yaml playbook: Example FIP values in the inventory.yaml Ansible playbook ... # OpenShift API floating IP address. If this value is non-empty, the # corresponding floating IP will be attached to the Control Plane to # serve the OpenShift API. os_api_fip: '203.0.113.23' # OpenShift Ingress floating IP address. If this value is non-empty, the # corresponding floating IP will be attached to the worker nodes to serve # the applications. os_ingress_fip: '203.0.113.19' # If this value is non-empty, the corresponding floating IP will be # attached to the bootstrap machine. This is needed for collecting logs # in case of install failure. os_bootstrap_fip: '203.0.113.20' Important If you do not define values for os_api_fip and os_ingress_fip , you must perform postinstallation network configuration. If you do not define a value for os_bootstrap_fip , the installer cannot download debugging information from failed installations. See "Enabling access to the environment" for more information. 
On a command line, create security groups by running the security-groups.yaml playbook: USD ansible-playbook -i inventory.yaml security-groups.yaml On a command line, create a network, subnet, and router by running the network.yaml playbook: USD ansible-playbook -i inventory.yaml network.yaml Optional: If you want to control the default resolvers that Nova servers use, run the RHOSP CLI command: USD openstack subnet set --dns-nameserver <server_1> --dns-nameserver <server_2> "USDINFRA_ID-nodes" Optionally, you can use the inventory.yaml file that you created to customize your installation. For example, you can deploy a cluster that uses bare metal machines. 5.16.1. Deploying a cluster with bare metal machines If you want your cluster to use bare metal machines, modify the inventory.yaml file. Your cluster can have both control plane and compute machines running on bare metal, or just compute machines. Bare-metal compute machines are not supported on clusters that use Kuryr. Note Be sure that your install-config.yaml file reflects whether the RHOSP network that you use for bare metal workers supports floating IP addresses or not. Prerequisites The RHOSP Bare Metal service (Ironic) is enabled and accessible via the RHOSP Compute API. Bare metal is available as a RHOSP flavor . If your cluster runs on an RHOSP version that is more than 16.1.6 and less than 16.2.4, bare metal workers do not function due to a known issue that causes the metadata service to be unavailable for services on OpenShift Container Platform nodes. The RHOSP network supports both VM and bare metal server attachment. If you want to deploy the machines on a pre-existing network, a RHOSP subnet is provisioned. If you want to deploy the machines on an installer-provisioned network, the RHOSP Bare Metal service (Ironic) is able to listen for and interact with Preboot eXecution Environment (PXE) boot machines that run on tenant networks. You created an inventory.yaml file as part of the OpenShift Container Platform installation process. Procedure In the inventory.yaml file, edit the flavors for machines: If you want to use bare-metal control plane machines, change the value of os_flavor_master to a bare metal flavor. Change the value of os_flavor_worker to a bare metal flavor. An example bare metal inventory.yaml file all: hosts: localhost: ansible_connection: local ansible_python_interpreter: "{{ansible_playbook_python}}" # User-provided values os_subnet_range: '10.0.0.0/16' os_flavor_master: 'my-bare-metal-flavor' 1 os_flavor_worker: 'my-bare-metal-flavor' 2 os_image_rhcos: 'rhcos' os_external_network: 'external' ... 1 If you want to have bare-metal control plane machines, change this value to a bare metal flavor. 2 Change this value to a bare metal flavor to use for compute machines. Use the updated inventory.yaml file to complete the installation process. Machines that are created during deployment use the flavor that you added to the file. Note The installer may time out while waiting for bare metal machines to boot. If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug 5.17. Creating the bootstrap machine on RHOSP Create a bootstrap machine and give it the network access it needs to run on Red Hat OpenStack Platform (RHOSP). Red Hat provides an Ansible playbook that you run to simplify this process. Prerequisites You downloaded the modules in "Downloading playbook dependencies". 
You downloaded the playbooks in "Downloading the installation playbooks". The inventory.yaml , common.yaml , and bootstrap.yaml Ansible playbooks are in a common directory. The metadata.json file that the installation program created is in the same directory as the Ansible playbooks. Procedure On a command line, change the working directory to the location of the playbooks. On a command line, run the bootstrap.yaml playbook: USD ansible-playbook -i inventory.yaml bootstrap.yaml After the bootstrap server is active, view the logs to verify that the Ignition files were received: USD openstack console log show "USDINFRA_ID-bootstrap" 5.18. Creating the control plane machines on RHOSP Create three control plane machines by using the Ignition config files that you generated. Red Hat provides an Ansible playbook that you run to simplify this process. Prerequisites You downloaded the modules in "Downloading playbook dependencies". You downloaded the playbooks in "Downloading the installation playbooks". The infrastructure ID from the installation program's metadata file is set as an environment variable ( USDINFRA_ID ). The inventory.yaml , common.yaml , and control-plane.yaml Ansible playbooks are in a common directory. You have the three Ignition files that were created in "Creating control plane Ignition config files". Procedure On a command line, change the working directory to the location of the playbooks. If the control plane Ignition config files aren't already in your working directory, copy them into it. On a command line, run the control-plane.yaml playbook: USD ansible-playbook -i inventory.yaml control-plane.yaml Run the following command to monitor the bootstrapping process: USD openshift-install wait-for bootstrap-complete You will see messages that confirm that the control plane machines are running and have joined the cluster: INFO API v1.27.3 up INFO Waiting up to 30m0s for bootstrapping to complete... ... INFO It is now safe to remove the bootstrap resources 5.19. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 5.20. Deleting bootstrap resources from RHOSP Delete the bootstrap resources that you no longer need. Prerequisites You downloaded the modules in "Downloading playbook dependencies". You downloaded the playbooks in "Downloading the installation playbooks". The inventory.yaml , common.yaml , and down-bootstrap.yaml Ansible playbooks are in a common directory. The control plane machines are running. If you do not know the status of the machines, see "Verifying cluster status". Procedure On a command line, change the working directory to the location of the playbooks. 
On a command line, run the down-bootstrap.yaml playbook: USD ansible-playbook -i inventory.yaml down-bootstrap.yaml The bootstrap port, server, and floating IP address are deleted. Warning If you did not disable the bootstrap Ignition file URL earlier, do so now. 5.21. Creating compute machines on RHOSP After standing up the control plane, create compute machines. Red Hat provides an Ansible playbook that you run to simplify this process. Prerequisites You downloaded the modules in "Downloading playbook dependencies". You downloaded the playbooks in "Downloading the installation playbooks". The inventory.yaml , common.yaml , and compute-nodes.yaml Ansible playbooks are in a common directory. The metadata.json file that the installation program created is in the same directory as the Ansible playbooks. The control plane is active. Procedure On a command line, change the working directory to the location of the playbooks. On a command line, run the playbook: USD ansible-playbook -i inventory.yaml compute-nodes.yaml steps Approve the certificate signing requests for the machines. 5.22. Approving the certificate signing requests for your machines When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests. Prerequisites You added machines to your cluster. Procedure Confirm that the cluster recognizes the machines: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.27.3 master-1 Ready master 63m v1.27.3 master-2 Ready master 64m v1.27.3 The output lists all of the machines that you created. Note The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved. Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending ... In this example, two machines are joining the cluster. You might see more approved CSRs in the list. If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters. Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). 
If a request is not approved, then the oc exec , oc rsh , and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note Some Operators might not become available until some CSRs are approved. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ... If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines: To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.27.3 master-1 Ready master 73m v1.27.3 master-2 Ready master 74m v1.27.3 worker-0 Ready worker 11m v1.27.3 worker-1 Ready worker 11m v1.27.3 Note It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status. Additional information For more information on CSRs, see Certificate Signing Requests . 5.23. Verifying a successful installation Verify that the OpenShift Container Platform installation is complete. Prerequisites You have the installation program ( openshift-install ) Procedure On a command line, enter: USD openshift-install --log-level debug wait-for install-complete The program outputs the console URL, as well as the administrator's login information. 5.24. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.14, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service 5.25. steps Customize your cluster . 
If necessary, you can opt out of remote health reporting . If you need to enable external access to node ports, configure ingress cluster traffic by using a node port . If you did not configure RHOSP to accept application traffic over floating IP addresses, configure RHOSP access with floating IP addresses . | [
"sudo subscription-manager register # If not done already",
"sudo subscription-manager attach --pool=USDYOUR_POOLID # If not done already",
"sudo subscription-manager repos --disable=* # If not done already",
"sudo subscription-manager repos --enable=rhel-8-for-x86_64-baseos-rpms --enable=openstack-16-tools-for-rhel-8-x86_64-rpms --enable=ansible-2.9-for-rhel-8-x86_64-rpms --enable=rhel-8-for-x86_64-appstream-rpms",
"sudo yum install python3-openstackclient ansible python3-openstacksdk python3-netaddr ansible-collections-openstack",
"sudo alternatives --set python /usr/bin/python3",
"xargs -n 1 curl -O <<< ' https://raw.githubusercontent.com/openshift/installer/release-4.14/upi/openstack/bootstrap.yaml https://raw.githubusercontent.com/openshift/installer/release-4.14/upi/openstack/common.yaml https://raw.githubusercontent.com/openshift/installer/release-4.14/upi/openstack/compute-nodes.yaml https://raw.githubusercontent.com/openshift/installer/release-4.14/upi/openstack/control-plane.yaml https://raw.githubusercontent.com/openshift/installer/release-4.14/upi/openstack/inventory.yaml https://raw.githubusercontent.com/openshift/installer/release-4.14/upi/openstack/network.yaml https://raw.githubusercontent.com/openshift/installer/release-4.14/upi/openstack/security-groups.yaml https://raw.githubusercontent.com/openshift/installer/release-4.14/upi/openstack/down-bootstrap.yaml https://raw.githubusercontent.com/openshift/installer/release-4.14/upi/openstack/down-compute-nodes.yaml https://raw.githubusercontent.com/openshift/installer/release-4.14/upi/openstack/down-control-plane.yaml https://raw.githubusercontent.com/openshift/installer/release-4.14/upi/openstack/down-load-balancers.yaml https://raw.githubusercontent.com/openshift/installer/release-4.14/upi/openstack/down-network.yaml https://raw.githubusercontent.com/openshift/installer/release-4.14/upi/openstack/down-security-groups.yaml https://raw.githubusercontent.com/openshift/installer/release-4.14/upi/openstack/down-containers.yaml'",
"tar -xvf openshift-install-linux.tar.gz",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"file <name_of_downloaded_file>",
"openstack image create --container-format=bare --disk-format=qcow2 --file rhcos-USD{RHCOS_VERSION}-openstack.qcow2 rhcos",
"openstack network list --long -c ID -c Name -c \"Router Type\"",
"+--------------------------------------+----------------+-------------+ | ID | Name | Router Type | +--------------------------------------+----------------+-------------+ | 148a8023-62a7-4672-b018-003462f8d7dc | public_network | External | +--------------------------------------+----------------+-------------+",
"openstack floating ip create --description \"API <cluster_name>.<base_domain>\" <external_network>",
"openstack floating ip create --description \"Ingress <cluster_name>.<base_domain>\" <external_network>",
"openstack floating ip create --description \"bootstrap machine\" <external_network>",
"api.<cluster_name>.<base_domain>. IN A <API_FIP> *.apps.<cluster_name>.<base_domain>. IN A <apps_FIP>",
"api.<cluster_name>.<base_domain>. IN A <api_port_IP> *.apps.<cluster_name>.<base_domain>. IN A <ingress_port_IP>",
"clouds: shiftstack: auth: auth_url: http://10.10.14.42:5000/v3 project_name: shiftstack username: <username> password: <password> user_domain_name: Default project_domain_name: Default dev-env: region_name: RegionOne auth: username: <username> password: <password> project_name: 'devonly' auth_url: 'https://10.10.14.22:5001/v2.0'",
"clouds: shiftstack: cacert: \"/etc/pki/ca-trust/source/anchors/ca.crt.pem\"",
"oc edit configmap -n openshift-config cloud-provider-config",
"./openshift-install create install-config --dir <installation_directory> 1",
"rm -rf ~/.powervs",
"apiVersion: v1 baseDomain: example.com controlPlane: name: master platform: {} replicas: 3 compute: - name: worker platform: openstack: type: ml.large replicas: 3 metadata: name: example networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 serviceNetwork: - 172.30.0.0/16 networkType: OVNKubernetes platform: openstack: cloud: mycloud externalNetwork: external computeFlavor: m1.xlarge apiFloatingIP: 128.0.0.1 fips: false pullSecret: '{\"auths\": ...}' sshKey: ssh-ed25519 AAAA",
"python -c ' import yaml; path = \"install-config.yaml\"; data = yaml.safe_load(open(path)); data[\"networking\"][\"machineNetwork\"] = [{\"cidr\": \"192.168.0.0/18\"}]; 1 open(path, \"w\").write(yaml.dump(data, default_flow_style=False))'",
"python -c ' import yaml; path = \"install-config.yaml\"; data = yaml.safe_load(open(path)); data[\"compute\"][0][\"replicas\"] = 0; open(path, \"w\").write(yaml.dump(data, default_flow_style=False))'",
"openstack network create --project openshift",
"openstack subnet create --project openshift",
"openstack subnet create --dhcp --host-route destination=169.254.169.254/32,gateway=192.0.2.2",
"platform: openstack: apiVIPs: 1 - 192.0.2.13 ingressVIPs: 2 - 192.0.2.23 machinesSubnet: fa806b2f-ac49-4bce-b9db-124bc64209bf # networking: machineNetwork: - cidr: 192.0.2.0/24",
"./openshift-install create manifests --dir <installation_directory> 1",
"rm -f openshift/99_openshift-cluster-api_master-machines-*.yaml openshift/99_openshift-cluster-api_worker-machineset-*.yaml openshift/99_openshift-machine-api_master-control-plane-machine-set.yaml",
"./openshift-install create ignition-configs --dir <installation_directory> 1",
". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign",
"export INFRA_ID=USD(jq -r .infraID metadata.json)",
"import base64 import json import os with open('bootstrap.ign', 'r') as f: ignition = json.load(f) files = ignition['storage'].get('files', []) infra_id = os.environ.get('INFRA_ID', 'openshift').encode() hostname_b64 = base64.standard_b64encode(infra_id + b'-bootstrap\\n').decode().strip() files.append( { 'path': '/etc/hostname', 'mode': 420, 'contents': { 'source': 'data:text/plain;charset=utf-8;base64,' + hostname_b64 } }) ca_cert_path = os.environ.get('OS_CACERT', '') if ca_cert_path: with open(ca_cert_path, 'r') as f: ca_cert = f.read().encode() ca_cert_b64 = base64.standard_b64encode(ca_cert).decode().strip() files.append( { 'path': '/opt/openshift/tls/cloud-ca-cert.pem', 'mode': 420, 'contents': { 'source': 'data:text/plain;charset=utf-8;base64,' + ca_cert_b64 } }) ignition['storage']['files'] = files; with open('bootstrap.ign', 'w') as f: json.dump(ignition, f)",
"openstack image create --disk-format=raw --container-format=bare --file bootstrap.ign <image_name>",
"openstack image show <image_name>",
"openstack catalog show image",
"openstack token issue -c id -f value",
"{ \"ignition\": { \"config\": { \"merge\": [{ \"source\": \"<storage_url>\", 1 \"httpHeaders\": [{ \"name\": \"X-Auth-Token\", 2 \"value\": \"<token_ID>\" 3 }] }] }, \"security\": { \"tls\": { \"certificateAuthorities\": [{ \"source\": \"data:text/plain;charset=utf-8;base64,<base64_encoded_certificate>\" 4 }] } }, \"version\": \"3.2.0\" } }",
"for index in USD(seq 0 2); do MASTER_HOSTNAME=\"USDINFRA_ID-master-USDindex\\n\" python -c \"import base64, json, sys; ignition = json.load(sys.stdin); storage = ignition.get('storage', {}); files = storage.get('files', []); files.append({'path': '/etc/hostname', 'mode': 420, 'contents': {'source': 'data:text/plain;charset=utf-8;base64,' + base64.standard_b64encode(b'USDMASTER_HOSTNAME').decode().strip(), 'verification': {}}, 'filesystem': 'root'}); storage['files'] = files; ignition['storage'] = storage json.dump(ignition, sys.stdout)\" <master.ign >\"USDINFRA_ID-master-USDindex-ignition.json\" done",
"# The public network providing connectivity to the cluster. If not # provided, the cluster external connectivity must be provided in another # way. # Required for os_api_fip, os_ingress_fip, os_bootstrap_fip. os_external_network: 'external'",
"# OpenShift API floating IP address. If this value is non-empty, the # corresponding floating IP will be attached to the Control Plane to # serve the OpenShift API. os_api_fip: '203.0.113.23' # OpenShift Ingress floating IP address. If this value is non-empty, the # corresponding floating IP will be attached to the worker nodes to serve # the applications. os_ingress_fip: '203.0.113.19' # If this value is non-empty, the corresponding floating IP will be # attached to the bootstrap machine. This is needed for collecting logs # in case of install failure. os_bootstrap_fip: '203.0.113.20'",
"ansible-playbook -i inventory.yaml security-groups.yaml",
"ansible-playbook -i inventory.yaml network.yaml",
"openstack subnet set --dns-nameserver <server_1> --dns-nameserver <server_2> \"USDINFRA_ID-nodes\"",
"all: hosts: localhost: ansible_connection: local ansible_python_interpreter: \"{{ansible_playbook_python}}\" # User-provided values os_subnet_range: '10.0.0.0/16' os_flavor_master: 'my-bare-metal-flavor' 1 os_flavor_worker: 'my-bare-metal-flavor' 2 os_image_rhcos: 'rhcos' os_external_network: 'external'",
"./openshift-install wait-for install-complete --log-level debug",
"ansible-playbook -i inventory.yaml bootstrap.yaml",
"openstack console log show \"USDINFRA_ID-bootstrap\"",
"ansible-playbook -i inventory.yaml control-plane.yaml",
"openshift-install wait-for bootstrap-complete",
"INFO API v1.27.3 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"ansible-playbook -i inventory.yaml down-bootstrap.yaml",
"ansible-playbook -i inventory.yaml compute-nodes.yaml",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.27.3 master-1 Ready master 63m v1.27.3 master-2 Ready master 64m v1.27.3",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.27.3 master-1 Ready master 73m v1.27.3 master-2 Ready master 74m v1.27.3 worker-0 Ready worker 11m v1.27.3 worker-1 Ready worker 11m v1.27.3",
"openshift-install --log-level debug wait-for install-complete"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/installing_on_openstack/installing-openstack-user |
7.4. Additional Configuration for Identity and Authentication Providers | 7.4. Additional Configuration for Identity and Authentication Providers 7.4.1. Adjusting User Name Formats 7.4.1.1. Defining the Regular Expression for Parsing Full User Names SSSD parses full user name strings into the user name and domain components. By default, SSSD interprets full user names in the format user_name@domain_name based on the following regular expression in Python syntax: Note For Identity Management and Active Directory providers, the default user name format is user_name @ domain_name or NetBIOS_name \ user_name . To adjust how SSSD interprets full user names: Open the /etc/sssd/sssd.conf file. Use the re_expression option to define a custom regular expression. To define the regular expressions globally for all domains, add re_expression to the [sssd] section of sssd.conf . To define the regular expressions individually for a particular domain, add re_expression to the corresponding domain section of sssd.conf . For example, to configure a regular expression for the LDAP domain: For details, see the descriptions for re_expression in the SPECIAL SECTIONS and DOMAIN SECTIONS parts of the sssd.conf (5) man page. 7.4.1.2. Defining How SSSD Prints Full User Names If the use_fully_qualified_names option is enabled in the /etc/sssd/sssd.conf file, SSSD prints full user names in the format name@domain based on the following expansion by default: Note If use_fully_qualified_names is not set or is explicitly set to false for trusted domains, only the user name is printed, without the domain component. To adjust the format in which SSSD prints full user names: Open the /etc/sssd/sssd.conf file. Use the full_name_format option to define the expansion for the full user name format: To define the expansion globally for all domains, add full_name_format to the [sssd] section of sssd.conf . To define the expansion individually for a particular domain, add full_name_format to the corresponding domain section of sssd.conf . For details, see the descriptions for full_name_format in the SPECIAL SECTIONS and DOMAIN SECTIONS parts of the sssd.conf (5) man page. In some name configurations, SSSD could strip the domain component of the name, which can cause authentication errors. Because of this, if you set full_name_format to a non-standard value, a warning will prompt you to change it to a more standard format. 7.4.2. Enabling Offline Authentication SSSD does not cache user credentials by default. When processing authentication requests, SSSD always contacts the identity provider. If the provider is unavailable, user authentication fails. Important SSSD never caches passwords in plain text. It stores only a hash of the password. To ensure that users can authenticate even when the identity provider is unavailable, enable credential caching: Open the /etc/sssd/sssd.conf file. In a domain section, add the cache_credentials = true setting: Optional, but recommended. Configure a time limit for how long SSSD allows offline authentication if the identity provider is unavailable. Configure the PAM service to work with SSSD. See Section 7.5.2, "Configuring Services: PAM" . Use the offline_credentials_expiration option to specify the time limit. For example, to specify that users are able to authenticate offline for 3 days since the last successful login: For details on offline_credentials_expiration , see the sssd.conf (5) man page. 7.4.3. 
Configuring DNS Service Discovery If the identity or authentication server is not explicitly defined in the /etc/sssd/sssd.conf file, SSSD can discover the server dynamically using DNS service discovery [1] . For example, if sssd.conf includes the id_provider = ldap setting, but the ldap_uri option does not specify any host name or IP address, SSSD uses DNS service discovery to discover the server dynamically. Note SSSD cannot dynamically discover backup servers, only the primary server. Configuring SSSD for DNS Service Discovery Open the /etc/sssd/sssd.conf file. Set the primary server value to _srv_ . For an LDAP provider, the primary server is set using the ldap_uri option: Enable service discovery in the password change provider by setting a service type: Optional. By default, the service discovery uses the domain portion of the system host name as the domain name. To use a different DNS domain, specify the domain name in the dns_discovery_domain option. Optional. By default, the service discovery scans for the LDAP service type. To use a different service type, specify the type in the ldap_dns_service_name option. Optional. By default, SSSD attempts to look up an IPv4 address. If the attempt fails, SSSD attempts to look up an IPv6 address. To customize this behavior, use the lookup_family_order option. See the sssd.conf (5) man page for details. For every service with which you want to use service discovery, add a DNS record to the DNS server: 7.4.4. Defining Access Control Using the simple Access Provider The simple access provider allows or denies access based on a list of user names or groups. It enables you to restrict access to specific machines. For example, on company laptops, you can use the simple access provider to restrict access to only a specific user or a specific group. Other users or groups will not be allowed to log in even if they authenticate successfully against the configured authentication provider. Configuring simple Access Provider Rules Open the /etc/sssd/sssd.conf file. Set the access_provider option to simple : Define the access control rules for users. Choose one of the following: To allow access to users, use the simple_allow_users option. To deny access to users, use the simple_deny_users option. Important Allowing access to specific users is considered safer than denying. If you deny access to specific users, you automatically allow access to everyone else. Define the access control rules for groups. Choose one of the following: To allow access to groups, use the simple_allow_groups option. To deny access to groups, use the simple_deny_groups option. Important Allowing access to specific groups is considered safer than denying. If you deny access to specific groups, you automatically allow access to everyone else. The following example allows access to user1 , user2 , and members of group1 , while denying access to all other users. For details, see the sssd-simple (5) man page. 7.4.5. Defining Access Control Using the LDAP Access Filter When the access_provider option is set in /etc/sssd/sssd.conf , SSSD uses the specified access provider to evaluate which users are granted access to the system. If the access provider you are using is an extension of the LDAP provider type, you can also specify an LDAP access control filter that a user must match in order to be allowed access to the system. For example, when using an Active Directory (AD) server as the access provider, you can restrict access to the Linux system only to specified AD users. 
All other users that do not match the specified filter will be denied access. Note The access filter is applied on the LDAP user entry only. Therefore, using this type of access control on nested groups might not work. To apply access control on nested groups, see Section 7.4.4, "Defining Access Control Using the simple Access Provider" . Important When using offline caching, SSSD checks if the user's most recent online login attempt was successful. Users who logged in successfully during the most recent online login will still be able to log in offline, even if they do not match the access filter. Configuring SSSD to Apply an LDAP Access Filter Open the /etc/sssd/sssd.conf file. In the [domain] section, specify the LDAP access control filter. For an LDAP access provider, use the ldap_access_filter option. See the sssd-ldap (5) man page for details. For an AD access provider, use the ad_access_filter option. See the sssd-ad (5) man page for details. For example, to allow access only to AD users who belong to the admins user group and have a unixHomeDirectory attribute set: SSSD can also check results by the authorizedService or host attribute in an entry. In fact, all options - LDAP filter, authorizedService , and host - can be evaluated, depending on the user entry and the configuration. The ldap_access_order parameter lists all access control methods to use, in order of how they should be evaluated. The attributes in the user entry to use to evaluate authorized services or allowed hosts can be customized. Additional access control parameters are listed in the sssd-ldap(5) man page. [1] DNS service discovery enables applications to check the SRV records in a given domain for certain services of a certain type, and then returns any servers that match the required type. DNS service discovery is defined in RFC 2782 . | [
"(?P<name>[^@]+)@?(?P<domain>[^@]*USD)",
"[domain/ LDAP ] [... file truncated ...] re_expression = (?P<domain>[^\\\\]*?)\\\\?(?P<name>[^\\\\]+USD)",
"%1USDs@%2USDs",
"[domain/ domain_name ] cache_credentials = true",
"[pam] offline_credentials_expiration = 3",
"[domain/ domain_name ] id_provider = ldap ldap_uri = _srv_",
"[domain/ domain_name ] id_provider = ldap ldap_uri = _srv_ chpass_provider = ldap ldap_chpass_dns_service_name = ldap",
"_service._protocol._domain TTL priority weight port host_name",
"[domain/ domain_name ] access_provider = simple",
"[domain/ domain_name ] access_provider = simple simple_allow_users = user1 , user2 simple_allow_groups = group1",
"[domain/ AD_domain_name ] access provider = ad [... file truncated ...] ad_access_filter = (&(memberOf=cn=admins,ou=groups,dc=example,dc=com)(unixHomeDirectory=*))",
"[domain/example.com] access_provider = ldap ldap_access_filter = memberOf=cn=allowedusers,ou=Groups,dc=example,dc=com ldap_access_order = filter, host, authorized_service"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/system-level_authentication_guide/sssd-configure-additional-provider-options |
Planning and Prerequisites Guide | Planning and Prerequisites Guide Red Hat Virtualization 4.3 Planning the installation and configuration of Red Hat Virtualization 4.3 Red Hat Virtualization Documentation Team Red Hat Customer Content Services [email protected] Abstract This document provides requirements, options, and recommendations for Red Hat Virtualization environments. | null | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/planning_and_prerequisites_guide/index |
Chapter 7. Installing a Red Hat Enterprise Linux 6 Guest Virtual Machine on a Red Hat Enterprise Linux 6 Host | Chapter 7. Installing a Red Hat Enterprise Linux 6 Guest Virtual Machine on a Red Hat Enterprise Linux 6 Host This chapter covers how to install a Red Hat Enterprise Linux 6 guest virtual machine on a Red Hat Enterprise Linux 6 host. These procedures assume that the KVM hypervisor and all other required packages are installed and the host is configured for virtualization. Note For more information on installing the virtualization packages, refer to Chapter 5, Installing the Virtualization Packages . 7.1. Creating a Red Hat Enterprise Linux 6 Guest with Local Installation Media This procedure covers creating a Red Hat Enterprise Linux 6 guest virtual machine with a locally stored installation DVD or DVD image. DVD images are available from http://access.redhat.com for Red Hat Enterprise Linux 6. Procedure 7.1. Creating a Red Hat Enterprise Linux 6 guest virtual machine with virt-manager Optional: Preparation Prepare the storage environment for the virtual machine. For more information on preparing storage, refer to the Red Hat Enterprise Linux 6 Virtualization Administration Guide . Important Various storage types may be used for storing guest virtual machines. However, for a virtual machine to be able to use migration features the virtual machine must be created on networked storage. Red Hat Enterprise Linux 6 requires at least 1GB of storage space. However, Red Hat recommends at least 5GB of storage space for a Red Hat Enterprise Linux 6 installation and for the procedures in this guide. Open virt-manager and start the wizard Open virt-manager by executing the virt-manager command as root or opening Applications System Tools Virtual Machine Manager . Figure 7.1. The Virtual Machine Manager window Click on the Create a new virtual machine button to start the new virtualized guest wizard. Figure 7.2. The Create a new virtual machine button The New VM window opens. Name the virtual machine Virtual machine names can contain letters, numbers and the following characters: ' _ ', ' . ' and ' - '. Virtual machine names must be unique for migration and cannot consist only of numbers. Choose the Local install media (ISO image or CDROM) radio button. Figure 7.3. The New VM window - Step 1 Click Forward to continue. Select the installation media Select the appropriate radio button for your installation media. Figure 7.4. Locate your install media If you wish to install from a CD-ROM or DVD, select the Use CDROM or DVD radio button, and select the appropriate disk drive from the drop-down list of drives available. If you wish to install from an ISO image, select Use ISO image , and then click the Browse... button to open the Locate media volume window. Select the installation image you wish to use, and click Choose Volume . If no images are displayed in the Locate media volume window, click on the Browse Local button to browse the host machine for the installation image or DVD drive containing the installation disk. Select the installation image or DVD drive containing the installation disk and click Open ; the volume is selected for use and you are returned to the Create a new virtual machine wizard. Important For ISO image files and guest storage images, the recommended location to use is /var/lib/libvirt/images/ . Any other location may require additional configuration by SELinux. 
Refer to the Red Hat Enterprise Linux 6 Virtualization Administration Guide for more details on configuring SELinux. Select the operating system type and version which match the installation media you have selected. Figure 7.5. The New VM window - Step 2 Click Forward to continue. Set RAM and virtual CPUs Choose appropriate values for the virtual CPUs and RAM allocation. These values affect the host's and guest's performance. Memory and virtual CPUs can be overcommitted. For more information on overcommitting, refer to the Red Hat Enterprise Linux 6 Virtualization Administration Guide . Virtual machines require sufficient physical memory (RAM) to run efficiently and effectively. Red Hat supports a minimum of 512MB of RAM for a virtual machine. Red Hat recommends at least 1024MB of RAM for each logical core. Assign sufficient virtual CPUs for the virtual machine. If the virtual machine runs a multithreaded application, assign the number of virtual CPUs the guest virtual machine will require to run efficiently. You cannot assign more virtual CPUs than there are physical processors (or hyper-threads) available on the host system. The number of virtual CPUs available is noted in the Up to X available field. Figure 7.6. The new VM window - Step 3 Click Forward to continue. Enable and assign storage Enable and assign storage for the Red Hat Enterprise Linux 6 guest virtual machine. Assign at least 5GB for a desktop installation or at least 1GB for a minimal installation. Note Live and offline migrations require virtual machines to be installed on shared network storage. For information on setting up shared storage for virtual machines, refer to the Red Hat Enterprise Linux Virtualization Administration Guide . With the default local storage Select the Create a disk image on the computer's hard drive radio button to create a file-based image in the default storage pool, the /var/lib/libvirt/images/ directory. Enter the size of the disk image to be created. If the Allocate entire disk now check box is selected, a disk image of the size specified will be created immediately. If not, the disk image will grow as it becomes filled. Note Although the storage pool is a virtual container it is limited by two factors: maximum size allowed to it by qemu-kvm and the size of the disk on the host physical machine. Storage pools may not exceed the size of the disk on the host physical machine. The maximum sizes are as follows: virtio-blk = 2^63 bytes or 8 Exabytes(using raw files or disk) Ext4 = ~ 16 TB (using 4 KB block size) XFS = ~8 Exabytes qcow2 and host file systems keep their own metadata and scalability should be evaluated/tuned when trying very large image sizes. Using raw disks means fewer layers that could affect scalability or max size. Figure 7.7. The New VM window - Step 4 Click Forward to create a disk image on the local hard drive. Alternatively, select Select managed or other existing storage , then select Browse to configure managed storage. With a storage pool If you selected Select managed or other existing storage in the step to use a storage pool and clicked Browse , the Locate or create storage volume window will appear. Figure 7.8. The Locate or create storage volume window Select a storage pool from the Storage Pools list. Optional: Click on the New Volume button to create a new storage volume. The Add a Storage Volume screen will appear. Enter the name of the new storage volume. Choose a format option from the Format drop-down menu. 
Format options include raw, cow, qcow, qcow2, qed, vmdk, and vpc. Adjust other fields as desired. Figure 7.9. The Add a Storage Volume window Click Finish to continue. Verify and finish Verify that no errors were made during the wizard and that everything appears as expected. Select the Customize configuration before install check box to change the guest's storage or network devices, to use the paravirtualized drivers, or to add additional devices. Click on the Advanced options down arrow to inspect and modify advanced options. For a standard Red Hat Enterprise Linux 6 installation, none of these options require modification. Figure 7.10. The New VM window - local storage Click Finish to continue into the Red Hat Enterprise Linux installation sequence. For more information on installing Red Hat Enterprise Linux 6, refer to the Red Hat Enterprise Linux 6 Installation Guide. A Red Hat Enterprise Linux 6 guest virtual machine is now created from an ISO installation disc image. After the installation completes, you can connect to the guest operating system. For more information, see Section 6.5, "Connecting to Virtual Machines". | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_host_configuration_and_guest_installation_guide/chap-Virtualization_Host_Configuration_and_Guest_Installation_Guide-RHEL6_Install
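The virt-manager procedure above can also be driven from the command line with virt-install. The following is a minimal sketch rather than part of the original guide: the guest name, ISO path, disk size, and os-variant value are assumptions and should be adjusted to your environment.
# Minimal virt-install sketch (guest name, ISO path, image size, and os-variant are assumed values):
virt-install \
  --name rhel6-guest \
  --ram 1024 \
  --vcpus 1 \
  --cdrom /var/lib/libvirt/images/rhel6.iso \
  --disk path=/var/lib/libvirt/images/rhel6-guest.img,size=5 \
  --os-variant rhel6 \
  --network network=default
As with the wizard, keeping the ISO and the disk image under /var/lib/libvirt/images/ avoids additional SELinux configuration.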
Storage | Storage OpenShift Container Platform 4.9 Configuring and managing storage in OpenShift Container Platform Red Hat OpenShift Documentation Team | [
"df -h /var/lib",
"Filesystem Size Used Avail Use% Mounted on /dev/sda1 69G 32G 34G 49% /",
"oc delete pv <pv-name>",
"oc get pv",
"NAME CAPACITY ACCESSMODES RECLAIMPOLICY STATUS CLAIM STORAGECLASS REASON AGE pvc-b6efd8da-b7b5-11e6-9d58-0ed433a7dd94 4Gi RWO Delete Bound default/claim1 manual 10s pvc-b95650f8-b7b5-11e6-9d58-0ed433a7dd94 4Gi RWO Delete Bound default/claim2 manual 6s pvc-bb3ca71d-b7b5-11e6-9d58-0ed433a7dd94 4Gi RWO Delete Bound default/claim3 manual 3s",
"oc patch pv <your-pv-name> -p '{\"spec\":{\"persistentVolumeReclaimPolicy\":\"Retain\"}}'",
"oc get pv",
"NAME CAPACITY ACCESSMODES RECLAIMPOLICY STATUS CLAIM STORAGECLASS REASON AGE pvc-b6efd8da-b7b5-11e6-9d58-0ed433a7dd94 4Gi RWO Delete Bound default/claim1 manual 10s pvc-b95650f8-b7b5-11e6-9d58-0ed433a7dd94 4Gi RWO Delete Bound default/claim2 manual 6s pvc-bb3ca71d-b7b5-11e6-9d58-0ed433a7dd94 4Gi RWO Retain Bound default/claim3 manual 3s",
"apiVersion: v1 kind: PersistentVolume metadata: name: pv0001 1 spec: capacity: storage: 5Gi 2 accessModes: - ReadWriteOnce 3 persistentVolumeReclaimPolicy: Retain 4 status:",
"oc get pv <pv-claim>",
"apiVersion: v1 kind: PersistentVolume metadata: name: pv0001 spec: capacity: storage: 1Gi accessModes: - ReadWriteOnce mountOptions: 1 - nfsvers=4.1 nfs: path: /tmp server: 172.17.0.2 persistentVolumeReclaimPolicy: Retain claimRef: name: claim1 namespace: default",
"kind: PersistentVolumeClaim apiVersion: v1 metadata: name: myclaim 1 spec: accessModes: - ReadWriteOnce 2 resources: requests: storage: 8Gi 3 storageClassName: gold 4 status:",
"kind: Pod apiVersion: v1 metadata: name: mypod spec: containers: - name: myfrontend image: dockerfile/nginx volumeMounts: - mountPath: \"/var/www/html\" 1 name: mypd 2 volumes: - name: mypd persistentVolumeClaim: claimName: myclaim 3",
"apiVersion: v1 kind: PersistentVolume metadata: name: block-pv spec: capacity: storage: 10Gi accessModes: - ReadWriteOnce volumeMode: Block 1 persistentVolumeReclaimPolicy: Retain fc: targetWWNs: [\"50060e801049cfd1\"] lun: 0 readOnly: false",
"apiVersion: v1 kind: PersistentVolumeClaim metadata: name: block-pvc spec: accessModes: - ReadWriteOnce volumeMode: Block 1 resources: requests: storage: 10Gi",
"apiVersion: v1 kind: Pod metadata: name: pod-with-block-volume spec: containers: - name: fc-container image: fedora:26 command: [\"/bin/sh\", \"-c\"] args: [ \"tail -f /dev/null\" ] volumeDevices: 1 - name: data devicePath: /dev/xvda 2 volumes: - name: data persistentVolumeClaim: claimName: block-pvc 3",
"oc create secret generic <secret-name> --from-literal=azurestorageaccountname=<storage-account> \\ 1 --from-literal=azurestorageaccountkey=<storage-account-key> 2",
"apiVersion: \"v1\" kind: \"PersistentVolume\" metadata: name: \"pv0001\" 1 spec: capacity: storage: \"5Gi\" 2 accessModes: - \"ReadWriteOnce\" storageClassName: azure-file-sc azureFile: secretName: <secret-name> 3 shareName: share-1 4 readOnly: false",
"apiVersion: \"v1\" kind: \"PersistentVolumeClaim\" metadata: name: \"claim1\" 1 spec: accessModes: - \"ReadWriteOnce\" resources: requests: storage: \"5Gi\" 2 storageClassName: azure-file-sc 3 volumeName: \"pv0001\" 4",
"apiVersion: v1 kind: Pod metadata: name: pod-name 1 spec: containers: volumeMounts: - mountPath: \"/data\" 2 name: azure-file-share volumes: - name: azure-file-share persistentVolumeClaim: claimName: claim1 3",
"apiVersion: \"v1\" kind: \"PersistentVolume\" metadata: name: \"pv0001\" 1 spec: capacity: storage: \"5Gi\" 2 accessModes: - \"ReadWriteOnce\" cinder: 3 fsType: \"ext3\" 4 volumeID: \"f37a03aa-6212-4c62-a805-9ce139fab180\" 5",
"oc create -f cinder-persistentvolume.yaml",
"oc create serviceaccount <service_account>",
"oc adm policy add-scc-to-user <new_scc> -z <service_account> -n <project>",
"apiVersion: v1 kind: ReplicationController metadata: name: frontend-1 spec: replicas: 1 1 selector: 2 name: frontend template: 3 metadata: labels: 4 name: frontend 5 spec: containers: - image: openshift/hello-openshift name: helloworld ports: - containerPort: 8080 protocol: TCP restartPolicy: Always serviceAccountName: <service_account> 6 securityContext: fsGroup: 7777 7",
"apiVersion: v1 kind: PersistentVolume metadata: name: pv0001 spec: capacity: storage: 1Gi accessModes: - ReadWriteOnce fc: wwids: [scsi-3600508b400105e210000900000490000] 1 targetWWNs: ['500a0981891b8dc5', '500a0981991b8dc5'] 2 lun: 2 3 fsType: ext4",
"{ \"fooServer\": \"192.168.0.1:1234\", 1 \"fooVolumeName\": \"bar\", \"kubernetes.io/fsType\": \"ext4\", 2 \"kubernetes.io/readwrite\": \"ro\", 3 \"kubernetes.io/secret/<key name>\": \"<key value>\", 4 \"kubernetes.io/secret/<another key name>\": \"<another key value>\", }",
"{ \"status\": \"<Success/Failure/Not supported>\", \"message\": \"<Reason for success/failure>\" }",
"apiVersion: v1 kind: PersistentVolume metadata: name: pv0001 1 spec: capacity: storage: 1Gi 2 accessModes: - ReadWriteOnce flexVolume: driver: openshift.com/foo 3 fsType: \"ext4\" 4 secretRef: foo-secret 5 readOnly: true 6 options: 7 fooServer: 192.168.0.1:1234 fooVolumeName: bar",
"\"fsType\":\"<FS type>\", \"readwrite\":\"<rw>\", \"secret/key1\":\"<secret1>\" \"secret/keyN\":\"<secretN>\"",
"apiVersion: v1 kind: Pod metadata: name: test-host-mount spec: containers: - image: registry.access.redhat.com/ubi8/ubi name: test-container command: ['sh', '-c', 'sleep 3600'] volumeMounts: - mountPath: /host name: host-slash volumes: - name: host-slash hostPath: path: / type: ''",
"apiVersion: v1 kind: PersistentVolume metadata: name: task-pv-volume 1 labels: type: local spec: storageClassName: manual 2 capacity: storage: 5Gi accessModes: - ReadWriteOnce 3 persistentVolumeReclaimPolicy: Retain hostPath: path: \"/mnt/data\" 4",
"oc create -f pv.yaml",
"apiVersion: v1 kind: PersistentVolumeClaim metadata: name: task-pvc-volume spec: accessModes: - ReadWriteOnce resources: requests: storage: 1Gi storageClassName: manual",
"oc create -f pvc.yaml",
"apiVersion: v1 kind: Pod metadata: name: pod-name 1 spec: containers: securityContext: privileged: true 2 volumeMounts: - mountPath: /data 3 name: hostpath-privileged securityContext: {} volumes: - name: hostpath-privileged persistentVolumeClaim: claimName: task-pvc-volume 4",
"apiVersion: v1 kind: PersistentVolume metadata: name: iscsi-pv spec: capacity: storage: 1Gi accessModes: - ReadWriteOnce iscsi: targetPortal: 10.16.154.81:3260 iqn: iqn.2014-12.example.server:storage.target00 lun: 0 fsType: 'ext4'",
"apiVersion: v1 kind: PersistentVolume metadata: name: iscsi-pv spec: capacity: storage: 1Gi accessModes: - ReadWriteOnce iscsi: targetPortal: 10.0.0.1:3260 iqn: iqn.2016-04.test.com:storage.target00 lun: 0 fsType: ext4 chapAuthDiscovery: true 1 chapAuthSession: true 2 secretRef: name: chap-secret 3",
"apiVersion: v1 kind: PersistentVolume metadata: name: iscsi-pv spec: capacity: storage: 1Gi accessModes: - ReadWriteOnce iscsi: targetPortal: 10.0.0.1:3260 portals: ['10.0.2.16:3260', '10.0.2.17:3260', '10.0.2.18:3260'] 1 iqn: iqn.2016-04.test.com:storage.target00 lun: 0 fsType: ext4 readOnly: false",
"apiVersion: v1 kind: PersistentVolume metadata: name: iscsi-pv spec: capacity: storage: 1Gi accessModes: - ReadWriteOnce iscsi: targetPortal: 10.0.0.1:3260 portals: ['10.0.2.16:3260', '10.0.2.17:3260', '10.0.2.18:3260'] iqn: iqn.2016-04.test.com:storage.target00 lun: 0 initiatorName: iqn.2016-04.test.com:custom.iqn 1 fsType: ext4 readOnly: false",
"oc adm new-project openshift-local-storage",
"oc annotate project openshift-local-storage openshift.io/node-selector=''",
"OC_VERSION=USD(oc version -o yaml | grep openshiftVersion | grep -o '[0-9]*[.][0-9]*' | head -1)",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: local-operator-group namespace: openshift-local-storage spec: targetNamespaces: - openshift-local-storage --- apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: local-storage-operator namespace: openshift-local-storage spec: channel: \"USD{OC_VERSION}\" installPlanApproval: Automatic 1 name: local-storage-operator source: redhat-operators sourceNamespace: openshift-marketplace",
"oc apply -f openshift-local-storage.yaml",
"oc -n openshift-local-storage get pods",
"NAME READY STATUS RESTARTS AGE local-storage-operator-746bf599c9-vlt5t 1/1 Running 0 19m",
"oc get csvs -n openshift-local-storage",
"NAME DISPLAY VERSION REPLACES PHASE local-storage-operator.4.2.26-202003230335 Local Storage 4.2.26-202003230335 Succeeded",
"apiVersion: \"local.storage.openshift.io/v1\" kind: \"LocalVolume\" metadata: name: \"local-disks\" namespace: \"openshift-local-storage\" 1 spec: nodeSelector: 2 nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - ip-10-0-140-183 - ip-10-0-158-139 - ip-10-0-164-33 storageClassDevices: - storageClassName: \"local-sc\" 3 volumeMode: Filesystem 4 fsType: xfs 5 devicePaths: 6 - /path/to/device 7",
"apiVersion: \"local.storage.openshift.io/v1\" kind: \"LocalVolume\" metadata: name: \"local-disks\" namespace: \"openshift-local-storage\" 1 spec: nodeSelector: 2 nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - ip-10-0-136-143 - ip-10-0-140-255 - ip-10-0-144-180 storageClassDevices: - storageClassName: \"localblock-sc\" 3 volumeMode: Block 4 devicePaths: 5 - /path/to/device 6",
"oc create -f <local-volume>.yaml",
"oc get all -n openshift-local-storage",
"NAME READY STATUS RESTARTS AGE pod/diskmaker-manager-9wzms 1/1 Running 0 5m43s pod/diskmaker-manager-jgvjp 1/1 Running 0 5m43s pod/diskmaker-manager-tbdsj 1/1 Running 0 5m43s pod/local-storage-operator-7db4bd9f79-t6k87 1/1 Running 0 14m NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/local-storage-operator-metrics ClusterIP 172.30.135.36 <none> 8383/TCP,8686/TCP 14m NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE daemonset.apps/diskmaker-manager 3 3 3 3 3 <none> 5m43s NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/local-storage-operator 1/1 1 1 14m NAME DESIRED CURRENT READY AGE replicaset.apps/local-storage-operator-7db4bd9f79 1 1 1 14m",
"oc get pv",
"NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE local-pv-1cec77cf 100Gi RWO Delete Available local-sc 88m local-pv-2ef7cd2a 100Gi RWO Delete Available local-sc 82m local-pv-3fa1c73 100Gi RWO Delete Available local-sc 48m",
"apiVersion: v1 kind: PersistentVolume metadata: name: example-pv-filesystem spec: capacity: storage: 100Gi volumeMode: Filesystem 1 accessModes: - ReadWriteOnce persistentVolumeReclaimPolicy: Delete storageClassName: local-storage 2 local: path: /dev/xvdf 3 nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - example-node",
"apiVersion: v1 kind: PersistentVolume metadata: name: example-pv-block spec: capacity: storage: 100Gi volumeMode: Block 1 accessModes: - ReadWriteOnce persistentVolumeReclaimPolicy: Delete storageClassName: local-storage 2 local: path: /dev/xvdf 3 nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - example-node",
"oc create -f <example-pv>.yaml",
"oc get pv",
"NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE example-pv-filesystem 100Gi RWO Delete Available local-storage 3m47s example-pv1 1Gi RWO Delete Bound local-storage/pvc1 local-storage 12h example-pv2 1Gi RWO Delete Bound local-storage/pvc2 local-storage 12h example-pv3 1Gi RWO Delete Bound local-storage/pvc3 local-storage 12h",
"kind: PersistentVolumeClaim apiVersion: v1 metadata: name: local-pvc-name 1 spec: accessModes: - ReadWriteOnce volumeMode: Filesystem 2 resources: requests: storage: 100Gi 3 storageClassName: local-sc 4",
"oc create -f <local-pvc>.yaml",
"apiVersion: v1 kind: Pod spec: containers: volumeMounts: - name: local-disks 1 mountPath: /data 2 volumes: - name: localpvc persistentVolumeClaim: claimName: local-pvc-name 3",
"oc create -f <local-pod>.yaml",
"apiVersion: local.storage.openshift.io/v1alpha1 kind: LocalVolumeSet metadata: name: example-autodetect spec: nodeSelector: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - worker-0 - worker-1 storageClassName: example-storageclass 1 volumeMode: Filesystem fsType: ext4 maxDeviceCount: 10 deviceInclusionSpec: deviceTypes: 2 - disk - part deviceMechanicalProperties: - NonRotational minSize: 10G maxSize: 100G models: - SAMSUNG - Crucial_CT525MX3 vendors: - ATA - ST2000LM",
"oc apply -f local-volume-set.yaml",
"oc get pv",
"NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE local-pv-1cec77cf 100Gi RWO Delete Available example-storageclass 88m local-pv-2ef7cd2a 100Gi RWO Delete Available example-storageclass 82m local-pv-3fa1c73 100Gi RWO Delete Available example-storageclass 48m",
"apiVersion: \"local.storage.openshift.io/v1\" kind: \"LocalVolume\" metadata: name: \"local-disks\" namespace: \"openshift-local-storage\" spec: tolerations: - key: localstorage 1 operator: Equal 2 value: \"localstorage\" 3 storageClassDevices: - storageClassName: \"localblock-sc\" volumeMode: Block 4 devicePaths: 5 - /dev/xvdg",
"spec: tolerations: - key: node-role.kubernetes.io/master operator: Exists",
"oc edit localvolume <name> -n openshift-local-storage",
"oc delete pv <pv-name>",
"oc debug node/<node-name>",
"chroot /host",
"cd /mnt/openshift-local-storage/<sc-name> 1",
"rm <symlink>",
"oc delete localvolume --all --all-namespaces oc delete localvolumeset --all --all-namespaces oc delete localvolumediscovery --all --all-namespaces",
"oc delete pv <pv-name>",
"oc delete project openshift-local-storage",
"apiVersion: v1 kind: PersistentVolume metadata: name: pv0001 1 spec: capacity: storage: 5Gi 2 accessModes: - ReadWriteOnce 3 nfs: 4 path: /tmp 5 server: 172.17.0.2 6 persistentVolumeReclaimPolicy: Retain 7",
"oc get pv",
"NAME LABELS CAPACITY ACCESSMODES STATUS CLAIM REASON AGE pv0001 <none> 5Gi RWO Available 31s",
"apiVersion: v1 kind: PersistentVolumeClaim metadata: name: nfs-claim1 spec: accessModes: - ReadWriteOnce 1 resources: requests: storage: 5Gi 2 volumeName: pv0001 storageClassName: \"\"",
"oc get pvc",
"NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE nfs-claim1 Bound pv0001 5Gi RWO 2m",
"ls -lZ /opt/nfs -d",
"drwxrws---. nfsnobody 5555 unconfined_u:object_r:usr_t:s0 /opt/nfs",
"id nfsnobody",
"uid=65534(nfsnobody) gid=65534(nfsnobody) groups=65534(nfsnobody)",
"spec: containers: - name: securityContext: 1 supplementalGroups: [5555] 2",
"spec: containers: 1 - name: securityContext: runAsUser: 65534 2",
"setsebool -P virt_use_nfs 1",
"/<example_fs> *(rw,root_squash)",
"iptables -I INPUT 1 -p tcp --dport 2049 -j ACCEPT",
"iptables -I INPUT 1 -p tcp --dport 2049 -j ACCEPT",
"iptables -I INPUT 1 -p tcp --dport 20048 -j ACCEPT",
"iptables -I INPUT 1 -p tcp --dport 111 -j ACCEPT",
"apiVersion: v1 kind: PersistentVolume metadata: name: nfs1 spec: capacity: storage: 1Mi accessModes: - ReadWriteMany nfs: server: 192.168.1.1 path: \"/\"",
"apiVersion: v1 kind: PersistentVolume metadata: name: nfs2 spec: capacity: storage: 1Mi accessModes: - ReadWriteMany nfs: server: 192.168.1.1 path: \"/\"",
"echo 'Y' > /sys/module/nfsd/parameters/nfs4_disable_idmapping",
"kind: PersistentVolumeClaim apiVersion: v1 metadata: name: pvc 1 spec: accessModes: - ReadWriteOnce 2 resources: requests: storage: 1Gi 3",
"oc create -f pvc.yaml",
"vmkfstools -c <size> /vmfs/volumes/<datastore-name>/volumes/<disk-name>.vmdk",
"shell vmware-vdiskmanager -c -t 0 -s <size> -a lsilogic <disk-name>.vmdk",
"apiVersion: v1 kind: PersistentVolume metadata: name: pv1 1 spec: capacity: storage: 1Gi 2 accessModes: - ReadWriteOnce persistentVolumeReclaimPolicy: Retain vsphereVolume: 3 volumePath: \"[datastore1] volumes/myDisk\" 4 fsType: ext4 5",
"oc create -f pv1.yaml",
"apiVersion: v1 kind: PersistentVolumeClaim metadata: name: pvc1 1 spec: accessModes: - ReadWriteOnce 2 resources: requests: storage: \"1Gi\" 3 volumeName: pv1 4",
"oc create -f pvc1.yaml",
"oc create -f - << EOF apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: <storage-class> 1 annotations: storageclass.kubernetes.io/is-default-class: \"true\" provisioner: <provisioner-name> 2 parameters: EOF",
"oc new-app mysql-persistent",
"--> Deploying template \"openshift/mysql-persistent\" to project default",
"oc get pvc",
"NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE mysql Bound kubernetes-dynamic-pv-3271ffcb4e1811e8 1Gi RWO cinder 3s",
"kind: Pod apiVersion: v1 metadata: name: my-csi-app spec: containers: - name: my-frontend image: busybox volumeMounts: - mountPath: \"/data\" name: my-csi-inline-vol command: [ \"sleep\", \"1000000\" ] volumes: 1 - name: my-csi-inline-vol csi: driver: inline.storage.kubernetes.io volumeAttributes: foo: bar",
"oc create -f my-csi-app.yaml",
"apiVersion: snapshot.storage.k8s.io/v1 kind: VolumeSnapshotClass metadata: name: csi-hostpath-snap driver: hostpath.csi.k8s.io 1 deletionPolicy: Delete",
"oc create -f volumesnapshotclass.yaml",
"apiVersion: snapshot.storage.k8s.io/v1 kind: VolumeSnapshot metadata: name: mysnap spec: volumeSnapshotClassName: csi-hostpath-snap 1 source: persistentVolumeClaimName: myclaim 2",
"oc create -f volumesnapshot-dynamic.yaml",
"apiVersion: snapshot.storage.k8s.io/v1 kind: VolumeSnapshot metadata: name: snapshot-demo spec: source: volumeSnapshotContentName: mycontent 1",
"oc create -f volumesnapshot-manual.yaml",
"oc describe volumesnapshot mysnap",
"apiVersion: snapshot.storage.k8s.io/v1 kind: VolumeSnapshot metadata: name: mysnap spec: source: persistentVolumeClaimName: myclaim volumeSnapshotClassName: csi-hostpath-snap status: boundVolumeSnapshotContentName: snapcontent-1af4989e-a365-4286-96f8-d5dcd65d78d6 1 creationTime: \"2020-01-29T12:24:30Z\" 2 readyToUse: true 3 restoreSize: 500Mi",
"oc get volumesnapshotcontent",
"apiVersion: snapshot.storage.k8s.io/v1 kind: VolumeSnapshotClass metadata: name: csi-hostpath-snap driver: hostpath.csi.k8s.io deletionPolicy: Delete 1",
"oc delete volumesnapshot <volumesnapshot_name>",
"volumesnapshot.snapshot.storage.k8s.io \"mysnapshot\" deleted",
"oc delete volumesnapshotcontent <volumesnapshotcontent_name>",
"oc patch -n USDPROJECT volumesnapshot/USDNAME --type=merge -p '{\"metadata\": {\"finalizers\":null}}'",
"volumesnapshotclass.snapshot.storage.k8s.io \"csi-ocs-rbd-snapclass\" deleted",
"apiVersion: v1 kind: PersistentVolumeClaim metadata: name: myclaim-restore spec: storageClassName: csi-hostpath-sc dataSource: name: mysnap 1 kind: VolumeSnapshot 2 apiGroup: snapshot.storage.k8s.io 3 accessModes: - ReadWriteOnce resources: requests: storage: 1Gi",
"oc create -f pvc-restore.yaml",
"oc get pvc",
"apiVersion: v1 kind: PersistentVolumeClaim metadata: name: pvc-1-clone namespace: mynamespace spec: storageClassName: csi-cloning 1 accessModes: - ReadWriteOnce resources: requests: storage: 5Gi dataSource: kind: PersistentVolumeClaim name: pvc-1",
"oc create -f pvc-clone.yaml",
"oc get pvc pvc-1-clone",
"kind: Pod apiVersion: v1 metadata: name: mypod spec: containers: - name: myfrontend image: dockerfile/nginx volumeMounts: - mountPath: \"/var/www/html\" name: mypd volumes: - name: mypd persistentVolumeClaim: claimName: pvc-1-clone 1",
"apiVersion: config.openshift.io/v1 kind: FeatureGate metadata: name: cluster spec: featureSet: TechPreviewNoUpgrade 1",
"apiVersion: config.openshift.io/v1 kind: FeatureGate metadata: name: cluster spec: featureSet: CustomNoUpgrade customNoUpgrade: enabled: - CSIMigrationAWS 1",
"apiVersion: operator.openshift.io/v1 kind: ClusterCSIDriver metadata: name: efs.csi.aws.com spec: managementState: Managed",
"kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: efs-sc provisioner: efs.csi.aws.com parameters: provisioningMode: efs-ap 1 fileSystemId: fs-a5324911 2 directoryPerms: \"700\" 3 gidRangeStart: \"1000\" 4 gidRangeEnd: \"2000\" 5 basePath: \"/dynamic_provisioning\" 6",
"apiVersion: v1 kind: PersistentVolumeClaim metadata: name: test spec: storageClassName: efs-sc accessModes: - ReadWriteMany resources: requests: storage: 5Gi",
"apiVersion: v1 kind: PersistentVolume metadata: name: efs-pv spec: capacity: 1 storage: 5Gi volumeMode: Filesystem accessModes: - ReadWriteMany - ReadWriteOnce persistentVolumeReclaimPolicy: Retain csi: driver: efs.csi.aws.com volumeHandle: fs-ae66151a 2 volumeAttributes: encryptInTransit: \"false\" 3",
"oc adm must-gather [must-gather ] OUT Using must-gather plugin-in image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:125f183d13601537ff15b3239df95d47f0a604da2847b561151fedd699f5e3a5 [must-gather ] OUT namespace/openshift-must-gather-xm4wq created [must-gather ] OUT clusterrolebinding.rbac.authorization.k8s.io/must-gather-2bd8x created [must-gather ] OUT pod for plug-in image quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:125f183d13601537ff15b3239df95d47f0a604da2847b561151fedd699f5e3a5 created",
"oc get clustercsidriver efs.csi.aws.com -o yaml",
"oc describe pod Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 2m13s default-scheduler Successfully assigned default/efs-app to ip-10-0-135-94.ec2.internal Warning FailedMount 13s kubelet MountVolume.SetUp failed for volume \"pvc-d7c097e6-67ec-4fae-b968-7e7056796449\" : rpc error: code = DeadlineExceeded desc = context deadline exceeded 1 Warning FailedMount 10s kubelet Unable to attach or mount volumes: unmounted volumes=[persistent-storage], unattached volumes=[persistent-storage kube-api-access-9j477]: timed out waiting for the condition",
"oc get co storage",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE storage 4.9.0-0.nightly-2021-09-08-162532 True False False 4h26m",
"oc get pod -n openshift-cluster-csi-drivers",
"NAME READY STATUS RESTARTS AGE azure-disk-csi-driver-controller-5949bf45fd-pm4qb 11/11 Running 0 39m azure-disk-csi-driver-node-2tcxr 3/3 Running 0 53m azure-disk-csi-driver-node-2xjzm 3/3 Running 0 53m azure-disk-csi-driver-node-6wrgk 3/3 Running 0 53m azure-disk-csi-driver-node-frvx2 3/3 Running 0 53m azure-disk-csi-driver-node-lf5kb 3/3 Running 0 53m azure-disk-csi-driver-node-mqdhh 3/3 Running 0 53m azure-disk-csi-driver-operator-7d966fc6c5-x74x5 1/1 Running 0 44m",
"oc get storageclass",
"NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE managed-premium (default) kubernetes.io/azure-disk Delete WaitForFirstConsumer true 76m managed-csi disk.csi.azure.com Delete WaitForFirstConsumer true 51m 1",
"apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: csi-gce-pd-cmek provisioner: pd.csi.storage.gke.io volumeBindingMode: \"WaitForFirstConsumer\" allowVolumeExpansion: true parameters: type: pd-standard disk-encryption-kms-key: projects/<key-project-id>/locations/<location>/keyRings/<key-ring>/cryptoKeys/<key> 1",
"oc describe storageclass csi-gce-pd-cmek",
"Name: csi-gce-pd-cmek IsDefaultClass: No Annotations: None Provisioner: pd.csi.storage.gke.io Parameters: disk-encryption-kms-key=projects/key-project-id/locations/location/keyRings/ring-name/cryptoKeys/key-name,type=pd-standard AllowVolumeExpansion: true MountOptions: none ReclaimPolicy: Delete VolumeBindingMode: WaitForFirstConsumer Events: none",
"kind: PersistentVolumeClaim apiVersion: v1 metadata: name: podpvc spec: accessModes: - ReadWriteOnce storageClassName: csi-gce-pd-cmek resources: requests: storage: 6Gi",
"oc apply -f pvc.yaml",
"oc get pvc",
"NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE podpvc Bound pvc-e36abf50-84f3-11e8-8538-42010a800002 10Gi RWO csi-gce-pd-cmek 9s",
"oc get storageclass",
"NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE standard(default) cinder.csi.openstack.org Delete WaitForFirstConsumer true 46h standard-csi kubernetes.io/cinder Delete WaitForFirstConsumer true 46h",
"oc patch storageclass standard -p '{\"metadata\": {\"annotations\": {\"storageclass.kubernetes.io/is-default-class\": \"false\"}}}'",
"oc patch storageclass standard-csi -p '{\"metadata\": {\"annotations\": {\"storageclass.kubernetes.io/is-default-class\": \"true\"}}}'",
"oc get storageclass",
"NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE standard kubernetes.io/cinder Delete WaitForFirstConsumer true 46h standard-csi(default) cinder.csi.openstack.org Delete WaitForFirstConsumer true 46h",
"apiVersion: v1 kind: PersistentVolumeClaim metadata: name: cinder-claim spec: accessModes: - ReadWriteOnce resources: requests: storage: 1Gi",
"oc create -f cinder-claim.yaml",
"apiVersion: v1 kind: PersistentVolumeClaim metadata: name: pvc-manila spec: accessModes: 1 - ReadWriteMany resources: requests: storage: 10Gi storageClassName: csi-manila-gold 2",
"oc create -f pvc-manila.yaml",
"oc get pvc pvc-manila",
"apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: <storage_class_name> 1 annotations: storageclass.kubernetes.io/is-default-class: \"<boolean>\" 2 provisioner: csi.ovirt.org allowVolumeExpansion: <boolean> 3 reclaimPolicy: Delete 4 volumeBindingMode: Immediate 5 parameters: storageDomainName: <rhv-storage-domain-name> 6 thinProvisioning: \"<boolean>\" 7 csi.storage.k8s.io/fstype: <file_system_type> 8",
"apiVersion: v1 kind: PersistentVolumeClaim metadata: name: pvc-ovirt spec: storageClassName: ovirt-csi-sc 1 accessModes: - ReadWriteOnce resources: requests: storage: <volume size> 2 volumeMode: <volume mode> 3",
"oc create -f pvc-ovirt.yaml",
"oc get pvc pvc-ovirt",
"kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: thin-csi provisioner: csi.vsphere.vmware.com parameters: StoragePolicyName: \"USDopenshift-storage-policy-xxxx\" volumeBindingMode: WaitForFirstConsumer allowVolumeExpansion: false reclaimPolicy: Delete",
"oc get co storage",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE storage 4.9.0-0.nightly-2021-09-08-162532 True False False 4h26m",
"oc get pod -n openshift-cluster-csi-drivers",
"NAME READY STATUS RESTARTS AGE vmware-vsphere-csi-driver-controller-5646dbbf54-cnsx7 9/9 Running 0 4h29m vmware-vsphere-csi-driver-node-ch22q 3/3 Running 0 4h37m vmware-vsphere-csi-driver-node-gfjrb 3/3 Running 0 4h37m vmware-vsphere-csi-driver-node-ktlmp 3/3 Running 0 4h37m vmware-vsphere-csi-driver-node-lgksl 3/3 Running 0 4h37m vmware-vsphere-csi-driver-node-vb4gv 3/3 Running 0 4h37m vmware-vsphere-csi-driver-operator-7c7fc474c-p544t 1/1 Running 0 4h29m",
"NAME READY STATUS RESTARTS AGE azure-disk-csi-driver-controller-5949bf45fd-pm4qb 11/11 Running 0 39m azure-disk-csi-driver-node-2tcxr 3/3 Running 0 53m azure-disk-csi-driver-node-2xjzm 3/3 Running 0 53m azure-disk-csi-driver-node-6wrgk 3/3 Running 0 53m azure-disk-csi-driver-node-frvx2 3/3 Running 0 53m azure-disk-csi-driver-node-lf5kb 3/3 Running 0 53m azure-disk-csi-driver-node-mqdhh 3/3 Running 0 53m azure-disk-csi-driver-operator-7d966fc6c5-x74x5 1/1 Running 0 44m",
"oc get storageclass",
"NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE thin (default) kubernetes.io/vsphere-volume Delete Immediate false 5h43m thin-csi csi.vsphere.vmware.com Delete WaitForFirstConsumer false 4h38m 1",
"NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE managed-premium (default) kubernetes.io/azure-disk Delete WaitForFirstConsumer true 76m managed-csi disk.csi.azure.com Delete WaitForFirstConsumer true 51m 1",
"oc edit storageclass <storage_class_name> 1",
"apiVersion: storage.k8s.io/v1 kind: StorageClass parameters: type: gp2 reclaimPolicy: Delete allowVolumeExpansion: true 1",
"kind: PersistentVolumeClaim apiVersion: v1 metadata: name: ebs spec: storageClass: \"storageClassWithFlagSet\" accessModes: - ReadWriteOnce resources: requests: storage: 8Gi 1",
"oc describe pvc <pvc_name>",
"kind: StorageClass 1 apiVersion: storage.k8s.io/v1 2 metadata: name: <storage-class-name> 3 annotations: 4 storageclass.kubernetes.io/is-default-class: 'true' provisioner: kubernetes.io/aws-ebs 5 parameters: 6 type: gp2",
"storageclass.kubernetes.io/is-default-class: \"true\"",
"apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: annotations: storageclass.kubernetes.io/is-default-class: \"true\"",
"kubernetes.io/description: My Storage Class Description",
"apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: annotations: kubernetes.io/description: My Storage Class Description",
"kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: <storage-class-name> 1 provisioner: kubernetes.io/cinder parameters: type: fast 2 availability: nova 3 fsType: ext4 4",
"kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: <storage-class-name> 1 provisioner: kubernetes.io/aws-ebs parameters: type: io1 2 iopsPerGB: \"10\" 3 encrypted: \"true\" 4 kmsKeyId: keyvalue 5 fsType: ext4 6",
"apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: <storage-class-name> 1 provisioner: kubernetes.io/azure-disk volumeBindingMode: WaitForFirstConsumer 2 allowVolumeExpansion: true parameters: kind: Managed 3 storageaccounttype: Premium_LRS 4 reclaimPolicy: Delete",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: system:azure-cloud-provider name: <persistent-volume-binder-role> 1 rules: - apiGroups: [''] resources: ['secrets'] verbs: ['get','create']",
"oc adm policy add-cluster-role-to-user <persistent-volume-binder-role> system:serviceaccount:kube-system:persistent-volume-binder",
"kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: <azure-file> 1 provisioner: kubernetes.io/azure-file parameters: location: eastus 2 skuName: Standard_LRS 3 storageAccount: <storage-account> 4 reclaimPolicy: Delete volumeBindingMode: Immediate",
"kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: azure-file mountOptions: - uid=1500 1 - gid=1500 2 - mfsymlinks 3 provisioner: kubernetes.io/azure-file parameters: location: eastus skuName: Standard_LRS reclaimPolicy: Delete volumeBindingMode: Immediate",
"apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: <storage-class-name> 1 provisioner: kubernetes.io/gce-pd parameters: type: pd-standard 2 replication-type: none volumeBindingMode: WaitForFirstConsumer allowVolumeExpansion: true reclaimPolicy: Delete",
"kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: <storage-class-name> 1 provisioner: kubernetes.io/vsphere-volume 2 parameters: diskformat: thin 3",
"oc get storageclass",
"NAME TYPE gp2 (default) kubernetes.io/aws-ebs 1 standard kubernetes.io/aws-ebs",
"oc patch storageclass gp2 -p '{\"metadata\": {\"annotations\": {\"storageclass.kubernetes.io/is-default-class\": \"false\"}}}'",
"oc patch storageclass standard -p '{\"metadata\": {\"annotations\": {\"storageclass.kubernetes.io/is-default-class\": \"true\"}}}'",
"oc get storageclass",
"NAME TYPE gp2 kubernetes.io/aws-ebs standard (default) kubernetes.io/aws-ebs"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.9/html-single/storage/index |
Chapter 33. Security | Chapter 33. Security CardOS 5.3 smart cards with ECDSA support work correctly in OpenSC Previously, OpenSC did not correctly parse the ECDSA algorithm in the TokenInfo information provided by CardOS 5.3 smart cards. As a consequence, OpenSC did not detect these cards. The TokenInfo parser has been updated and now complies with the PKCS #15 specification. As a result, CardOS 5.3 smart cards with ECDSA support work correctly in OpenSC. (BZ# 1562277 ) Non-CCID-compliant smart card readers work in OpenSC Certain smart card readers implement PIN pad functionality that does not follow the chip card interface device (CCID) specification. Previously, OpenSC detected the PIN pad of such smart card readers, but the reader could not be used with OpenSC. With this update, the PIN pad detection has been disabled in OpenSC by default. As a result, non-CCID-compliant smart card readers can be used, but without the PIN pad feature. (BZ#1547117) The pkcs11-tool utility now supports mechanism IDs and handles ECDSA keys correctly Previously, the pkcs11-tool utility incorrectly handled EC_POINT values and support for certain vendor-specific mechanisms was missing. As a consequence, these mechanisms and certain ECDSA keys in hardware security modules (HSM) and smart cards were not supported by pkcs11-tool . With this update, the pkcs11-tool now handles EC_POINT values and vendor-specific mechanisms correctly. As a result, the utility now supports mechanism IDs and handles ECDSA keys correctly. (BZ# 1562572 ) OpenSCAP RPM verification rules no longer work incorrectly with VM and container file systems Previously, the rpminfo , rpmverify , and rpmverifyfile probes did not fully support offline mode. As a consequence, OpenSCAP RPM verification rules did not work correctly when scanning virtual machine (VM) and container file systems in offline mode. With this update, support for offline mode has been fixed, and results of scanning VM and container file systems in offline mode no longer contain false negatives. (BZ# 1556988 ) sudo no longer blocks poll() for /dev/ptmx Previously, when running a command through sudo that had the I/O logging enabled, a parent process of the command was occasionally blocked in the poll() function execution, waiting for an event on the /dev/ptmx file descriptor. Consequently, a deadlock occurred and sudo might leave the process of the command in an unresponsive state. This update adds a pseudoterminal cleanup logic, and sudo no longer causes a deadlock in the described scenario. (BZ# 1560657 ) | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.6_release_notes/bug_fixes_security |
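Related to the pkcs11-tool fix described above, you can verify which PKCS #11 mechanisms (including ECDSA, if supported) a smart card or HSM exposes through OpenSC by listing them directly. This is a generic illustration, not part of the original release note; the module path shown is the usual location on 64-bit Red Hat Enterprise Linux 7 and may differ on your system.
# List the mechanisms offered by the token through the OpenSC PKCS #11 module:
pkcs11-tool --module /usr/lib64/opensc-pkcs11.so --list-mechanisms
# Optionally, list the public key objects stored on the token:
pkcs11-tool --module /usr/lib64/opensc-pkcs11.so --list-objects --type pubkey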
Chapter 2. Alertmanager [monitoring.coreos.com/v1] | Chapter 2. Alertmanager [monitoring.coreos.com/v1] Description Alertmanager describes an Alertmanager cluster. Type object Required spec 2.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object Specification of the desired behavior of the Alertmanager cluster. More info: https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/api-conventions.md#spec-and-status status object Most recent observed status of the Alertmanager cluster. Read-only. More info: https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/api-conventions.md#spec-and-status 2.1.1. .spec Description Specification of the desired behavior of the Alertmanager cluster. More info: https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/api-conventions.md#spec-and-status Type object Property Type Description additionalPeers array (string) AdditionalPeers allows injecting a set of additional Alertmanagers to peer with to form a highly available cluster. affinity object If specified, the pod's scheduling constraints. alertmanagerConfigMatcherStrategy object The AlertmanagerConfigMatcherStrategy defines how AlertmanagerConfig objects match the alerts. In the future more options may be added. alertmanagerConfigNamespaceSelector object Namespaces to be selected for AlertmanagerConfig discovery. If nil, only check own namespace. alertmanagerConfigSelector object AlertmanagerConfigs to be selected for to merge and configure Alertmanager with. alertmanagerConfiguration object EXPERIMENTAL: alertmanagerConfiguration specifies the configuration of Alertmanager. If defined, it takes precedence over the configSecret field. This field may change in future releases. automountServiceAccountToken boolean AutomountServiceAccountToken indicates whether a service account token should be automatically mounted in the pod. If the service account has automountServiceAccountToken: true , set the field to false to opt out of automounting API credentials. baseImage string Base image that is used to deploy pods, without tag. Deprecated: use 'image' instead clusterAdvertiseAddress string ClusterAdvertiseAddress is the explicit address to advertise in cluster. Needs to be provided for non RFC1918 [1] (public) addresses. [1] RFC1918: https://tools.ietf.org/html/rfc1918 clusterGossipInterval string Interval between gossip attempts. clusterPeerTimeout string Timeout for cluster peering. clusterPushpullInterval string Interval between pushpull attempts. configMaps array (string) ConfigMaps is a list of ConfigMaps in the same namespace as the Alertmanager object, which shall be mounted into the Alertmanager Pods. 
Each ConfigMap is added to the StatefulSet definition as a volume named configmap-<configmap-name> . The ConfigMaps are mounted into /etc/alertmanager/configmaps/<configmap-name> in the 'alertmanager' container. configSecret string ConfigSecret is the name of a Kubernetes Secret in the same namespace as the Alertmanager object, which contains the configuration for this Alertmanager instance. If empty, it defaults to alertmanager-<alertmanager-name> . The Alertmanager configuration should be available under the alertmanager.yaml key. Additional keys from the original secret are copied to the generated secret and mounted into the /etc/alertmanager/config directory in the alertmanager container. If either the secret or the alertmanager.yaml key is missing, the operator provisions a minimal Alertmanager configuration with one empty receiver (effectively dropping alert notifications). containers array Containers allows injecting additional containers. This is meant to allow adding an authentication proxy to an Alertmanager pod. Containers described here modify an operator generated container if they share the same name and modifications are done via a strategic merge patch. The current container names are: alertmanager and config-reloader . Overriding containers is entirely outside the scope of what the maintainers will support and by doing so, you accept that this behaviour may break at any time without notice. containers[] object A single application container that you want to run within a pod. externalUrl string The external URL the Alertmanager instances will be available under. This is necessary to generate correct URLs. This is necessary if Alertmanager is not served from root of a DNS name. forceEnableClusterMode boolean ForceEnableClusterMode ensures Alertmanager does not deactivate the cluster mode when running with a single replica. Use case is e.g. spanning an Alertmanager cluster across Kubernetes clusters with a single replica in each. hostAliases array Pods' hostAliases configuration hostAliases[] object HostAlias holds the mapping between IP and hostnames that will be injected as an entry in the pod's hosts file. image string Image if specified has precedence over baseImage, tag and sha combinations. Specifying the version is still necessary to ensure the Prometheus Operator knows what version of Alertmanager is being configured. imagePullPolicy string Image pull policy for the 'alertmanager', 'init-config-reloader' and 'config-reloader' containers. See https://kubernetes.io/docs/concepts/containers/images/#image-pull-policy for more details. imagePullSecrets array An optional list of references to secrets in the same namespace to use for pulling prometheus and alertmanager images from registries see http://kubernetes.io/docs/user-guide/images#specifying-imagepullsecrets-on-a-pod imagePullSecrets[] object LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace. initContainers array InitContainers allows adding initContainers to the pod definition. Those can be used to e.g. fetch secrets for injection into the Alertmanager configuration from external sources. Any errors during the execution of an initContainer will lead to a restart of the Pod. More info: https://kubernetes.io/docs/concepts/workloads/pods/init-containers/ InitContainers described here modify an operator generated init containers if they share the same name and modifications are done via a strategic merge patch. 
The current init container name is: init-config-reloader . Overriding init containers is entirely outside the scope of what the maintainers will support and by doing so, you accept that this behaviour may break at any time without notice. initContainers[] object A single application container that you want to run within a pod. listenLocal boolean ListenLocal makes the Alertmanager server listen on loopback, so that it does not bind against the Pod IP. Note this is only for the Alertmanager UI, not the gossip communication. logFormat string Log format for Alertmanager to be configured with. logLevel string Log level for Alertmanager to be configured with. minReadySeconds integer Minimum number of seconds for which a newly created pod should be ready without any of its container crashing for it to be considered available. Defaults to 0 (pod will be considered available as soon as it is ready) This is an alpha field from kubernetes 1.22 until 1.24 which requires enabling the StatefulSetMinReadySeconds feature gate. nodeSelector object (string) Define which Nodes the Pods are scheduled on. paused boolean If set to true all actions on the underlying managed objects are not goint to be performed, except for delete actions. podMetadata object PodMetadata configures labels and annotations which are propagated to the Alertmanager pods. The following items are reserved and cannot be overridden: * "alertmanager" label, set to the name of the Alertmanager instance. * "app.kubernetes.io/instance" label, set to the name of the Alertmanager instance. * "app.kubernetes.io/managed-by" label, set to "prometheus-operator". * "app.kubernetes.io/name" label, set to "alertmanager". * "app.kubernetes.io/version" label, set to the Alertmanager version. * "kubectl.kubernetes.io/default-container" annotation, set to "alertmanager". portName string Port name used for the pods and governing service. Defaults to web . priorityClassName string Priority class assigned to the Pods replicas integer Size is the expected size of the alertmanager cluster. The controller will eventually make the size of the running cluster equal to the expected size. resources object Define resources requests and limits for single Pods. retention string Time duration Alertmanager shall retain data for. Default is '120h', and must match the regular expression [0-9]+(ms|s|m|h) (milliseconds seconds minutes hours). routePrefix string The route prefix Alertmanager registers HTTP handlers for. This is useful, if using ExternalURL and a proxy is rewriting HTTP routes of a request, and the actual ExternalURL is still true, but the server serves requests under a different route prefix. For example for use with kubectl proxy . secrets array (string) Secrets is a list of Secrets in the same namespace as the Alertmanager object, which shall be mounted into the Alertmanager Pods. Each Secret is added to the StatefulSet definition as a volume named secret-<secret-name> . The Secrets are mounted into /etc/alertmanager/secrets/<secret-name> in the 'alertmanager' container. securityContext object SecurityContext holds pod-level security attributes and common container settings. This defaults to the default PodSecurityContext. serviceAccountName string ServiceAccountName is the name of the ServiceAccount to use to run the Prometheus Pods. sha string SHA of Alertmanager container image to be deployed. Defaults to the value of version . Similar to a tag, but the SHA explicitly deploys an immutable container image. Version and Tag are ignored if SHA is set. 
Deprecated: use 'image' instead. The image digest can be specified as part of the image URL. storage object Storage is the definition of how storage will be used by the Alertmanager instances. tag string Tag of Alertmanager container image to be deployed. Defaults to the value of version . Version is ignored if Tag is set. Deprecated: use 'image' instead. The image tag can be specified as part of the image URL. tolerations array If specified, the pod's tolerations. tolerations[] object The pod this Toleration is attached to tolerates any taint that matches the triple <key,value,effect> using the matching operator <operator>. topologySpreadConstraints array If specified, the pod's topology spread constraints. topologySpreadConstraints[] object TopologySpreadConstraint specifies how to spread matching pods among the given topology. version string Version the cluster should be on. volumeMounts array VolumeMounts allows configuration of additional VolumeMounts on the output StatefulSet definition. VolumeMounts specified will be appended to other VolumeMounts in the alertmanager container, that are generated as a result of StorageSpec objects. volumeMounts[] object VolumeMount describes a mounting of a Volume within a container. volumes array Volumes allows configuration of additional volumes on the output StatefulSet definition. Volumes specified will be appended to other volumes that are generated as a result of StorageSpec objects. volumes[] object Volume represents a named volume in a pod that may be accessed by any container in the pod. web object Defines the web command line flags when starting Alertmanager. 2.1.2. .spec.affinity Description If specified, the pod's scheduling constraints. Type object Property Type Description nodeAffinity object Describes node affinity scheduling rules for the pod. podAffinity object Describes pod affinity scheduling rules (e.g. co-locate this pod in the same node, zone, etc. as some other pod(s)). podAntiAffinity object Describes pod anti-affinity scheduling rules (e.g. avoid putting this pod in the same node, zone, etc. as some other pod(s)). 2.1.3. .spec.affinity.nodeAffinity Description Describes node affinity scheduling rules for the pod. Type object Property Type Description preferredDuringSchedulingIgnoredDuringExecution array The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node matches the corresponding matchExpressions; the node(s) with the highest sum are the most preferred. preferredDuringSchedulingIgnoredDuringExecution[] object An empty preferred scheduling term matches all objects with implicit weight 0 (i.e. it's a no-op). A null preferred scheduling term matches no objects (i.e. is also a no-op). requiredDuringSchedulingIgnoredDuringExecution object If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to an update), the system may or may not try to eventually evict the pod from its node. 2.1.4. 
.spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution Description The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node matches the corresponding matchExpressions; the node(s) with the highest sum are the most preferred. Type array 2.1.5. .spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[] Description An empty preferred scheduling term matches all objects with implicit weight 0 (i.e. it's a no-op). A null preferred scheduling term matches no objects (i.e. is also a no-op). Type object Required preference weight Property Type Description preference object A node selector term, associated with the corresponding weight. weight integer Weight associated with matching the corresponding nodeSelectorTerm, in the range 1-100. 2.1.6. .spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[].preference Description A node selector term, associated with the corresponding weight. Type object Property Type Description matchExpressions array A list of node selector requirements by node's labels. matchExpressions[] object A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchFields array A list of node selector requirements by node's fields. matchFields[] object A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. 2.1.7. .spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[].preference.matchExpressions Description A list of node selector requirements by node's labels. Type array 2.1.8. .spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[].preference.matchExpressions[] Description A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string The label key that the selector applies to. operator string Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt. values array (string) An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch. 2.1.9. .spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[].preference.matchFields Description A list of node selector requirements by node's fields. Type array 2.1.10. .spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[].preference.matchFields[] Description A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string The label key that the selector applies to. operator string Represents a key's relationship to a set of values. 
Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt. values array (string) An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch. 2.1.11. .spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution Description If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to an update), the system may or may not try to eventually evict the pod from its node. Type object Required nodeSelectorTerms Property Type Description nodeSelectorTerms array Required. A list of node selector terms. The terms are ORed. nodeSelectorTerms[] object A null or empty node selector term matches no objects. The requirements of them are ANDed. The TopologySelectorTerm type implements a subset of the NodeSelectorTerm. 2.1.12. .spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms Description Required. A list of node selector terms. The terms are ORed. Type array 2.1.13. .spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[] Description A null or empty node selector term matches no objects. The requirements of them are ANDed. The TopologySelectorTerm type implements a subset of the NodeSelectorTerm. Type object Property Type Description matchExpressions array A list of node selector requirements by node's labels. matchExpressions[] object A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchFields array A list of node selector requirements by node's fields. matchFields[] object A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. 2.1.14. .spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[].matchExpressions Description A list of node selector requirements by node's labels. Type array 2.1.15. .spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[].matchExpressions[] Description A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string The label key that the selector applies to. operator string Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt. values array (string) An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch. 2.1.16. .spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[].matchFields Description A list of node selector requirements by node's fields. Type array 2.1.17. 
.spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[].matchFields[] Description A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string The label key that the selector applies to. operator string Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt. values array (string) An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch. 2.1.18. .spec.affinity.podAffinity Description Describes pod affinity scheduling rules (e.g. co-locate this pod in the same node, zone, etc. as some other pod(s)). Type object Property Type Description preferredDuringSchedulingIgnoredDuringExecution array The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. preferredDuringSchedulingIgnoredDuringExecution[] object The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s) requiredDuringSchedulingIgnoredDuringExecution array If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. requiredDuringSchedulingIgnoredDuringExecution[] object Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running 2.1.19. .spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution Description The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. 
for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. Type array 2.1.20. .spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[] Description The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s) Type object Required podAffinityTerm weight Property Type Description podAffinityTerm object Required. A pod affinity term, associated with the corresponding weight. weight integer weight associated with matching the corresponding podAffinityTerm, in the range 1-100. 2.1.21. .spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm Description Required. A pod affinity term, associated with the corresponding weight. Type object Required topologyKey Property Type Description labelSelector object A label query over a set of resources, in this case pods. namespaceSelector object A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. namespaces array (string) namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace". topologyKey string This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed. 2.1.22. .spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.labelSelector Description A label query over a set of resources, in this case pods. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 2.1.23. .spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.labelSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 2.1.24. .spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.labelSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. 
Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 2.1.25. .spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.namespaceSelector Description A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 2.1.26. .spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.namespaceSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 2.1.27. .spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.namespaceSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 2.1.28. .spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution Description If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. Type array 2.1.29. 
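A minimal sketch of how the podAffinity fields described above can be combined under the Alertmanager spec; the label keys, values, and topology keys are illustrative, and the fields of each required term are detailed in the entries that follow:

spec:
  affinity:
    podAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 50                            # must be in the range 1-100
        podAffinityTerm:
          labelSelector:
            matchLabels:
              app.kubernetes.io/name: prometheus
          topologyKey: topology.kubernetes.io/zone
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: app.kubernetes.io/component
            operator: In
            values:
            - monitoring
        topologyKey: kubernetes.io/hostname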
.spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[] Description Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running Type object Required topologyKey Property Type Description labelSelector object A label query over a set of resources, in this case pods. namespaceSelector object A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. namespaces array (string) namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace". topologyKey string This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed. 2.1.30. .spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[].labelSelector Description A label query over a set of resources, in this case pods. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 2.1.31. .spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[].labelSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 2.1.32. .spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[].labelSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 2.1.33. .spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[].namespaceSelector Description A label query over the set of namespaces that the term applies to. 
The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 2.1.34. .spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[].namespaceSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 2.1.35. .spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[].namespaceSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 2.1.36. .spec.affinity.podAntiAffinity Description Describes pod anti-affinity scheduling rules (e.g. avoid putting this pod in the same node, zone, etc. as some other pod(s)). Type object Property Type Description preferredDuringSchedulingIgnoredDuringExecution array The scheduler will prefer to schedule pods to nodes that satisfy the anti-affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling anti-affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. preferredDuringSchedulingIgnoredDuringExecution[] object The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s) requiredDuringSchedulingIgnoredDuringExecution array If the anti-affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the anti-affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. 
requiredDuringSchedulingIgnoredDuringExecution[] object Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running 2.1.37. .spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution Description The scheduler will prefer to schedule pods to nodes that satisfy the anti-affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling anti-affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. Type array 2.1.38. .spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[] Description The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s) Type object Required podAffinityTerm weight Property Type Description podAffinityTerm object Required. A pod affinity term, associated with the corresponding weight. weight integer weight associated with matching the corresponding podAffinityTerm, in the range 1-100. 2.1.39. .spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm Description Required. A pod affinity term, associated with the corresponding weight. Type object Required topologyKey Property Type Description labelSelector object A label query over a set of resources, in this case pods. namespaceSelector object A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. namespaces array (string) namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace". topologyKey string This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed. 2.1.40. .spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.labelSelector Description A label query over a set of resources, in this case pods. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. 
A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 2.1.41. .spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.labelSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 2.1.42. .spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.labelSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 2.1.43. .spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.namespaceSelector Description A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 2.1.44. .spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.namespaceSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 2.1.45. .spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.namespaceSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 2.1.46. .spec.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution Description If the anti-affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. 
If the anti-affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. Type array 2.1.47. .spec.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[] Description Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running Type object Required topologyKey Property Type Description labelSelector object A label query over a set of resources, in this case pods. namespaceSelector object A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. namespaces array (string) namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace". topologyKey string This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed. 2.1.48. .spec.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[].labelSelector Description A label query over a set of resources, in this case pods. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 2.1.49. .spec.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[].labelSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 2.1.50. .spec.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[].labelSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. 
If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 2.1.51. .spec.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[].namespaceSelector Description A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 2.1.52. .spec.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[].namespaceSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 2.1.53. .spec.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[].namespaceSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 2.1.54. .spec.alertmanagerConfigMatcherStrategy Description The AlertmanagerConfigMatcherStrategy defines how AlertmanagerConfig objects match the alerts. In the future more options may be added. Type object Property Type Description type string If set to OnNamespace , the operator injects a label matcher matching the namespace of the AlertmanagerConfig object for all its routes and inhibition rules. None will not add any additional matchers other than the ones specified in the AlertmanagerConfig. Default is OnNamespace . 2.1.55. .spec.alertmanagerConfigNamespaceSelector Description Namespaces to be selected for AlertmanagerConfig discovery. If nil, only check own namespace. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 2.1.56. 
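The AlertmanagerConfig discovery fields described above might be combined as follows; the label keys and values are placeholders for whatever convention a cluster uses to mark namespaces and AlertmanagerConfig objects for discovery:

spec:
  alertmanagerConfigMatcherStrategy:
    type: OnNamespace                 # the default; None injects no additional namespace matcher
  alertmanagerConfigNamespaceSelector:
    matchLabels:
      monitoring: enabled             # hypothetical namespace label; if nil, only the own namespace is checked
  alertmanagerConfigSelector:
    matchExpressions:
    - key: alertmanagerConfig
      operator: In
      values:
      - enabled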
.spec.alertmanagerConfigNamespaceSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 2.1.57. .spec.alertmanagerConfigNamespaceSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 2.1.58. .spec.alertmanagerConfigSelector Description AlertmanagerConfigs to be selected and merged into the Alertmanager configuration. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 2.1.59. .spec.alertmanagerConfigSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 2.1.60. .spec.alertmanagerConfigSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 2.1.61. .spec.alertmanagerConfiguration Description EXPERIMENTAL: alertmanagerConfiguration specifies the configuration of Alertmanager. If defined, it takes precedence over the configSecret field. This field may change in future releases. Type object Property Type Description global object Defines the global parameters of the Alertmanager configuration. name string The name of the AlertmanagerConfig resource which is used to generate the Alertmanager configuration. It must be defined in the same namespace as the Alertmanager object. The operator will not enforce a namespace label for routes and inhibition rules. templates array Custom notification templates. templates[] object SecretOrConfigMap allows to specify data as a Secret or ConfigMap. Fields are mutually exclusive. 2.1.62. .spec.alertmanagerConfiguration.global Description Defines the global parameters of the Alertmanager configuration. Type object Property Type Description httpConfig object HTTP client configuration. opsGenieApiKey object The default OpsGenie API Key.
opsGenieApiUrl object The default OpsGenie API URL. pagerdutyUrl string The default Pagerduty URL. resolveTimeout string ResolveTimeout is the default value used by alertmanager if the alert does not include EndsAt, after this time passes it can declare the alert as resolved if it has not been updated. This has no impact on alerts from Prometheus, as they always include EndsAt. slackApiUrl object The default Slack API URL. smtp object Configures global SMTP parameters. 2.1.63. .spec.alertmanagerConfiguration.global.httpConfig Description HTTP client configuration. Type object Property Type Description authorization object Authorization header configuration for the client. This is mutually exclusive with BasicAuth and is only available starting from Alertmanager v0.22+. basicAuth object BasicAuth for the client. This is mutually exclusive with Authorization. If both are defined, BasicAuth takes precedence. bearerTokenSecret object The secret's key that contains the bearer token to be used by the client for authentication. The secret needs to be in the same namespace as the Alertmanager object and accessible by the Prometheus Operator. followRedirects boolean FollowRedirects specifies whether the client should follow HTTP 3xx redirects. oauth2 object OAuth2 client credentials used to fetch a token for the targets. proxyURL string Optional proxy URL. tlsConfig object TLS configuration for the client. 2.1.64. .spec.alertmanagerConfiguration.global.httpConfig.authorization Description Authorization header configuration for the client. This is mutually exclusive with BasicAuth and is only available starting from Alertmanager v0.22+. Type object Property Type Description credentials object Selects a key of a Secret in the namespace that contains the credentials for authentication. type string Defines the authentication type. The value is case-insensitive. "Basic" is not a supported value. Default: "Bearer" 2.1.65. .spec.alertmanagerConfiguration.global.httpConfig.authorization.credentials Description Selects a key of a Secret in the namespace that contains the credentials for authentication. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 2.1.66. .spec.alertmanagerConfiguration.global.httpConfig.basicAuth Description BasicAuth for the client. This is mutually exclusive with Authorization. If both are defined, BasicAuth takes precedence. Type object Property Type Description password object password specifies a key of a Secret containing the password for authentication. username object username specifies a key of a Secret containing the username for authentication. 2.1.67. .spec.alertmanagerConfiguration.global.httpConfig.basicAuth.password Description password specifies a key of a Secret containing the password for authentication. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 2.1.68. 
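To make the preceding httpConfig fields concrete, the sketch below points the global HTTP client at a bearer token stored in a Secret. The AlertmanagerConfig name example-routing and the Secret name example-credentials are hypothetical:

spec:
  alertmanagerConfiguration:
    name: example-routing             # AlertmanagerConfig in the same namespace as the Alertmanager object
    global:
      resolveTimeout: 5m
      httpConfig:
        followRedirects: true
        authorization:
          type: Bearer                # the default; "Basic" is not supported here
          credentials:
            name: example-credentials
            key: token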
.spec.alertmanagerConfiguration.global.httpConfig.basicAuth.username Description username specifies a key of a Secret containing the username for authentication. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 2.1.69. .spec.alertmanagerConfiguration.global.httpConfig.bearerTokenSecret Description The secret's key that contains the bearer token to be used by the client for authentication. The secret needs to be in the same namespace as the Alertmanager object and accessible by the Prometheus Operator. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 2.1.70. .spec.alertmanagerConfiguration.global.httpConfig.oauth2 Description OAuth2 client credentials used to fetch a token for the targets. Type object Required clientId clientSecret tokenUrl Property Type Description clientId object clientId specifies a key of a Secret or ConfigMap containing the OAuth2 client's ID. clientSecret object clientSecret specifies a key of a Secret containing the OAuth2 client's secret. endpointParams object (string) endpointParams configures the HTTP parameters to append to the token URL. scopes array (string) scopes defines the OAuth2 scopes used for the token request. tokenUrl string tokenURL configures the URL to fetch the token from. 2.1.71. .spec.alertmanagerConfiguration.global.httpConfig.oauth2.clientId Description clientId specifies a key of a Secret or ConfigMap containing the OAuth2 client's ID. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 2.1.72. .spec.alertmanagerConfiguration.global.httpConfig.oauth2.clientId.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap or its key must be defined 2.1.73. .spec.alertmanagerConfiguration.global.httpConfig.oauth2.clientId.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 2.1.74. .spec.alertmanagerConfiguration.global.httpConfig.oauth2.clientSecret Description clientSecret specifies a key of a Secret containing the OAuth2 client's secret. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. 
More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 2.1.75. .spec.alertmanagerConfiguration.global.httpConfig.tlsConfig Description TLS configuration for the client. Type object Property Type Description ca object Certificate authority used when verifying server certificates. cert object Client certificate to present when doing client-authentication. insecureSkipVerify boolean Disable target certificate validation. keySecret object Secret containing the client key file for the targets. serverName string Used to verify the hostname for the targets. 2.1.76. .spec.alertmanagerConfiguration.global.httpConfig.tlsConfig.ca Description Certificate authority used when verifying server certificates. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 2.1.77. .spec.alertmanagerConfiguration.global.httpConfig.tlsConfig.ca.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap or its key must be defined 2.1.78. .spec.alertmanagerConfiguration.global.httpConfig.tlsConfig.ca.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 2.1.79. .spec.alertmanagerConfiguration.global.httpConfig.tlsConfig.cert Description Client certificate to present when doing client-authentication. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 2.1.80. .spec.alertmanagerConfiguration.global.httpConfig.tlsConfig.cert.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap or its key must be defined 2.1.81. .spec.alertmanagerConfiguration.global.httpConfig.tlsConfig.cert.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 2.1.82. .spec.alertmanagerConfiguration.global.httpConfig.tlsConfig.keySecret Description Secret containing the client key file for the targets. 
Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 2.1.83. .spec.alertmanagerConfiguration.global.opsGenieApiKey Description The default OpsGenie API Key. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 2.1.84. .spec.alertmanagerConfiguration.global.opsGenieApiUrl Description The default OpsGenie API URL. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 2.1.85. .spec.alertmanagerConfiguration.global.slackApiUrl Description The default Slack API URL. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 2.1.86. .spec.alertmanagerConfiguration.global.smtp Description Configures global SMTP parameters. Type object Property Type Description authIdentity string SMTP Auth using PLAIN authPassword object SMTP Auth using LOGIN and PLAIN. authSecret object SMTP Auth using CRAM-MD5. authUsername string SMTP Auth using CRAM-MD5, LOGIN and PLAIN. If empty, Alertmanager doesn't authenticate to the SMTP server. from string The default SMTP From header field. hello string The default hostname to identify to the SMTP server. requireTLS boolean The default SMTP TLS requirement. Note that Go does not support unencrypted connections to remote SMTP endpoints. smartHost object The default SMTP smarthost used for sending emails. 2.1.87. .spec.alertmanagerConfiguration.global.smtp.authPassword Description SMTP Auth using LOGIN and PLAIN. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 2.1.88. .spec.alertmanagerConfiguration.global.smtp.authSecret Description SMTP Auth using CRAM-MD5. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 2.1.89. 
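Continuing the same hypothetical alertmanagerConfiguration, the global receiver defaults above could be wired to Secrets roughly as follows. Secret names and the mail host are placeholders, and the smartHost host and port fields are described in the next entry:

spec:
  alertmanagerConfiguration:
    name: example-routing
    global:
      slackApiUrl:
        name: example-slack           # Secret holding the Slack webhook URL
        key: url
      opsGenieApiKey:
        name: example-opsgenie
        key: apiKey
      smtp:
        from: alertmanager@example.com
        hello: alertmanager.example.com
        requireTLS: true
        authUsername: alerts
        authPassword:
          name: example-smtp
          key: password
        smartHost:
          host: smtp.example.com
          port: "587"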
.spec.alertmanagerConfiguration.global.smtp.smartHost Description The default SMTP smarthost used for sending emails. Type object Required host port Property Type Description host string Defines the host's address, it can be a DNS name or a literal IP address. port string Defines the host's port, it can be a literal port number or a port name. 2.1.90. .spec.alertmanagerConfiguration.templates Description Custom notification templates. Type array 2.1.91. .spec.alertmanagerConfiguration.templates[] Description SecretOrConfigMap allows to specify data as a Secret or ConfigMap. Fields are mutually exclusive. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 2.1.92. .spec.alertmanagerConfiguration.templates[].configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap or its key must be defined 2.1.93. .spec.alertmanagerConfiguration.templates[].secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 2.1.94. .spec.containers Description Containers allows injecting additional containers. This is meant to allow adding an authentication proxy to an Alertmanager pod. Containers described here modify an operator generated container if they share the same name and modifications are done via a strategic merge patch. The current container names are: alertmanager and config-reloader . Overriding containers is entirely outside the scope of what the maintainers will support and by doing so, you accept that this behaviour may break at any time without notice. Type array 2.1.95. .spec.containers[] Description A single application container that you want to run within a pod. Type object Required name Property Type Description args array (string) Arguments to the entrypoint. The container image's CMD is used if this is not provided. Variable references USD(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double are reduced to a single USD, which allows for escaping the USD(VAR_NAME) syntax: i.e. "(VAR_NAME)" will produce the string literal "USD(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell command array (string) Entrypoint array. Not executed within a shell. The container image's ENTRYPOINT is used if this is not provided. Variable references USD(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double are reduced to a single USD, which allows for escaping the USD(VAR_NAME) syntax: i.e. 
"(VAR_NAME)" will produce the string literal "USD(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell env array List of environment variables to set in the container. Cannot be updated. env[] object EnvVar represents an environment variable present in a Container. envFrom array List of sources to populate environment variables in the container. The keys defined within a source must be a C_IDENTIFIER. All invalid keys will be reported as an event when the container is starting. When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by an Env with a duplicate key will take precedence. Cannot be updated. envFrom[] object EnvFromSource represents the source of a set of ConfigMaps image string Container image name. More info: https://kubernetes.io/docs/concepts/containers/images This field is optional to allow higher level config management to default or override container images in workload controllers like Deployments and StatefulSets. imagePullPolicy string Image pull policy. One of Always, Never, IfNotPresent. Defaults to Always if :latest tag is specified, or IfNotPresent otherwise. Cannot be updated. More info: https://kubernetes.io/docs/concepts/containers/images#updating-images lifecycle object Actions that the management system should take in response to container lifecycle events. Cannot be updated. livenessProbe object Periodic probe of container liveness. Container will be restarted if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes name string Name of the container specified as a DNS_LABEL. Each container in a pod must have a unique name (DNS_LABEL). Cannot be updated. ports array List of ports to expose from the container. Not specifying a port here DOES NOT prevent that port from being exposed. Any port which is listening on the default "0.0.0.0" address inside a container will be accessible from the network. Modifying this array with strategic merge patch may corrupt the data. For more information See https://github.com/kubernetes/kubernetes/issues/108255 . Cannot be updated. ports[] object ContainerPort represents a network port in a single container. readinessProbe object Periodic probe of container service readiness. Container will be removed from service endpoints if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes resizePolicy array Resources resize policy for the container. resizePolicy[] object ContainerResizePolicy represents resource resize policy for the container. resources object Compute Resources required by this container. Cannot be updated. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ restartPolicy string RestartPolicy defines the restart behavior of individual containers in a pod. This field may only be set for init containers, and the only allowed value is "Always". For non-init containers or when this field is not specified, the restart behavior is defined by the Pod's restart policy and the container type. 
Setting the RestartPolicy as "Always" for the init container will have the following effect: this init container will be continually restarted on exit until all regular containers have terminated. Once all regular containers have completed, all init containers with restartPolicy "Always" will be shut down. This lifecycle differs from normal init containers and is often referred to as a "sidecar" container. Although this init container still starts in the init container sequence, it does not wait for the container to complete before proceeding to the init container. Instead, the init container starts immediately after this init container is started, or after any startupProbe has successfully completed. securityContext object SecurityContext defines the security options the container should be run with. If set, the fields of SecurityContext override the equivalent fields of PodSecurityContext. More info: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/ startupProbe object StartupProbe indicates that the Pod has successfully initialized. If specified, no other probes are executed until this completes successfully. If this probe fails, the Pod will be restarted, just as if the livenessProbe failed. This can be used to provide different probe parameters at the beginning of a Pod's lifecycle, when it might take a long time to load data or warm a cache, than during steady-state operation. This cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes stdin boolean Whether this container should allocate a buffer for stdin in the container runtime. If this is not set, reads from stdin in the container will always result in EOF. Default is false. stdinOnce boolean Whether the container runtime should close the stdin channel after it has been opened by a single attach. When stdin is true the stdin stream will remain open across multiple attach sessions. If stdinOnce is set to true, stdin is opened on container start, is empty until the first client attaches to stdin, and then remains open and accepts data until the client disconnects, at which time stdin is closed and remains closed until the container is restarted. If this flag is false, a container processes that reads from stdin will never receive an EOF. Default is false terminationMessagePath string Optional: Path at which the file to which the container's termination message will be written is mounted into the container's filesystem. Message written is intended to be brief final status, such as an assertion failure message. Will be truncated by the node if greater than 4096 bytes. The total message length across all containers will be limited to 12kb. Defaults to /dev/termination-log. Cannot be updated. terminationMessagePolicy string Indicate how the termination message should be populated. File will use the contents of terminationMessagePath to populate the container status message on both success and failure. FallbackToLogsOnError will use the last chunk of container log output if the termination message file is empty and the container exited with an error. The log output is limited to 2048 bytes or 80 lines, whichever is smaller. Defaults to File. Cannot be updated. tty boolean Whether this container should allocate a TTY for itself, also requires 'stdin' to be true. Default is false. volumeDevices array volumeDevices is the list of block devices to be used by the container. 
volumeDevices[] object volumeDevice describes a mapping of a raw block device within a container. volumeMounts array Pod volumes to mount into the container's filesystem. Cannot be updated. volumeMounts[] object VolumeMount describes a mounting of a Volume within a container. workingDir string Container's working directory. If not specified, the container runtime's default will be used, which might be configured in the container image. Cannot be updated. 2.1.96. .spec.containers[].env Description List of environment variables to set in the container. Cannot be updated. Type array 2.1.97. .spec.containers[].env[] Description EnvVar represents an environment variable present in a Container. Type object Required name Property Type Description name string Name of the environment variable. Must be a C_IDENTIFIER. value string Variable references USD(VAR_NAME) are expanded using the previously defined environment variables in the container and any service environment variables. If a variable cannot be resolved, the reference in the input string will be unchanged. Double are reduced to a single USD, which allows for escaping the USD(VAR_NAME) syntax: i.e. "(VAR_NAME)" will produce the string literal "USD(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Defaults to "". valueFrom object Source for the environment variable's value. Cannot be used if value is not empty. 2.1.98. .spec.containers[].env[].valueFrom Description Source for the environment variable's value. Cannot be used if value is not empty. Type object Property Type Description configMapKeyRef object Selects a key of a ConfigMap. fieldRef object Selects a field of the pod: supports metadata.name, metadata.namespace, metadata.labels['<KEY>'] , metadata.annotations['<KEY>'] , spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podIPs. resourceFieldRef object Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, limits.ephemeral-storage, requests.cpu, requests.memory and requests.ephemeral-storage) are currently supported. secretKeyRef object Selects a key of a secret in the pod's namespace 2.1.99. .spec.containers[].env[].valueFrom.configMapKeyRef Description Selects a key of a ConfigMap. Type object Required key Property Type Description key string The key to select. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap or its key must be defined 2.1.100. .spec.containers[].env[].valueFrom.fieldRef Description Selects a field of the pod: supports metadata.name, metadata.namespace, metadata.labels['<KEY>'] , metadata.annotations['<KEY>'] , spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podIPs. Type object Required fieldPath Property Type Description apiVersion string Version of the schema the FieldPath is written in terms of, defaults to "v1". fieldPath string Path of the field to select in the specified API version. 2.1.101. .spec.containers[].env[].valueFrom.resourceFieldRef Description Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, limits.ephemeral-storage, requests.cpu, requests.memory and requests.ephemeral-storage) are currently supported. 
Type object Required resource Property Type Description containerName string Container name: required for volumes, optional for env vars divisor integer-or-string Specifies the output format of the exposed resources, defaults to "1" resource string Required: resource to select 2.1.102. .spec.containers[].env[].valueFrom.secretKeyRef Description Selects a key of a secret in the pod's namespace Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 2.1.103. .spec.containers[].envFrom Description List of sources to populate environment variables in the container. The keys defined within a source must be a C_IDENTIFIER. All invalid keys will be reported as an event when the container is starting. When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by an Env with a duplicate key will take precedence. Cannot be updated. Type array 2.1.104. .spec.containers[].envFrom[] Description EnvFromSource represents the source of a set of ConfigMaps Type object Property Type Description configMapRef object The ConfigMap to select from prefix string An optional identifier to prepend to each key in the ConfigMap. Must be a C_IDENTIFIER. secretRef object The Secret to select from 2.1.105. .spec.containers[].envFrom[].configMapRef Description The ConfigMap to select from Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap must be defined 2.1.106. .spec.containers[].envFrom[].secretRef Description The Secret to select from Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret must be defined 2.1.107. .spec.containers[].lifecycle Description Actions that the management system should take in response to container lifecycle events. Cannot be updated. Type object Property Type Description postStart object PostStart is called immediately after a container is created. If the handler fails, the container is terminated and restarted according to its restart policy. Other management of the container blocks until the hook completes. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks preStop object PreStop is called immediately before a container is terminated due to an API request or management event such as liveness/startup probe failure, preemption, resource contention, etc. The handler is not called if the container crashes or exits. The Pod's termination grace period countdown begins before the PreStop hook is executed. Regardless of the outcome of the handler, the container will eventually terminate within the Pod's termination grace period (unless delayed by finalizers). Other management of the container blocks until the hook completes or until the termination grace period is reached. 
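For illustration, the following fragment is a minimal sketch of the env, valueFrom, and envFrom fields described in sections 2.1.96 to 2.1.106 above. The container name, image, ConfigMap name, and Secret name are hypothetical and are not part of this API reference.

```yaml
spec:
  containers:
  - name: config-sidecar                  # hypothetical container name
    image: registry.example.com/sidecar:latest
    env:
    # Literal value
    - name: LOG_LEVEL
      value: "debug"
    # Value taken from a key in a ConfigMap
    - name: CLUSTER_NAME
      valueFrom:
        configMapKeyRef:
          name: cluster-config            # hypothetical ConfigMap
          key: cluster.name
          optional: true
    # Value taken from a field of the Pod
    - name: POD_NAMESPACE
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace
    # Value taken from the container's own compute resources
    - name: MEM_LIMIT
      valueFrom:
        resourceFieldRef:
          containerName: config-sidecar
          resource: limits.memory
          divisor: 1Mi
    # Value taken from a key in a Secret
    - name: API_TOKEN
      valueFrom:
        secretKeyRef:
          name: sidecar-credentials       # hypothetical Secret
          key: token
    envFrom:
    # Import every key of a ConfigMap, prefixed with SIDECAR_
    - prefix: SIDECAR_
      configMapRef:
        name: sidecar-env                 # hypothetical ConfigMap
```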
More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks 2.1.108. .spec.containers[].lifecycle.postStart Description PostStart is called immediately after a container is created. If the handler fails, the container is terminated and restarted according to its restart policy. Other management of the container blocks until the hook completes. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks Type object Property Type Description exec object Exec specifies the action to take. httpGet object HTTPGet specifies the http request to perform. tcpSocket object Deprecated. TCPSocket is NOT supported as a LifecycleHandler and kept for the backward compatibility. There are no validation of this field and lifecycle hooks will fail in runtime when tcp handler is specified. 2.1.109. .spec.containers[].lifecycle.postStart.exec Description Exec specifies the action to take. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 2.1.110. .spec.containers[].lifecycle.postStart.httpGet Description HTTPGet specifies the http request to perform. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port integer-or-string Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. 2.1.111. .spec.containers[].lifecycle.postStart.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 2.1.112. .spec.containers[].lifecycle.postStart.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 2.1.113. .spec.containers[].lifecycle.postStart.tcpSocket Description Deprecated. TCPSocket is NOT supported as a LifecycleHandler and kept for the backward compatibility. There are no validation of this field and lifecycle hooks will fail in runtime when tcp handler is specified. Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port integer-or-string Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 2.1.114. .spec.containers[].lifecycle.preStop Description PreStop is called immediately before a container is terminated due to an API request or management event such as liveness/startup probe failure, preemption, resource contention, etc. The handler is not called if the container crashes or exits. 
The Pod's termination grace period countdown begins before the PreStop hook is executed. Regardless of the outcome of the handler, the container will eventually terminate within the Pod's termination grace period (unless delayed by finalizers). Other management of the container blocks until the hook completes or until the termination grace period is reached. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks Type object Property Type Description exec object Exec specifies the action to take. httpGet object HTTPGet specifies the http request to perform. tcpSocket object Deprecated. TCPSocket is NOT supported as a LifecycleHandler and kept for the backward compatibility. There are no validation of this field and lifecycle hooks will fail in runtime when tcp handler is specified. 2.1.115. .spec.containers[].lifecycle.preStop.exec Description Exec specifies the action to take. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 2.1.116. .spec.containers[].lifecycle.preStop.httpGet Description HTTPGet specifies the http request to perform. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port integer-or-string Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. 2.1.117. .spec.containers[].lifecycle.preStop.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 2.1.118. .spec.containers[].lifecycle.preStop.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 2.1.119. .spec.containers[].lifecycle.preStop.tcpSocket Description Deprecated. TCPSocket is NOT supported as a LifecycleHandler and kept for the backward compatibility. There are no validation of this field and lifecycle hooks will fail in runtime when tcp handler is specified. Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port integer-or-string Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 2.1.120. .spec.containers[].livenessProbe Description Periodic probe of container liveness. Container will be restarted if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes Type object Property Type Description exec object Exec specifies the action to take. 
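As a minimal sketch of the lifecycle, postStart, and preStop handlers described in sections 2.1.107 to 2.1.119 above, the fragment below runs a command after the container is created and calls an HTTP endpoint before it is terminated. The command, path, and port are assumptions; tcpSocket is omitted because it is deprecated as a lifecycle handler.

```yaml
spec:
  containers:
  - name: config-sidecar                  # hypothetical container name
    image: registry.example.com/sidecar:latest
    lifecycle:
      postStart:
        # Run a command inside the container right after it is created
        exec:
          command: ["/bin/sh", "-c", "echo started > /tmp/started"]
      preStop:
        # Call an HTTP endpoint before the container is terminated
        httpGet:
          path: /shutdown                 # hypothetical endpoint
          port: 8080
          scheme: HTTP
          httpHeaders:
          - name: X-Shutdown-Reason
            value: pod-termination
```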
failureThreshold integer Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. grpc object GRPC specifies an action involving a GRPC port. httpGet object HTTPGet specifies the http request to perform. initialDelaySeconds integer Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes periodSeconds integer How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1. successThreshold integer Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. tcpSocket object TCPSocket specifies an action involving a TCP port. terminationGracePeriodSeconds integer Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. timeoutSeconds integer Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes 2.1.121. .spec.containers[].livenessProbe.exec Description Exec specifies the action to take. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 2.1.122. .spec.containers[].livenessProbe.grpc Description GRPC specifies an action involving a GRPC port. Type object Required port Property Type Description port integer Port number of the gRPC service. Number must be in the range 1 to 65535. service string Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md ). If this is not specified, the default behavior is defined by gRPC. 2.1.123. .spec.containers[].livenessProbe.httpGet Description HTTPGet specifies the http request to perform. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port integer-or-string Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 
scheme string Scheme to use for connecting to the host. Defaults to HTTP. 2.1.124. .spec.containers[].livenessProbe.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 2.1.125. .spec.containers[].livenessProbe.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 2.1.126. .spec.containers[].livenessProbe.tcpSocket Description TCPSocket specifies an action involving a TCP port. Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port integer-or-string Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 2.1.127. .spec.containers[].ports Description List of ports to expose from the container. Not specifying a port here DOES NOT prevent that port from being exposed. Any port which is listening on the default "0.0.0.0" address inside a container will be accessible from the network. Modifying this array with strategic merge patch may corrupt the data. For more information See https://github.com/kubernetes/kubernetes/issues/108255 . Cannot be updated. Type array 2.1.128. .spec.containers[].ports[] Description ContainerPort represents a network port in a single container. Type object Required containerPort Property Type Description containerPort integer Number of port to expose on the pod's IP address. This must be a valid port number, 0 < x < 65536. hostIP string What host IP to bind the external port to. hostPort integer Number of port to expose on the host. If specified, this must be a valid port number, 0 < x < 65536. If HostNetwork is specified, this must match ContainerPort. Most containers do not need this. name string If specified, this must be an IANA_SVC_NAME and unique within the pod. Each named port in a pod must have a unique name. Name for the port that can be referred to by services. protocol string Protocol for port. Must be UDP, TCP, or SCTP. Defaults to "TCP". 2.1.129. .spec.containers[].readinessProbe Description Periodic probe of container service readiness. Container will be removed from service endpoints if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes Type object Property Type Description exec object Exec specifies the action to take. failureThreshold integer Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. grpc object GRPC specifies an action involving a GRPC port. httpGet object HTTPGet specifies the http request to perform. initialDelaySeconds integer Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes periodSeconds integer How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1. successThreshold integer Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. tcpSocket object TCPSocket specifies an action involving a TCP port. 
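The following fragment is a minimal sketch that combines the livenessProbe (2.1.120 to 2.1.126) and ports (2.1.127 to 2.1.128) fields described above. The port number, health endpoint, and threshold values are illustrative assumptions, not recommended defaults.

```yaml
spec:
  containers:
  - name: config-sidecar                  # hypothetical container name
    image: registry.example.com/sidecar:latest
    ports:
    - name: http                          # must be an IANA_SVC_NAME, unique within the pod
      containerPort: 8080
      protocol: TCP
    livenessProbe:
      httpGet:
        path: /healthz                    # hypothetical health endpoint
        port: http                        # refers to the named port above
        scheme: HTTP
      initialDelaySeconds: 10
      periodSeconds: 10
      timeoutSeconds: 1
      failureThreshold: 3
      successThreshold: 1                 # must be 1 for liveness probes
```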
terminationGracePeriodSeconds integer Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. timeoutSeconds integer Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes 2.1.130. .spec.containers[].readinessProbe.exec Description Exec specifies the action to take. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 2.1.131. .spec.containers[].readinessProbe.grpc Description GRPC specifies an action involving a GRPC port. Type object Required port Property Type Description port integer Port number of the gRPC service. Number must be in the range 1 to 65535. service string Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md ). If this is not specified, the default behavior is defined by gRPC. 2.1.132. .spec.containers[].readinessProbe.httpGet Description HTTPGet specifies the http request to perform. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port integer-or-string Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. 2.1.133. .spec.containers[].readinessProbe.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 2.1.134. .spec.containers[].readinessProbe.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 2.1.135. .spec.containers[].readinessProbe.tcpSocket Description TCPSocket specifies an action involving a TCP port. Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. 
port integer-or-string Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 2.1.136. .spec.containers[].resizePolicy Description Resources resize policy for the container. Type array 2.1.137. .spec.containers[].resizePolicy[] Description ContainerResizePolicy represents resource resize policy for the container. Type object Required resourceName restartPolicy Property Type Description resourceName string Name of the resource to which this resource resize policy applies. Supported values: cpu, memory. restartPolicy string Restart policy to apply when specified resource is resized. If not specified, it defaults to NotRequired. 2.1.138. .spec.containers[].resources Description Compute Resources required by this container. Cannot be updated. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ Type object Property Type Description claims array Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. claims[] object ResourceClaim references one entry in PodSpec.ResourceClaims. limits integer-or-string Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ requests integer-or-string Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. Requests cannot exceed Limits. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ 2.1.139. .spec.containers[].resources.claims Description Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. Type array 2.1.140. .spec.containers[].resources.claims[] Description ResourceClaim references one entry in PodSpec.ResourceClaims. Type object Required name Property Type Description name string Name must match the name of one entry in pod.spec.resourceClaims of the Pod where this field is used. It makes that resource available inside a container. 2.1.141. .spec.containers[].securityContext Description SecurityContext defines the security options the container should be run with. If set, the fields of SecurityContext override the equivalent fields of PodSecurityContext. More info: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/ Type object Property Type Description allowPrivilegeEscalation boolean AllowPrivilegeEscalation controls whether a process can gain more privileges than its parent process. This bool directly controls if the no_new_privs flag will be set on the container process. AllowPrivilegeEscalation is true always when the container is: 1) run as Privileged 2) has CAP_SYS_ADMIN Note that this field cannot be set when spec.os.name is windows. capabilities object The capabilities to add/drop when running containers. Defaults to the default set of capabilities granted by the container runtime. Note that this field cannot be set when spec.os.name is windows. privileged boolean Run container in privileged mode. 
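As a sketch of the resources (2.1.138 to 2.1.140), resizePolicy (2.1.136 to 2.1.137), and readinessProbe gRPC fields described above: the values below are assumptions, the gRPC service name is hypothetical, and both in-place resizing and the RestartContainer resize policy depend on the corresponding upstream Kubernetes feature being enabled in your cluster.

```yaml
spec:
  containers:
  - name: config-sidecar                  # hypothetical container name
    image: registry.example.com/sidecar:latest
    readinessProbe:
      # gRPC health check; the service name is an assumption
      grpc:
        port: 9090
        service: sidecar.v1.Health
      periodSeconds: 5
      failureThreshold: 3
    resources:
      requests:
        cpu: 100m
        memory: 128Mi
      limits:
        cpu: 500m
        memory: 256Mi
    resizePolicy:
    # Resize CPU in place; restart the container when memory is resized
    - resourceName: cpu
      restartPolicy: NotRequired
    - resourceName: memory
      restartPolicy: RestartContainer
```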
Processes in privileged containers are essentially equivalent to root on the host. Defaults to false. Note that this field cannot be set when spec.os.name is windows. procMount string procMount denotes the type of proc mount to use for the containers. The default is DefaultProcMount which uses the container runtime defaults for readonly paths and masked paths. This requires the ProcMountType feature flag to be enabled. Note that this field cannot be set when spec.os.name is windows. readOnlyRootFilesystem boolean Whether this container has a read-only root filesystem. Default is false. Note that this field cannot be set when spec.os.name is windows. runAsGroup integer The GID to run the entrypoint of the container process. Uses runtime default if unset. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. runAsNonRoot boolean Indicates that the container must run as a non-root user. If true, the Kubelet will validate the image at runtime to ensure that it does not run as UID 0 (root) and fail to start the container if it does. If unset or false, no such validation will be performed. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. runAsUser integer The UID to run the entrypoint of the container process. Defaults to user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. seLinuxOptions object The SELinux context to be applied to the container. If unspecified, the container runtime will allocate a random SELinux context for each container. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. seccompProfile object The seccomp options to use by this container. If seccomp options are provided at both the pod & container level, the container options override the pod options. Note that this field cannot be set when spec.os.name is windows. windowsOptions object The Windows specific settings applied to all containers. If unspecified, the options from the PodSecurityContext will be used. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is linux. 2.1.142. .spec.containers[].securityContext.capabilities Description The capabilities to add/drop when running containers. Defaults to the default set of capabilities granted by the container runtime. Note that this field cannot be set when spec.os.name is windows. Type object Property Type Description add array (string) Added capabilities drop array (string) Removed capabilities 2.1.143. .spec.containers[].securityContext.seLinuxOptions Description The SELinux context to be applied to the container. If unspecified, the container runtime will allocate a random SELinux context for each container. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. 
Type object Property Type Description level string Level is SELinux level label that applies to the container. role string Role is a SELinux role label that applies to the container. type string Type is a SELinux type label that applies to the container. user string User is a SELinux user label that applies to the container. 2.1.144. .spec.containers[].securityContext.seccompProfile Description The seccomp options to use by this container. If seccomp options are provided at both the pod & container level, the container options override the pod options. Note that this field cannot be set when spec.os.name is windows. Type object Required type Property Type Description localhostProfile string localhostProfile indicates a profile defined in a file on the node should be used. The profile must be preconfigured on the node to work. Must be a descending path, relative to the kubelet's configured seccomp profile location. Must be set if type is "Localhost". Must NOT be set for any other type. type string type indicates which kind of seccomp profile will be applied. Valid options are: Localhost - a profile defined in a file on the node should be used. RuntimeDefault - the container runtime default profile should be used. Unconfined - no profile should be applied. 2.1.145. .spec.containers[].securityContext.windowsOptions Description The Windows specific settings applied to all containers. If unspecified, the options from the PodSecurityContext will be used. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is linux. Type object Property Type Description gmsaCredentialSpec string GMSACredentialSpec is where the GMSA admission webhook ( https://github.com/kubernetes-sigs/windows-gmsa ) inlines the contents of the GMSA credential spec named by the GMSACredentialSpecName field. gmsaCredentialSpecName string GMSACredentialSpecName is the name of the GMSA credential spec to use. hostProcess boolean HostProcess determines if a container should be run as a 'Host Process' container. All of a Pod's containers must have the same effective HostProcess value (it is not allowed to have a mix of HostProcess containers and non-HostProcess containers). In addition, if HostProcess is true then HostNetwork must also be set to true. runAsUserName string The UserName in Windows to run the entrypoint of the container process. Defaults to the user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. 2.1.146. .spec.containers[].startupProbe Description StartupProbe indicates that the Pod has successfully initialized. If specified, no other probes are executed until this completes successfully. If this probe fails, the Pod will be restarted, just as if the livenessProbe failed. This can be used to provide different probe parameters at the beginning of a Pod's lifecycle, when it might take a long time to load data or warm a cache, than during steady-state operation. This cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes Type object Property Type Description exec object Exec specifies the action to take. failureThreshold integer Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. grpc object GRPC specifies an action involving a GRPC port. 
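A minimal sketch of a restrictive container-level securityContext using the capabilities and seccompProfile fields described in sections 2.1.141 to 2.1.144 above. Whether these settings are appropriate depends on the workload; they are shown only to illustrate the field structure.

```yaml
spec:
  containers:
  - name: config-sidecar                  # hypothetical container name
    image: registry.example.com/sidecar:latest
    securityContext:
      runAsNonRoot: true
      allowPrivilegeEscalation: false
      readOnlyRootFilesystem: true
      capabilities:
        drop:
        - ALL                             # drop all Linux capabilities
      seccompProfile:
        type: RuntimeDefault              # use the container runtime's default profile
```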
httpGet object HTTPGet specifies the http request to perform. initialDelaySeconds integer Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes periodSeconds integer How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1. successThreshold integer Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. tcpSocket object TCPSocket specifies an action involving a TCP port. terminationGracePeriodSeconds integer Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. timeoutSeconds integer Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes 2.1.147. .spec.containers[].startupProbe.exec Description Exec specifies the action to take. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 2.1.148. .spec.containers[].startupProbe.grpc Description GRPC specifies an action involving a GRPC port. Type object Required port Property Type Description port integer Port number of the gRPC service. Number must be in the range 1 to 65535. service string Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md ). If this is not specified, the default behavior is defined by gRPC. 2.1.149. .spec.containers[].startupProbe.httpGet Description HTTPGet specifies the http request to perform. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port integer-or-string Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. 2.1.150. .spec.containers[].startupProbe.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. 
Type array 2.1.151. .spec.containers[].startupProbe.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 2.1.152. .spec.containers[].startupProbe.tcpSocket Description TCPSocket specifies an action involving a TCP port. Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port integer-or-string Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 2.1.153. .spec.containers[].volumeDevices Description volumeDevices is the list of block devices to be used by the container. Type array 2.1.154. .spec.containers[].volumeDevices[] Description volumeDevice describes a mapping of a raw block device within a container. Type object Required devicePath name Property Type Description devicePath string devicePath is the path inside of the container that the device will be mapped to. name string name must match the name of a persistentVolumeClaim in the pod 2.1.155. .spec.containers[].volumeMounts Description Pod volumes to mount into the container's filesystem. Cannot be updated. Type array 2.1.156. .spec.containers[].volumeMounts[] Description VolumeMount describes a mounting of a Volume within a container. Type object Required mountPath name Property Type Description mountPath string Path within the container at which the volume should be mounted. Must not contain ':'. mountPropagation string mountPropagation determines how mounts are propagated from the host to container and the other way around. When not set, MountPropagationNone is used. This field is beta in 1.10. name string This must match the Name of a Volume. readOnly boolean Mounted read-only if true, read-write otherwise (false or unspecified). Defaults to false. subPath string Path within the volume from which the container's volume should be mounted. Defaults to "" (volume's root). subPathExpr string Expanded path within the volume from which the container's volume should be mounted. Behaves similarly to SubPath but environment variable references USD(VAR_NAME) are expanded using the container's environment. Defaults to "" (volume's root). SubPathExpr and SubPath are mutually exclusive. 2.1.157. .spec.hostAliases Description Pods' hostAliases configuration Type array 2.1.158. .spec.hostAliases[] Description HostAlias holds the mapping between IP and hostnames that will be injected as an entry in the pod's hosts file. Type object Required hostnames ip Property Type Description hostnames array (string) Hostnames for the above IP address. ip string IP address of the host file entry. 2.1.159. .spec.imagePullSecrets Description An optional list of references to secrets in the same namespace to use for pulling prometheus and alertmanager images from registries see http://kubernetes.io/docs/user-guide/images#specifying-imagepullsecrets-on-a-pod Type array 2.1.160. .spec.imagePullSecrets[] Description LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace. Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? 2.1.161. 
.spec.initContainers Description InitContainers allows adding initContainers to the pod definition. These can be used, for example, to fetch secrets for injection into the Alertmanager configuration from external sources. Any errors during the execution of an initContainer will lead to a restart of the Pod. More info: https://kubernetes.io/docs/concepts/workloads/pods/init-containers/ InitContainers described here modify the operator-generated init containers if they share the same name, and modifications are done via a strategic merge patch. The current init container name is: init-config-reloader . Overriding init containers is entirely outside the scope of what the maintainers will support, and by doing so, you accept that this behaviour may break at any time without notice. Type array 2.1.162. .spec.initContainers[] Description A single application container that you want to run within a pod. Type object Required name Property Type Description args array (string) Arguments to the entrypoint. The container image's CMD is used if this is not provided. Variable references $(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. "$$(VAR_NAME)" will produce the string literal "$(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell command array (string) Entrypoint array. Not executed within a shell. The container image's ENTRYPOINT is used if this is not provided. Variable references $(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. "$$(VAR_NAME)" will produce the string literal "$(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell env array List of environment variables to set in the container. Cannot be updated. env[] object EnvVar represents an environment variable present in a Container. envFrom array List of sources to populate environment variables in the container. The keys defined within a source must be a C_IDENTIFIER. All invalid keys will be reported as an event when the container is starting. When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by an Env with a duplicate key will take precedence. Cannot be updated. envFrom[] object EnvFromSource represents the source of a set of ConfigMaps image string Container image name. More info: https://kubernetes.io/docs/concepts/containers/images This field is optional to allow higher level config management to default or override container images in workload controllers like Deployments and StatefulSets. imagePullPolicy string Image pull policy. One of Always, Never, IfNotPresent. Defaults to Always if :latest tag is specified, or IfNotPresent otherwise. Cannot be updated.
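Because entries in initContainers are merged by name with the operator-generated init container through a strategic merge patch, reusing the documented name init-config-reloader overrides that container. The following fragment is a minimal sketch of such an override, assuming this spec belongs to the Alertmanager custom resource; as stated above, overriding init containers is unsupported and may break without notice.

```yaml
apiVersion: monitoring.coreos.com/v1
kind: Alertmanager                        # assumption: the resource this spec belongs to
metadata:
  name: example
spec:
  initContainers:
  # Reusing the documented name merges these fields into the
  # operator-generated init container via a strategic merge patch.
  - name: init-config-reloader
    resources:
      requests:
        cpu: 10m
        memory: 32Mi
```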
More info: https://kubernetes.io/docs/concepts/containers/images#updating-images lifecycle object Actions that the management system should take in response to container lifecycle events. Cannot be updated. livenessProbe object Periodic probe of container liveness. Container will be restarted if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes name string Name of the container specified as a DNS_LABEL. Each container in a pod must have a unique name (DNS_LABEL). Cannot be updated. ports array List of ports to expose from the container. Not specifying a port here DOES NOT prevent that port from being exposed. Any port which is listening on the default "0.0.0.0" address inside a container will be accessible from the network. Modifying this array with strategic merge patch may corrupt the data. For more information, see https://github.com/kubernetes/kubernetes/issues/108255 . Cannot be updated. ports[] object ContainerPort represents a network port in a single container. readinessProbe object Periodic probe of container service readiness. Container will be removed from service endpoints if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes resizePolicy array Resources resize policy for the container. resizePolicy[] object ContainerResizePolicy represents resource resize policy for the container. resources object Compute Resources required by this container. Cannot be updated. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ restartPolicy string RestartPolicy defines the restart behavior of individual containers in a pod. This field may only be set for init containers, and the only allowed value is "Always". For non-init containers or when this field is not specified, the restart behavior is defined by the Pod's restart policy and the container type. Setting the RestartPolicy as "Always" for the init container will have the following effect: this init container will be continually restarted on exit until all regular containers have terminated. Once all regular containers have completed, all init containers with restartPolicy "Always" will be shut down. This lifecycle differs from normal init containers and is often referred to as a "sidecar" container. Although this init container still starts in the init container sequence, it does not wait for the container to complete before proceeding to the next init container. Instead, the next init container starts immediately after this init container is started, or after any startupProbe has successfully completed. securityContext object SecurityContext defines the security options the container should be run with. If set, the fields of SecurityContext override the equivalent fields of PodSecurityContext. More info: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/ startupProbe object StartupProbe indicates that the Pod has successfully initialized. If specified, no other probes are executed until this completes successfully. If this probe fails, the Pod will be restarted, just as if the livenessProbe failed. This can be used to provide different probe parameters at the beginning of a Pod's lifecycle, when it might take a long time to load data or warm a cache, than during steady-state operation. This cannot be updated.
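The restartPolicy behavior described above enables sidecar-style init containers on Kubernetes versions where the native sidecar feature is available. The following fragment is a minimal sketch of one, assuming a hypothetical log-forwarder image and a volume named logs defined elsewhere in the spec; subsequent init containers and the regular containers start once this container has started or its startupProbe has succeeded.

```yaml
spec:
  initContainers:
  - name: log-forwarder                   # hypothetical sidecar init container
    image: registry.example.com/log-forwarder:latest
    restartPolicy: Always                 # only valid for init containers; keeps it running as a sidecar
    startupProbe:
      httpGet:
        path: /ready                      # hypothetical readiness endpoint
        port: 8081
      periodSeconds: 2
      failureThreshold: 30
    volumeMounts:
    - name: logs                          # must match a volume name defined in the pod spec
      mountPath: /var/log/app
      readOnly: true
```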
More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes stdin boolean Whether this container should allocate a buffer for stdin in the container runtime. If this is not set, reads from stdin in the container will always result in EOF. Default is false. stdinOnce boolean Whether the container runtime should close the stdin channel after it has been opened by a single attach. When stdin is true, the stdin stream will remain open across multiple attach sessions. If stdinOnce is set to true, stdin is opened on container start, is empty until the first client attaches to stdin, and then remains open and accepts data until the client disconnects, at which time stdin is closed and remains closed until the container is restarted. If this flag is false, a container process that reads from stdin will never receive an EOF. Default is false. terminationMessagePath string Optional: Path at which the file to which the container's termination message will be written is mounted into the container's filesystem. The message written is intended to be a brief final status, such as an assertion failure message. It will be truncated by the node if greater than 4096 bytes. The total message length across all containers will be limited to 12kb. Defaults to /dev/termination-log. Cannot be updated. terminationMessagePolicy string Indicates how the termination message should be populated. File will use the contents of terminationMessagePath to populate the container status message on both success and failure. FallbackToLogsOnError will use the last chunk of container log output if the termination message file is empty and the container exited with an error. The log output is limited to 2048 bytes or 80 lines, whichever is smaller. Defaults to File. Cannot be updated. tty boolean Whether this container should allocate a TTY for itself; also requires 'stdin' to be true. Default is false. volumeDevices array volumeDevices is the list of block devices to be used by the container. volumeDevices[] object volumeDevice describes a mapping of a raw block device within a container. volumeMounts array Pod volumes to mount into the container's filesystem. Cannot be updated. volumeMounts[] object VolumeMount describes a mounting of a Volume within a container. workingDir string Container's working directory. If not specified, the container runtime's default will be used, which might be configured in the container image. Cannot be updated. 2.1.163. .spec.initContainers[].env Description List of environment variables to set in the container. Cannot be updated. Type array 2.1.164. .spec.initContainers[].env[] Description EnvVar represents an environment variable present in a Container. Type object Required name Property Type Description name string Name of the environment variable. Must be a C_IDENTIFIER. value string Variable references $(VAR_NAME) are expanded using the previously defined environment variables in the container and any service environment variables. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. "$$(VAR_NAME)" will produce the string literal "$(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Defaults to "". valueFrom object Source for the environment variable's value. Cannot be used if value is not empty. 2.1.165.
.spec.initContainers[].env[].valueFrom Description Source for the environment variable's value. Cannot be used if value is not empty. Type object Property Type Description configMapKeyRef object Selects a key of a ConfigMap. fieldRef object Selects a field of the pod: supports metadata.name, metadata.namespace, metadata.labels['<KEY>'] , metadata.annotations['<KEY>'] , spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podIPs. resourceFieldRef object Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, limits.ephemeral-storage, requests.cpu, requests.memory and requests.ephemeral-storage) are currently supported. secretKeyRef object Selects a key of a secret in the pod's namespace 2.1.166. .spec.initContainers[].env[].valueFrom.configMapKeyRef Description Selects a key of a ConfigMap. Type object Required key Property Type Description key string The key to select. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap or its key must be defined 2.1.167. .spec.initContainers[].env[].valueFrom.fieldRef Description Selects a field of the pod: supports metadata.name, metadata.namespace, metadata.labels['<KEY>'] , metadata.annotations['<KEY>'] , spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podIPs. Type object Required fieldPath Property Type Description apiVersion string Version of the schema the FieldPath is written in terms of, defaults to "v1". fieldPath string Path of the field to select in the specified API version. 2.1.168. .spec.initContainers[].env[].valueFrom.resourceFieldRef Description Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, limits.ephemeral-storage, requests.cpu, requests.memory and requests.ephemeral-storage) are currently supported. Type object Required resource Property Type Description containerName string Container name: required for volumes, optional for env vars divisor integer-or-string Specifies the output format of the exposed resources, defaults to "1" resource string Required: resource to select 2.1.169. .spec.initContainers[].env[].valueFrom.secretKeyRef Description Selects a key of a secret in the pod's namespace Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 2.1.170. .spec.initContainers[].envFrom Description List of sources to populate environment variables in the container. The keys defined within a source must be a C_IDENTIFIER. All invalid keys will be reported as an event when the container is starting. When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by an Env with a duplicate key will take precedence. Cannot be updated. Type array 2.1.171. .spec.initContainers[].envFrom[] Description EnvFromSource represents the source of a set of ConfigMaps Type object Property Type Description configMapRef object The ConfigMap to select from prefix string An optional identifier to prepend to each key in the ConfigMap. Must be a C_IDENTIFIER. 
secretRef object The Secret to select from 2.1.172. .spec.initContainers[].envFrom[].configMapRef Description The ConfigMap to select from Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap must be defined 2.1.173. .spec.initContainers[].envFrom[].secretRef Description The Secret to select from Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret must be defined 2.1.174. .spec.initContainers[].lifecycle Description Actions that the management system should take in response to container lifecycle events. Cannot be updated. Type object Property Type Description postStart object PostStart is called immediately after a container is created. If the handler fails, the container is terminated and restarted according to its restart policy. Other management of the container blocks until the hook completes. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks preStop object PreStop is called immediately before a container is terminated due to an API request or management event such as liveness/startup probe failure, preemption, resource contention, etc. The handler is not called if the container crashes or exits. The Pod's termination grace period countdown begins before the PreStop hook is executed. Regardless of the outcome of the handler, the container will eventually terminate within the Pod's termination grace period (unless delayed by finalizers). Other management of the container blocks until the hook completes or until the termination grace period is reached. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks 2.1.175. .spec.initContainers[].lifecycle.postStart Description PostStart is called immediately after a container is created. If the handler fails, the container is terminated and restarted according to its restart policy. Other management of the container blocks until the hook completes. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks Type object Property Type Description exec object Exec specifies the action to take. httpGet object HTTPGet specifies the http request to perform. tcpSocket object Deprecated. TCPSocket is NOT supported as a LifecycleHandler and kept for the backward compatibility. There are no validation of this field and lifecycle hooks will fail in runtime when tcp handler is specified. 2.1.176. .spec.initContainers[].lifecycle.postStart.exec Description Exec specifies the action to take. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 2.1.177. .spec.initContainers[].lifecycle.postStart.httpGet Description HTTPGet specifies the http request to perform. 
Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port integer-or-string Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. 2.1.178. .spec.initContainers[].lifecycle.postStart.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 2.1.179. .spec.initContainers[].lifecycle.postStart.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 2.1.180. .spec.initContainers[].lifecycle.postStart.tcpSocket Description Deprecated. TCPSocket is NOT supported as a LifecycleHandler and kept for the backward compatibility. There are no validation of this field and lifecycle hooks will fail in runtime when tcp handler is specified. Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port integer-or-string Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 2.1.181. .spec.initContainers[].lifecycle.preStop Description PreStop is called immediately before a container is terminated due to an API request or management event such as liveness/startup probe failure, preemption, resource contention, etc. The handler is not called if the container crashes or exits. The Pod's termination grace period countdown begins before the PreStop hook is executed. Regardless of the outcome of the handler, the container will eventually terminate within the Pod's termination grace period (unless delayed by finalizers). Other management of the container blocks until the hook completes or until the termination grace period is reached. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks Type object Property Type Description exec object Exec specifies the action to take. httpGet object HTTPGet specifies the http request to perform. tcpSocket object Deprecated. TCPSocket is NOT supported as a LifecycleHandler and kept for the backward compatibility. There are no validation of this field and lifecycle hooks will fail in runtime when tcp handler is specified. 2.1.182. .spec.initContainers[].lifecycle.preStop.exec Description Exec specifies the action to take. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 2.1.183. .spec.initContainers[].lifecycle.preStop.httpGet Description HTTPGet specifies the http request to perform. 
Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port integer-or-string Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. 2.1.184. .spec.initContainers[].lifecycle.preStop.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 2.1.185. .spec.initContainers[].lifecycle.preStop.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 2.1.186. .spec.initContainers[].lifecycle.preStop.tcpSocket Description Deprecated. TCPSocket is NOT supported as a LifecycleHandler and kept for the backward compatibility. There are no validation of this field and lifecycle hooks will fail in runtime when tcp handler is specified. Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port integer-or-string Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 2.1.187. .spec.initContainers[].livenessProbe Description Periodic probe of container liveness. Container will be restarted if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes Type object Property Type Description exec object Exec specifies the action to take. failureThreshold integer Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. grpc object GRPC specifies an action involving a GRPC port. httpGet object HTTPGet specifies the http request to perform. initialDelaySeconds integer Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes periodSeconds integer How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1. successThreshold integer Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. tcpSocket object TCPSocket specifies an action involving a TCP port. terminationGracePeriodSeconds integer Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). 
This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. timeoutSeconds integer Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes 2.1.188. .spec.initContainers[].livenessProbe.exec Description Exec specifies the action to take. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 2.1.189. .spec.initContainers[].livenessProbe.grpc Description GRPC specifies an action involving a GRPC port. Type object Required port Property Type Description port integer Port number of the gRPC service. Number must be in the range 1 to 65535. service string Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md ). If this is not specified, the default behavior is defined by gRPC. 2.1.190. .spec.initContainers[].livenessProbe.httpGet Description HTTPGet specifies the http request to perform. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port integer-or-string Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. 2.1.191. .spec.initContainers[].livenessProbe.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 2.1.192. .spec.initContainers[].livenessProbe.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 2.1.193. .spec.initContainers[].livenessProbe.tcpSocket Description TCPSocket specifies an action involving a TCP port. Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port integer-or-string Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 2.1.194. .spec.initContainers[].ports Description List of ports to expose from the container. Not specifying a port here DOES NOT prevent that port from being exposed. Any port which is listening on the default "0.0.0.0" address inside a container will be accessible from the network. Modifying this array with strategic merge patch may corrupt the data. For more information See https://github.com/kubernetes/kubernetes/issues/108255 . Cannot be updated. Type array 2.1.195. 
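The lifecycle handlers and probes above use the generic Kubernetes Container schema, so they are written exactly as for any pod container. Kubernetes rejects lifecycle hooks and probes on ordinary one-shot init containers (they are only honored for regular containers and restartable sidecar init containers), so the minimal sketch below attaches them to an extra container injected through the Alertmanager spec.containers field, which accepts the same Container fields as spec.initContainers. The container name, image, port, paths, and header are hypothetical placeholders.

```yaml
apiVersion: monitoring.coreos.com/v1
kind: Alertmanager
metadata:
  name: example
spec:
  replicas: 1
  containers:
    - name: lifecycle-demo                     # hypothetical helper container
      image: registry.example.com/helper:1.0   # hypothetical image
      lifecycle:
        postStart:
          exec:
            # the command is exec'd directly, not run in a shell
            command: ["/bin/touch", "/tmp/started"]
        preStop:
          httpGet:
            path: /shutdown
            port: 8080
            scheme: HTTP
            httpHeaders:
              - name: X-Shutdown-Reason
                value: preStop
      livenessProbe:
        httpGet:
          path: /healthz
          port: 8080
        initialDelaySeconds: 5
        periodSeconds: 10
        failureThreshold: 3
        timeoutSeconds: 1
```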
.spec.initContainers[].ports[] Description ContainerPort represents a network port in a single container. Type object Required containerPort Property Type Description containerPort integer Number of port to expose on the pod's IP address. This must be a valid port number, 0 < x < 65536. hostIP string What host IP to bind the external port to. hostPort integer Number of port to expose on the host. If specified, this must be a valid port number, 0 < x < 65536. If HostNetwork is specified, this must match ContainerPort. Most containers do not need this. name string If specified, this must be an IANA_SVC_NAME and unique within the pod. Each named port in a pod must have a unique name. Name for the port that can be referred to by services. protocol string Protocol for port. Must be UDP, TCP, or SCTP. Defaults to "TCP". 2.1.196. .spec.initContainers[].readinessProbe Description Periodic probe of container service readiness. Container will be removed from service endpoints if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes Type object Property Type Description exec object Exec specifies the action to take. failureThreshold integer Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. grpc object GRPC specifies an action involving a GRPC port. httpGet object HTTPGet specifies the http request to perform. initialDelaySeconds integer Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes periodSeconds integer How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1. successThreshold integer Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. tcpSocket object TCPSocket specifies an action involving a TCP port. terminationGracePeriodSeconds integer Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. timeoutSeconds integer Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes 2.1.197. .spec.initContainers[].readinessProbe.exec Description Exec specifies the action to take. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. 
Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 2.1.198. .spec.initContainers[].readinessProbe.grpc Description GRPC specifies an action involving a GRPC port. Type object Required port Property Type Description port integer Port number of the gRPC service. Number must be in the range 1 to 65535. service string Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md ). If this is not specified, the default behavior is defined by gRPC. 2.1.199. .spec.initContainers[].readinessProbe.httpGet Description HTTPGet specifies the http request to perform. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port integer-or-string Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. 2.1.200. .spec.initContainers[].readinessProbe.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 2.1.201. .spec.initContainers[].readinessProbe.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 2.1.202. .spec.initContainers[].readinessProbe.tcpSocket Description TCPSocket specifies an action involving a TCP port. Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port integer-or-string Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 2.1.203. .spec.initContainers[].resizePolicy Description Resources resize policy for the container. Type array 2.1.204. .spec.initContainers[].resizePolicy[] Description ContainerResizePolicy represents resource resize policy for the container. Type object Required resourceName restartPolicy Property Type Description resourceName string Name of the resource to which this resource resize policy applies. Supported values: cpu, memory. restartPolicy string Restart policy to apply when specified resource is resized. If not specified, it defaults to NotRequired. 2.1.205. .spec.initContainers[].resources Description Compute Resources required by this container. Cannot be updated. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ Type object Property Type Description claims array Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. claims[] object ResourceClaim references one entry in PodSpec.ResourceClaims. limits integer-or-string Limits describes the maximum amount of compute resources allowed. 
More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ requests integer-or-string Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. Requests cannot exceed Limits. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ 2.1.206. .spec.initContainers[].resources.claims Description Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. Type array 2.1.207. .spec.initContainers[].resources.claims[] Description ResourceClaim references one entry in PodSpec.ResourceClaims. Type object Required name Property Type Description name string Name must match the name of one entry in pod.spec.resourceClaims of the Pod where this field is used. It makes that resource available inside a container. 2.1.208. .spec.initContainers[].securityContext Description SecurityContext defines the security options the container should be run with. If set, the fields of SecurityContext override the equivalent fields of PodSecurityContext. More info: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/ Type object Property Type Description allowPrivilegeEscalation boolean AllowPrivilegeEscalation controls whether a process can gain more privileges than its parent process. This bool directly controls if the no_new_privs flag will be set on the container process. AllowPrivilegeEscalation is true always when the container is: 1) run as Privileged 2) has CAP_SYS_ADMIN Note that this field cannot be set when spec.os.name is windows. capabilities object The capabilities to add/drop when running containers. Defaults to the default set of capabilities granted by the container runtime. Note that this field cannot be set when spec.os.name is windows. privileged boolean Run container in privileged mode. Processes in privileged containers are essentially equivalent to root on the host. Defaults to false. Note that this field cannot be set when spec.os.name is windows. procMount string procMount denotes the type of proc mount to use for the containers. The default is DefaultProcMount which uses the container runtime defaults for readonly paths and masked paths. This requires the ProcMountType feature flag to be enabled. Note that this field cannot be set when spec.os.name is windows. readOnlyRootFilesystem boolean Whether this container has a read-only root filesystem. Default is false. Note that this field cannot be set when spec.os.name is windows. runAsGroup integer The GID to run the entrypoint of the container process. Uses runtime default if unset. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. runAsNonRoot boolean Indicates that the container must run as a non-root user. If true, the Kubelet will validate the image at runtime to ensure that it does not run as UID 0 (root) and fail to start the container if it does. If unset or false, no such validation will be performed. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. 
runAsUser integer The UID to run the entrypoint of the container process. Defaults to user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. seLinuxOptions object The SELinux context to be applied to the container. If unspecified, the container runtime will allocate a random SELinux context for each container. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. seccompProfile object The seccomp options to use by this container. If seccomp options are provided at both the pod & container level, the container options override the pod options. Note that this field cannot be set when spec.os.name is windows. windowsOptions object The Windows specific settings applied to all containers. If unspecified, the options from the PodSecurityContext will be used. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is linux. 2.1.209. .spec.initContainers[].securityContext.capabilities Description The capabilities to add/drop when running containers. Defaults to the default set of capabilities granted by the container runtime. Note that this field cannot be set when spec.os.name is windows. Type object Property Type Description add array (string) Added capabilities drop array (string) Removed capabilities 2.1.210. .spec.initContainers[].securityContext.seLinuxOptions Description The SELinux context to be applied to the container. If unspecified, the container runtime will allocate a random SELinux context for each container. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. Type object Property Type Description level string Level is SELinux level label that applies to the container. role string Role is a SELinux role label that applies to the container. type string Type is a SELinux type label that applies to the container. user string User is a SELinux user label that applies to the container. 2.1.211. .spec.initContainers[].securityContext.seccompProfile Description The seccomp options to use by this container. If seccomp options are provided at both the pod & container level, the container options override the pod options. Note that this field cannot be set when spec.os.name is windows. Type object Required type Property Type Description localhostProfile string localhostProfile indicates a profile defined in a file on the node should be used. The profile must be preconfigured on the node to work. Must be a descending path, relative to the kubelet's configured seccomp profile location. Must be set if type is "Localhost". Must NOT be set for any other type. type string type indicates which kind of seccomp profile will be applied. Valid options are: Localhost - a profile defined in a file on the node should be used. RuntimeDefault - the container runtime default profile should be used. Unconfined - no profile should be applied. 2.1.212. .spec.initContainers[].securityContext.windowsOptions Description The Windows specific settings applied to all containers. 
If unspecified, the options from the PodSecurityContext will be used. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is linux. Type object Property Type Description gmsaCredentialSpec string GMSACredentialSpec is where the GMSA admission webhook ( https://github.com/kubernetes-sigs/windows-gmsa ) inlines the contents of the GMSA credential spec named by the GMSACredentialSpecName field. gmsaCredentialSpecName string GMSACredentialSpecName is the name of the GMSA credential spec to use. hostProcess boolean HostProcess determines if a container should be run as a 'Host Process' container. All of a Pod's containers must have the same effective HostProcess value (it is not allowed to have a mix of HostProcess containers and non-HostProcess containers). In addition, if HostProcess is true then HostNetwork must also be set to true. runAsUserName string The UserName in Windows to run the entrypoint of the container process. Defaults to the user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. 2.1.213. .spec.initContainers[].startupProbe Description StartupProbe indicates that the Pod has successfully initialized. If specified, no other probes are executed until this completes successfully. If this probe fails, the Pod will be restarted, just as if the livenessProbe failed. This can be used to provide different probe parameters at the beginning of a Pod's lifecycle, when it might take a long time to load data or warm a cache, than during steady-state operation. This cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes Type object Property Type Description exec object Exec specifies the action to take. failureThreshold integer Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. grpc object GRPC specifies an action involving a GRPC port. httpGet object HTTPGet specifies the http request to perform. initialDelaySeconds integer Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes periodSeconds integer How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1. successThreshold integer Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. tcpSocket object TCPSocket specifies an action involving a TCP port. terminationGracePeriodSeconds integer Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. 
spec.terminationGracePeriodSeconds is used if unset. timeoutSeconds integer Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes 2.1.214. .spec.initContainers[].startupProbe.exec Description Exec specifies the action to take. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 2.1.215. .spec.initContainers[].startupProbe.grpc Description GRPC specifies an action involving a GRPC port. Type object Required port Property Type Description port integer Port number of the gRPC service. Number must be in the range 1 to 65535. service string Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md ). If this is not specified, the default behavior is defined by gRPC. 2.1.216. .spec.initContainers[].startupProbe.httpGet Description HTTPGet specifies the http request to perform. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port integer-or-string Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. 2.1.217. .spec.initContainers[].startupProbe.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 2.1.218. .spec.initContainers[].startupProbe.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 2.1.219. .spec.initContainers[].startupProbe.tcpSocket Description TCPSocket specifies an action involving a TCP port. Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port integer-or-string Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 2.1.220. .spec.initContainers[].volumeDevices Description volumeDevices is the list of block devices to be used by the container. Type array 2.1.221. .spec.initContainers[].volumeDevices[] Description volumeDevice describes a mapping of a raw block device within a container. Type object Required devicePath name Property Type Description devicePath string devicePath is the path inside of the container that the device will be mapped to. name string name must match the name of a persistentVolumeClaim in the pod 2.1.222. 
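The resources and securityContext fields described above apply cleanly to one-shot init containers. The following is a minimal sketch, assuming a hypothetical prepare-data image and command; the request and limit values are illustrative only, and the security settings mirror a common non-root, no-privilege-escalation baseline. Probe fields (readinessProbe, startupProbe) follow the same shape as the livenessProbe example shown earlier and are omitted here because they are not accepted on one-shot init containers.

```yaml
apiVersion: monitoring.coreos.com/v1
kind: Alertmanager
metadata:
  name: example
spec:
  initContainers:
    - name: prepare-data                       # hypothetical one-shot init container
      image: registry.example.com/prepare:1.0  # hypothetical image
      command: ["/bin/prepare", "--target=/alertmanager"]
      resources:
        requests:
          cpu: 50m
          memory: 64Mi
        limits:
          cpu: 100m
          memory: 128Mi
      securityContext:
        allowPrivilegeEscalation: false
        readOnlyRootFilesystem: true
        runAsNonRoot: true
        runAsUser: 65534
        capabilities:
          drop: ["ALL"]
        seccompProfile:
          type: RuntimeDefault
```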
.spec.initContainers[].volumeMounts Description Pod volumes to mount into the container's filesystem. Cannot be updated. Type array 2.1.223. .spec.initContainers[].volumeMounts[] Description VolumeMount describes a mounting of a Volume within a container. Type object Required mountPath name Property Type Description mountPath string Path within the container at which the volume should be mounted. Must not contain ':'. mountPropagation string mountPropagation determines how mounts are propagated from the host to container and the other way around. When not set, MountPropagationNone is used. This field is beta in 1.10. name string This must match the Name of a Volume. readOnly boolean Mounted read-only if true, read-write otherwise (false or unspecified). Defaults to false. subPath string Path within the volume from which the container's volume should be mounted. Defaults to "" (volume's root). subPathExpr string Expanded path within the volume from which the container's volume should be mounted. Behaves similarly to SubPath but environment variable references USD(VAR_NAME) are expanded using the container's environment. Defaults to "" (volume's root). SubPathExpr and SubPath are mutually exclusive. 2.1.224. .spec.podMetadata Description PodMetadata configures labels and annotations which are propagated to the Alertmanager pods. The following items are reserved and cannot be overridden: * "alertmanager" label, set to the name of the Alertmanager instance. * "app.kubernetes.io/instance" label, set to the name of the Alertmanager instance. * "app.kubernetes.io/managed-by" label, set to "prometheus-operator". * "app.kubernetes.io/name" label, set to "alertmanager". * "app.kubernetes.io/version" label, set to the Alertmanager version. * "kubectl.kubernetes.io/default-container" annotation, set to "alertmanager". Type object Property Type Description annotations object (string) Annotations is an unstructured key value map stored with a resource that may be set by external tools to store and retrieve arbitrary metadata. They are not queryable and should be preserved when modifying objects. More info: http://kubernetes.io/docs/user-guide/annotations labels object (string) Map of string keys and values that can be used to organize and categorize (scope and select) objects. May match selectors of replication controllers and services. More info: http://kubernetes.io/docs/user-guide/labels name string Name must be unique within a namespace. Is required when creating resources, although some resources may allow a client to request the generation of an appropriate name automatically. Name is primarily intended for creation idempotence and configuration definition. Cannot be updated. More info: http://kubernetes.io/docs/user-guide/identifiers#names 2.1.225. .spec.resources Description Define resources requests and limits for single Pods. Type object Property Type Description claims array Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. claims[] object ResourceClaim references one entry in PodSpec.ResourceClaims. limits integer-or-string Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ requests integer-or-string Requests describes the minimum amount of compute resources required. 
If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. Requests cannot exceed Limits. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ 2.1.226. .spec.resources.claims Description Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. Type array 2.1.227. .spec.resources.claims[] Description ResourceClaim references one entry in PodSpec.ResourceClaims. Type object Required name Property Type Description name string Name must match the name of one entry in pod.spec.resourceClaims of the Pod where this field is used. It makes that resource available inside a container. 2.1.228. .spec.securityContext Description SecurityContext holds pod-level security attributes and common container settings. This defaults to the default PodSecurityContext. Type object Property Type Description fsGroup integer A special supplemental group that applies to all containers in a pod. Some volume types allow the Kubelet to change the ownership of that volume to be owned by the pod: 1. The owning GID will be the FSGroup 2. The setgid bit is set (new files created in the volume will be owned by FSGroup) 3. The permission bits are OR'd with rw-rw---- If unset, the Kubelet will not modify the ownership and permissions of any volume. Note that this field cannot be set when spec.os.name is windows. fsGroupChangePolicy string fsGroupChangePolicy defines behavior of changing ownership and permission of the volume before being exposed inside Pod. This field will only apply to volume types which support fsGroup based ownership(and permissions). It will have no effect on ephemeral volume types such as: secret, configmaps and emptydir. Valid values are "OnRootMismatch" and "Always". If not specified, "Always" is used. Note that this field cannot be set when spec.os.name is windows. runAsGroup integer The GID to run the entrypoint of the container process. Uses runtime default if unset. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence for that container. Note that this field cannot be set when spec.os.name is windows. runAsNonRoot boolean Indicates that the container must run as a non-root user. If true, the Kubelet will validate the image at runtime to ensure that it does not run as UID 0 (root) and fail to start the container if it does. If unset or false, no such validation will be performed. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. runAsUser integer The UID to run the entrypoint of the container process. Defaults to user specified in image metadata if unspecified. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence for that container. Note that this field cannot be set when spec.os.name is windows. seLinuxOptions object The SELinux context to be applied to all containers. If unspecified, the container runtime will allocate a random SELinux context for each container. May also be set in SecurityContext. 
If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence for that container. Note that this field cannot be set when spec.os.name is windows. seccompProfile object The seccomp options to use by the containers in this pod. Note that this field cannot be set when spec.os.name is windows. supplementalGroups array (integer) A list of groups applied to the first process run in each container, in addition to the container's primary GID, the fsGroup (if specified), and group memberships defined in the container image for the uid of the container process. If unspecified, no additional groups are added to any container. Note that group memberships defined in the container image for the uid of the container process are still effective, even if they are not included in this list. Note that this field cannot be set when spec.os.name is windows. sysctls array Sysctls hold a list of namespaced sysctls used for the pod. Pods with unsupported sysctls (by the container runtime) might fail to launch. Note that this field cannot be set when spec.os.name is windows. sysctls[] object Sysctl defines a kernel parameter to be set windowsOptions object The Windows specific settings applied to all containers. If unspecified, the options within a container's SecurityContext will be used. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is linux. 2.1.229. .spec.securityContext.seLinuxOptions Description The SELinux context to be applied to all containers. If unspecified, the container runtime will allocate a random SELinux context for each container. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence for that container. Note that this field cannot be set when spec.os.name is windows. Type object Property Type Description level string Level is SELinux level label that applies to the container. role string Role is a SELinux role label that applies to the container. type string Type is a SELinux type label that applies to the container. user string User is a SELinux user label that applies to the container. 2.1.230. .spec.securityContext.seccompProfile Description The seccomp options to use by the containers in this pod. Note that this field cannot be set when spec.os.name is windows. Type object Required type Property Type Description localhostProfile string localhostProfile indicates a profile defined in a file on the node should be used. The profile must be preconfigured on the node to work. Must be a descending path, relative to the kubelet's configured seccomp profile location. Must be set if type is "Localhost". Must NOT be set for any other type. type string type indicates which kind of seccomp profile will be applied. Valid options are: Localhost - a profile defined in a file on the node should be used. RuntimeDefault - the container runtime default profile should be used. Unconfined - no profile should be applied. 2.1.231. .spec.securityContext.sysctls Description Sysctls hold a list of namespaced sysctls used for the pod. Pods with unsupported sysctls (by the container runtime) might fail to launch. Note that this field cannot be set when spec.os.name is windows. Type array 2.1.232. 
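At the pod level, podMetadata, resources, and securityContext are set directly under the Alertmanager spec. A minimal sketch, assuming hypothetical label, annotation, and UID/GID values:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: Alertmanager
metadata:
  name: example
spec:
  podMetadata:
    labels:
      team: monitoring                # merged into pod labels; reserved labels cannot be overridden
    annotations:
      example.com/owner: "sre"        # hypothetical annotation key
  resources:
    requests:
      cpu: 100m
      memory: 256Mi
    limits:
      memory: 512Mi
  securityContext:
    runAsNonRoot: true
    runAsUser: 65534
    fsGroup: 65534
    seccompProfile:
      type: RuntimeDefault
```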
.spec.securityContext.sysctls[] Description Sysctl defines a kernel parameter to be set Type object Required name value Property Type Description name string Name of a property to set value string Value of a property to set 2.1.233. .spec.securityContext.windowsOptions Description The Windows specific settings applied to all containers. If unspecified, the options within a container's SecurityContext will be used. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is linux. Type object Property Type Description gmsaCredentialSpec string GMSACredentialSpec is where the GMSA admission webhook ( https://github.com/kubernetes-sigs/windows-gmsa ) inlines the contents of the GMSA credential spec named by the GMSACredentialSpecName field. gmsaCredentialSpecName string GMSACredentialSpecName is the name of the GMSA credential spec to use. hostProcess boolean HostProcess determines if a container should be run as a 'Host Process' container. All of a Pod's containers must have the same effective HostProcess value (it is not allowed to have a mix of HostProcess containers and non-HostProcess containers). In addition, if HostProcess is true then HostNetwork must also be set to true. runAsUserName string The UserName in Windows to run the entrypoint of the container process. Defaults to the user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. 2.1.234. .spec.storage Description Storage is the definition of how storage will be used by the Alertmanager instances. Type object Property Type Description disableMountSubPath boolean Deprecated: subPath usage will be removed in a future release. emptyDir object EmptyDirVolumeSource to be used by the StatefulSet. If specified, it takes precedence over ephemeral and volumeClaimTemplate . More info: https://kubernetes.io/docs/concepts/storage/volumes/#emptydir ephemeral object EphemeralVolumeSource to be used by the StatefulSet. This is a beta field in k8s 1.21 and GA in 1.23. For lower versions, starting with k8s 1.19, it requires enabling the GenericEphemeralVolume feature gate. More info: https://kubernetes.io/docs/concepts/storage/ephemeral-volumes/#generic-ephemeral-volumes volumeClaimTemplate object Defines the PVC spec to be used by the Prometheus StatefulSets. The easiest way to use a volume that cannot be automatically provisioned is to use a label selector alongside manually created PersistentVolumes. 2.1.235. .spec.storage.emptyDir Description EmptyDirVolumeSource to be used by the StatefulSet. If specified, it takes precedence over ephemeral and volumeClaimTemplate . More info: https://kubernetes.io/docs/concepts/storage/volumes/#emptydir Type object Property Type Description medium string medium represents what type of storage medium should back this directory. The default is "" which means to use the node's default medium. Must be an empty string (default) or Memory. More info: https://kubernetes.io/docs/concepts/storage/volumes#emptydir sizeLimit integer-or-string sizeLimit is the total amount of local storage required for this EmptyDir volume. The size limit is also applicable for memory medium. The maximum usage on memory medium EmptyDir would be the minimum value between the SizeLimit specified here and the sum of memory limits of all containers in a pod.
The default is nil which means that the limit is undefined. More info: https://kubernetes.io/docs/concepts/storage/volumes#emptydir 2.1.236. .spec.storage.ephemeral Description EphemeralVolumeSource to be used by the StatefulSet. This is a beta field in k8s 1.21 and GA in 1.23. For lower versions, starting with k8s 1.19, it requires enabling the GenericEphemeralVolume feature gate. More info: https://kubernetes.io/docs/concepts/storage/ephemeral-volumes/#generic-ephemeral-volumes Type object Property Type Description volumeClaimTemplate object Will be used to create a stand-alone PVC to provision the volume. The pod in which this EphemeralVolumeSource is embedded will be the owner of the PVC, i.e. the PVC will be deleted together with the pod. The name of the PVC will be <pod name>-<volume name> where <volume name> is the name from the PodSpec.Volumes array entry. Pod validation will reject the pod if the concatenated name is not valid for a PVC (for example, too long). An existing PVC with that name that is not owned by the pod will not be used for the pod to avoid using an unrelated volume by mistake. Starting the pod is then blocked until the unrelated PVC is removed. If such a pre-created PVC is meant to be used by the pod, the PVC has to be updated with an owner reference to the pod once the pod exists. Normally this should not be necessary, but it may be useful when manually reconstructing a broken cluster. This field is read-only and no changes will be made by Kubernetes to the PVC after it has been created. Required, must not be nil. 2.1.237. .spec.storage.ephemeral.volumeClaimTemplate Description Will be used to create a stand-alone PVC to provision the volume. The pod in which this EphemeralVolumeSource is embedded will be the owner of the PVC, i.e. the PVC will be deleted together with the pod. The name of the PVC will be <pod name>-<volume name> where <volume name> is the name from the PodSpec.Volumes array entry. Pod validation will reject the pod if the concatenated name is not valid for a PVC (for example, too long). An existing PVC with that name that is not owned by the pod will not be used for the pod to avoid using an unrelated volume by mistake. Starting the pod is then blocked until the unrelated PVC is removed. If such a pre-created PVC is meant to be used by the pod, the PVC has to be updated with an owner reference to the pod once the pod exists. Normally this should not be necessary, but it may be useful when manually reconstructing a broken cluster. This field is read-only and no changes will be made by Kubernetes to the PVC after it has been created. Required, must not be nil. Type object Required spec Property Type Description metadata object May contain labels and annotations that will be copied into the PVC when creating it. No other fields are allowed and will be rejected during validation. spec object The specification for the PersistentVolumeClaim. The entire content is copied unchanged into the PVC that gets created from this template. The same fields as in a PersistentVolumeClaim are also valid here. 2.1.238. .spec.storage.ephemeral.volumeClaimTemplate.metadata Description May contain labels and annotations that will be copied into the PVC when creating it. No other fields are allowed and will be rejected during validation. Type object 2.1.239. .spec.storage.ephemeral.volumeClaimTemplate.spec Description The specification for the PersistentVolumeClaim. The entire content is copied unchanged into the PVC that gets created from this template.
The same fields as in a PersistentVolumeClaim are also valid here. Type object Property Type Description accessModes array (string) accessModes contains the desired access modes the volume should have. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1 dataSource object dataSource field can be used to specify either: * An existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot) * An existing PVC (PersistentVolumeClaim) If the provisioner or an external controller can support the specified data source, it will create a new volume based on the contents of the specified data source. When the AnyVolumeDataSource feature gate is enabled, dataSource contents will be copied to dataSourceRef, and dataSourceRef contents will be copied to dataSource when dataSourceRef.namespace is not specified. If the namespace is specified, then dataSourceRef will not be copied to dataSource. dataSourceRef object dataSourceRef specifies the object from which to populate the volume with data, if a non-empty volume is desired. This may be any object from a non-empty API group (non core object) or a PersistentVolumeClaim object. When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. This field will replace the functionality of the dataSource field and as such if both fields are non-empty, they must have the same value. For backwards compatibility, when namespace isn't specified in dataSourceRef, both fields (dataSource and dataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. When namespace is specified in dataSourceRef, dataSource isn't set to the same value and must be empty. There are three important differences between dataSource and dataSourceRef: * While dataSource only allows two specific types of objects, dataSourceRef allows any non-core object, as well as PersistentVolumeClaim objects. * While dataSource ignores disallowed values (dropping them), dataSourceRef preserves all values, and generates an error if a disallowed value is specified. * While dataSource only allows local objects, dataSourceRef allows objects in any namespaces. (Beta) Using this field requires the AnyVolumeDataSource feature gate to be enabled. (Alpha) Using the namespace field of dataSourceRef requires the CrossNamespaceVolumeDataSource feature gate to be enabled. resources object resources represents the minimum resources the volume should have. If RecoverVolumeExpansionFailure feature is enabled users are allowed to specify resource requirements that are lower than value but must still be higher than capacity recorded in the status field of the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources selector object selector is a label query over volumes to consider for binding. storageClassName string storageClassName is the name of the StorageClass required by the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1 volumeMode string volumeMode defines what type of volume is required by the claim. Value of Filesystem is implied when not included in claim spec. volumeName string volumeName is the binding reference to the PersistentVolume backing this claim. 2.1.240. 
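Putting the storage fields together, the following sketch requests a generic ephemeral volume for each Alertmanager pod; the StorageClass name and size are hypothetical, and the commented-out emptyDir block shows the simpler non-persistent alternative.

```yaml
apiVersion: monitoring.coreos.com/v1
kind: Alertmanager
metadata:
  name: example
spec:
  storage:
    # emptyDir:                   # simpler alternative: node-local scratch space
    #   medium: Memory
    #   sizeLimit: 1Gi
    ephemeral:
      volumeClaimTemplate:
        metadata:
          labels:
            app.kubernetes.io/part-of: alertmanager
        spec:
          accessModes: ["ReadWriteOnce"]
          storageClassName: fast-ssd      # hypothetical StorageClass
          resources:
            requests:
              storage: 5Gi
```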
.spec.storage.ephemeral.volumeClaimTemplate.spec.dataSource Description dataSource field can be used to specify either: * An existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot) * An existing PVC (PersistentVolumeClaim) If the provisioner or an external controller can support the specified data source, it will create a new volume based on the contents of the specified data source. When the AnyVolumeDataSource feature gate is enabled, dataSource contents will be copied to dataSourceRef, and dataSourceRef contents will be copied to dataSource when dataSourceRef.namespace is not specified. If the namespace is specified, then dataSourceRef will not be copied to dataSource. Type object Required kind name Property Type Description apiGroup string APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required. kind string Kind is the type of resource being referenced name string Name is the name of resource being referenced 2.1.241. .spec.storage.ephemeral.volumeClaimTemplate.spec.dataSourceRef Description dataSourceRef specifies the object from which to populate the volume with data, if a non-empty volume is desired. This may be any object from a non-empty API group (non core object) or a PersistentVolumeClaim object. When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. This field will replace the functionality of the dataSource field and as such if both fields are non-empty, they must have the same value. For backwards compatibility, when namespace isn't specified in dataSourceRef, both fields (dataSource and dataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. When namespace is specified in dataSourceRef, dataSource isn't set to the same value and must be empty. There are three important differences between dataSource and dataSourceRef: * While dataSource only allows two specific types of objects, dataSourceRef allows any non-core object, as well as PersistentVolumeClaim objects. * While dataSource ignores disallowed values (dropping them), dataSourceRef preserves all values, and generates an error if a disallowed value is specified. * While dataSource only allows local objects, dataSourceRef allows objects in any namespaces. (Beta) Using this field requires the AnyVolumeDataSource feature gate to be enabled. (Alpha) Using the namespace field of dataSourceRef requires the CrossNamespaceVolumeDataSource feature gate to be enabled. Type object Required kind name Property Type Description apiGroup string APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required. kind string Kind is the type of resource being referenced name string Name is the name of resource being referenced namespace string Namespace is the namespace of resource being referenced Note that when a namespace is specified, a gateway.networking.k8s.io/ReferenceGrant object is required in the referent namespace to allow that namespace's owner to accept the reference. See the ReferenceGrant documentation for details. (Alpha) This field requires the CrossNamespaceVolumeDataSource feature gate to be enabled. 2.1.242. 
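When the ephemeral claim should be pre-populated, dataSourceRef (or the older dataSource field) points the claim at an existing source such as a VolumeSnapshot. A minimal sketch, assuming a hypothetical pre-created snapshot named alertmanager-baseline and a CSI driver that supports snapshot restore:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: Alertmanager
metadata:
  name: example
spec:
  storage:
    ephemeral:
      volumeClaimTemplate:
        spec:
          accessModes: ["ReadWriteOnce"]
          resources:
            requests:
              storage: 5Gi
          dataSourceRef:
            apiGroup: snapshot.storage.k8s.io
            kind: VolumeSnapshot
            name: alertmanager-baseline   # hypothetical, pre-created snapshot
```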
.spec.storage.ephemeral.volumeClaimTemplate.spec.resources Description resources represents the minimum resources the volume should have. If RecoverVolumeExpansionFailure feature is enabled users are allowed to specify resource requirements that are lower than value but must still be higher than capacity recorded in the status field of the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources Type object Property Type Description claims array Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. claims[] object ResourceClaim references one entry in PodSpec.ResourceClaims. limits integer-or-string Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ requests integer-or-string Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. Requests cannot exceed Limits. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ 2.1.243. .spec.storage.ephemeral.volumeClaimTemplate.spec.resources.claims Description Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. Type array 2.1.244. .spec.storage.ephemeral.volumeClaimTemplate.spec.resources.claims[] Description ResourceClaim references one entry in PodSpec.ResourceClaims. Type object Required name Property Type Description name string Name must match the name of one entry in pod.spec.resourceClaims of the Pod where this field is used. It makes that resource available inside a container. 2.1.245. .spec.storage.ephemeral.volumeClaimTemplate.spec.selector Description selector is a label query over volumes to consider for binding. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 2.1.246. .spec.storage.ephemeral.volumeClaimTemplate.spec.selector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 2.1.247. .spec.storage.ephemeral.volumeClaimTemplate.spec.selector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. 
If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 2.1.248. .spec.storage.volumeClaimTemplate Description Defines the PVC spec to be used by the Prometheus StatefulSets. The easiest way to use a volume that cannot be automatically provisioned is to use a label selector alongside manually created PersistentVolumes. Type object Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata object EmbeddedMetadata contains metadata relevant to an EmbeddedResource. spec object Defines the desired characteristics of a volume requested by a pod author. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims status object Deprecated: this field is never set. 2.1.249. .spec.storage.volumeClaimTemplate.metadata Description EmbeddedMetadata contains metadata relevant to an EmbeddedResource. Type object Property Type Description annotations object (string) Annotations is an unstructured key value map stored with a resource that may be set by external tools to store and retrieve arbitrary metadata. They are not queryable and should be preserved when modifying objects. More info: http://kubernetes.io/docs/user-guide/annotations labels object (string) Map of string keys and values that can be used to organize and categorize (scope and select) objects. May match selectors of replication controllers and services. More info: http://kubernetes.io/docs/user-guide/labels name string Name must be unique within a namespace. Is required when creating resources, although some resources may allow a client to request the generation of an appropriate name automatically. Name is primarily intended for creation idempotence and configuration definition. Cannot be updated. More info: http://kubernetes.io/docs/user-guide/identifiers#names 2.1.250. .spec.storage.volumeClaimTemplate.spec Description Defines the desired characteristics of a volume requested by a pod author. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims Type object Property Type Description accessModes array (string) accessModes contains the desired access modes the volume should have. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1 dataSource object dataSource field can be used to specify either: * An existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot) * An existing PVC (PersistentVolumeClaim) If the provisioner or an external controller can support the specified data source, it will create a new volume based on the contents of the specified data source. When the AnyVolumeDataSource feature gate is enabled, dataSource contents will be copied to dataSourceRef, and dataSourceRef contents will be copied to dataSource when dataSourceRef.namespace is not specified. 
If the namespace is specified, then dataSourceRef will not be copied to dataSource. dataSourceRef object dataSourceRef specifies the object from which to populate the volume with data, if a non-empty volume is desired. This may be any object from a non-empty API group (non core object) or a PersistentVolumeClaim object. When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. This field will replace the functionality of the dataSource field and as such if both fields are non-empty, they must have the same value. For backwards compatibility, when namespace isn't specified in dataSourceRef, both fields (dataSource and dataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. When namespace is specified in dataSourceRef, dataSource isn't set to the same value and must be empty. There are three important differences between dataSource and dataSourceRef: * While dataSource only allows two specific types of objects, dataSourceRef allows any non-core object, as well as PersistentVolumeClaim objects. * While dataSource ignores disallowed values (dropping them), dataSourceRef preserves all values, and generates an error if a disallowed value is specified. * While dataSource only allows local objects, dataSourceRef allows objects in any namespaces. (Beta) Using this field requires the AnyVolumeDataSource feature gate to be enabled. (Alpha) Using the namespace field of dataSourceRef requires the CrossNamespaceVolumeDataSource feature gate to be enabled. resources object resources represents the minimum resources the volume should have. If RecoverVolumeExpansionFailure feature is enabled users are allowed to specify resource requirements that are lower than value but must still be higher than capacity recorded in the status field of the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources selector object selector is a label query over volumes to consider for binding. storageClassName string storageClassName is the name of the StorageClass required by the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1 volumeMode string volumeMode defines what type of volume is required by the claim. Value of Filesystem is implied when not included in claim spec. volumeName string volumeName is the binding reference to the PersistentVolume backing this claim. 2.1.251. .spec.storage.volumeClaimTemplate.spec.dataSource Description dataSource field can be used to specify either: * An existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot) * An existing PVC (PersistentVolumeClaim) If the provisioner or an external controller can support the specified data source, it will create a new volume based on the contents of the specified data source. When the AnyVolumeDataSource feature gate is enabled, dataSource contents will be copied to dataSourceRef, and dataSourceRef contents will be copied to dataSource when dataSourceRef.namespace is not specified. If the namespace is specified, then dataSourceRef will not be copied to dataSource. Type object Required kind name Property Type Description apiGroup string APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required. 
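As an illustration of the .spec.storage.volumeClaimTemplate block described in sections 2.1.248 to 2.1.250, the following is a minimal sketch; the StorageClass name, the label, and the 50Gi request are placeholder values, not defaults.

spec:
  storage:
    volumeClaimTemplate:
      metadata:
        labels:
          app.kubernetes.io/part-of: monitoring   # copied into the generated PVC
      spec:
        storageClassName: fast-ssd                # placeholder StorageClass name
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 50Gi                         # placeholder capacity request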
kind string Kind is the type of resource being referenced name string Name is the name of resource being referenced 2.1.252. .spec.storage.volumeClaimTemplate.spec.dataSourceRef Description dataSourceRef specifies the object from which to populate the volume with data, if a non-empty volume is desired. This may be any object from a non-empty API group (non core object) or a PersistentVolumeClaim object. When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. This field will replace the functionality of the dataSource field and as such if both fields are non-empty, they must have the same value. For backwards compatibility, when namespace isn't specified in dataSourceRef, both fields (dataSource and dataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. When namespace is specified in dataSourceRef, dataSource isn't set to the same value and must be empty. There are three important differences between dataSource and dataSourceRef: * While dataSource only allows two specific types of objects, dataSourceRef allows any non-core object, as well as PersistentVolumeClaim objects. * While dataSource ignores disallowed values (dropping them), dataSourceRef preserves all values, and generates an error if a disallowed value is specified. * While dataSource only allows local objects, dataSourceRef allows objects in any namespaces. (Beta) Using this field requires the AnyVolumeDataSource feature gate to be enabled. (Alpha) Using the namespace field of dataSourceRef requires the CrossNamespaceVolumeDataSource feature gate to be enabled. Type object Required kind name Property Type Description apiGroup string APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required. kind string Kind is the type of resource being referenced name string Name is the name of resource being referenced namespace string Namespace is the namespace of resource being referenced Note that when a namespace is specified, a gateway.networking.k8s.io/ReferenceGrant object is required in the referent namespace to allow that namespace's owner to accept the reference. See the ReferenceGrant documentation for details. (Alpha) This field requires the CrossNamespaceVolumeDataSource feature gate to be enabled. 2.1.253. .spec.storage.volumeClaimTemplate.spec.resources Description resources represents the minimum resources the volume should have. If RecoverVolumeExpansionFailure feature is enabled users are allowed to specify resource requirements that are lower than value but must still be higher than capacity recorded in the status field of the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources Type object Property Type Description claims array Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. claims[] object ResourceClaim references one entry in PodSpec.ResourceClaims. limits integer-or-string Limits describes the maximum amount of compute resources allowed. 
More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ requests integer-or-string Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. Requests cannot exceed Limits. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ 2.1.254. .spec.storage.volumeClaimTemplate.spec.resources.claims Description Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. Type array 2.1.255. .spec.storage.volumeClaimTemplate.spec.resources.claims[] Description ResourceClaim references one entry in PodSpec.ResourceClaims. Type object Required name Property Type Description name string Name must match the name of one entry in pod.spec.resourceClaims of the Pod where this field is used. It makes that resource available inside a container. 2.1.256. .spec.storage.volumeClaimTemplate.spec.selector Description selector is a label query over volumes to consider for binding. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 2.1.257. .spec.storage.volumeClaimTemplate.spec.selector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 2.1.258. .spec.storage.volumeClaimTemplate.spec.selector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 2.1.259. .spec.storage.volumeClaimTemplate.status Description Deprecated: this field is never set. Type object Property Type Description accessModes array (string) accessModes contains the actual access modes the volume backing the PVC has. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1 allocatedResourceStatuses object (string) allocatedResourceStatuses stores status of resource being resized for the given PVC. Key names follow standard Kubernetes label syntax. Valid values are either: * Un-prefixed keys: - storage - the capacity of the volume. 
* Custom resources must use implementation-defined prefixed names such as "example.com/my-custom-resource" Apart from above values - keys that are unprefixed or have kubernetes.io prefix are considered reserved and hence may not be used. ClaimResourceStatus can be in any of following states: - ControllerResizeInProgress: State set when resize controller starts resizing the volume in control-plane. - ControllerResizeFailed: State set when resize has failed in resize controller with a terminal error. - NodeResizePending: State set when resize controller has finished resizing the volume but further resizing of volume is needed on the node. - NodeResizeInProgress: State set when kubelet starts resizing the volume. - NodeResizeFailed: State set when resizing has failed in kubelet with a terminal error. Transient errors don't set NodeResizeFailed. For example: if expanding a PVC for more capacity - this field can be one of the following states: - pvc.status.allocatedResourceStatus['storage'] = "ControllerResizeInProgress" - pvc.status.allocatedResourceStatus['storage'] = "ControllerResizeFailed" - pvc.status.allocatedResourceStatus['storage'] = "NodeResizePending" - pvc.status.allocatedResourceStatus['storage'] = "NodeResizeInProgress" - pvc.status.allocatedResourceStatus['storage'] = "NodeResizeFailed" When this field is not set, it means that no resize operation is in progress for the given PVC. A controller that receives PVC update with previously unknown resourceName or ClaimResourceStatus should ignore the update for the purpose it was designed. For example - a controller that only is responsible for resizing capacity of the volume, should ignore PVC updates that change other valid resources associated with PVC. This is an alpha field and requires enabling RecoverVolumeExpansionFailure feature. allocatedResources integer-or-string allocatedResources tracks the resources allocated to a PVC including its capacity. Key names follow standard Kubernetes label syntax. Valid values are either: * Un-prefixed keys: - storage - the capacity of the volume. * Custom resources must use implementation-defined prefixed names such as "example.com/my-custom-resource" Apart from above values - keys that are unprefixed or have kubernetes.io prefix are considered reserved and hence may not be used. Capacity reported here may be larger than the actual capacity when a volume expansion operation is requested. For storage quota, the larger value from allocatedResources and PVC.spec.resources is used. If allocatedResources is not set, PVC.spec.resources alone is used for quota calculation. If a volume expansion capacity request is lowered, allocatedResources is only lowered if there are no expansion operations in progress and if the actual volume capacity is equal or lower than the requested capacity. A controller that receives PVC update with previously unknown resourceName should ignore the update for the purpose it was designed. For example - a controller that only is responsible for resizing capacity of the volume, should ignore PVC updates that change other valid resources associated with PVC. This is an alpha field and requires enabling RecoverVolumeExpansionFailure feature. capacity integer-or-string capacity represents the actual resources of the underlying volume. conditions array conditions is the current Condition of persistent volume claim. If underlying persistent volume is being resized then the Condition will be set to 'ResizeStarted'. 
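Because section 2.1.248 notes that a label selector alongside manually created PersistentVolumes is the easiest way to use a volume that cannot be automatically provisioned, the following sketch shows the selector fields from sections 2.1.256 to 2.1.258; the label keys and values are illustrative placeholders.

spec:
  storage:
    volumeClaimTemplate:
      spec:
        selector:
          matchLabels:
            pv-pool: monitoring          # placeholder label set on pre-provisioned PersistentVolumes
          matchExpressions:
            - key: disk-type             # placeholder label key
              operator: In               # valid operators are In, NotIn, Exists and DoesNotExist
              values:
                - ssd
                - nvme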
conditions[] object PersistentVolumeClaimCondition contains details about state of pvc phase string phase represents the current phase of PersistentVolumeClaim. 2.1.260. .spec.storage.volumeClaimTemplate.status.conditions Description conditions is the current Condition of persistent volume claim. If underlying persistent volume is being resized then the Condition will be set to 'ResizeStarted'. Type array 2.1.261. .spec.storage.volumeClaimTemplate.status.conditions[] Description PersistentVolumeClaimCondition contains details about state of pvc Type object Required status type Property Type Description lastProbeTime string lastProbeTime is the time we probed the condition. lastTransitionTime string lastTransitionTime is the time the condition transitioned from one status to another. message string message is the human-readable message indicating details about last transition. reason string reason is a unique, this should be a short, machine understandable string that gives the reason for condition's last transition. If it reports "ResizeStarted" that means the underlying persistent volume is being resized. status string type string PersistentVolumeClaimConditionType is a valid value of PersistentVolumeClaimCondition.Type 2.1.262. .spec.tolerations Description If specified, the pod's tolerations. Type array 2.1.263. .spec.tolerations[] Description The pod this Toleration is attached to tolerates any taint that matches the triple <key,value,effect> using the matching operator <operator>. Type object Property Type Description effect string Effect indicates the taint effect to match. Empty means match all taint effects. When specified, allowed values are NoSchedule, PreferNoSchedule and NoExecute. key string Key is the taint key that the toleration applies to. Empty means match all taint keys. If the key is empty, operator must be Exists; this combination means to match all values and all keys. operator string Operator represents a key's relationship to the value. Valid operators are Exists and Equal. Defaults to Equal. Exists is equivalent to wildcard for value, so that a pod can tolerate all taints of a particular category. tolerationSeconds integer TolerationSeconds represents the period of time the toleration (which must be of effect NoExecute, otherwise this field is ignored) tolerates the taint. By default, it is not set, which means tolerate the taint forever (do not evict). Zero and negative values will be treated as 0 (evict immediately) by the system. value string Value is the taint value the toleration matches to. If the operator is Exists, the value should be empty, otherwise just a regular string. 2.1.264. .spec.topologySpreadConstraints Description If specified, the pod's topology spread constraints. Type array 2.1.265. .spec.topologySpreadConstraints[] Description TopologySpreadConstraint specifies how to spread matching pods among the given topology. Type object Required maxSkew topologyKey whenUnsatisfiable Property Type Description labelSelector object LabelSelector is used to find matching pods. Pods that match this label selector are counted to determine the number of pods in their corresponding topology domain. matchLabelKeys array (string) MatchLabelKeys is a set of pod label keys to select the pods over which spreading will be calculated. The keys are used to lookup values from the incoming pod labels, those key-value labels are ANDed with labelSelector to select the group of existing pods over which spreading will be calculated for the incoming pod. 
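A minimal sketch of the .spec.tolerations array described in sections 2.1.262 and 2.1.263 follows; the dedicated/monitoring taint is an assumed example, not a predefined value.

spec:
  tolerations:
    - key: dedicated                     # assumed taint key applied to dedicated nodes
      operator: Equal
      value: monitoring                  # assumed taint value
      effect: NoSchedule
    - key: node.kubernetes.io/not-ready
      operator: Exists
      effect: NoExecute
      tolerationSeconds: 300             # tolerate the taint for 5 minutes before eviction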
The same key is forbidden to exist in both MatchLabelKeys and LabelSelector. MatchLabelKeys cannot be set when LabelSelector isn't set. Keys that don't exist in the incoming pod labels will be ignored. A null or empty list means only match against labelSelector. This is a beta field and requires the MatchLabelKeysInPodTopologySpread feature gate to be enabled (enabled by default). maxSkew integer MaxSkew describes the degree to which pods may be unevenly distributed. When whenUnsatisfiable=DoNotSchedule , it is the maximum permitted difference between the number of matching pods in the target topology and the global minimum. The global minimum is the minimum number of matching pods in an eligible domain or zero if the number of eligible domains is less than MinDomains. For example, in a 3-zone cluster, MaxSkew is set to 1, and pods with the same labelSelector spread as 2/2/1: In this case, the global minimum is 1. | zone1 | zone2 | zone3 | | P P | P P | P | - if MaxSkew is 1, incoming pod can only be scheduled to zone3 to become 2/2/2; scheduling it onto zone1(zone2) would make the ActualSkew(3-1) on zone1(zone2) violate MaxSkew(1). - if MaxSkew is 2, incoming pod can be scheduled onto any zone. When whenUnsatisfiable=ScheduleAnyway , it is used to give higher precedence to topologies that satisfy it. It's a required field. Default value is 1 and 0 is not allowed. minDomains integer MinDomains indicates a minimum number of eligible domains. When the number of eligible domains with matching topology keys is less than minDomains, Pod Topology Spread treats "global minimum" as 0, and then the calculation of Skew is performed. And when the number of eligible domains with matching topology keys equals or greater than minDomains, this value has no effect on scheduling. As a result, when the number of eligible domains is less than minDomains, scheduler won't schedule more than maxSkew Pods to those domains. If value is nil, the constraint behaves as if MinDomains is equal to 1. Valid values are integers greater than 0. When value is not nil, WhenUnsatisfiable must be DoNotSchedule. For example, in a 3-zone cluster, MaxSkew is set to 2, MinDomains is set to 5 and pods with the same labelSelector spread as 2/2/2: | zone1 | zone2 | zone3 | | P P | P P | P P | The number of domains is less than 5(MinDomains), so "global minimum" is treated as 0. In this situation, new pod with the same labelSelector cannot be scheduled, because computed skew will be 3(3 - 0) if new Pod is scheduled to any of the three zones, it will violate MaxSkew. This is a beta field and requires the MinDomainsInPodTopologySpread feature gate to be enabled (enabled by default). nodeAffinityPolicy string NodeAffinityPolicy indicates how we will treat Pod's nodeAffinity/nodeSelector when calculating pod topology spread skew. Options are: - Honor: only nodes matching nodeAffinity/nodeSelector are included in the calculations. - Ignore: nodeAffinity/nodeSelector are ignored. All nodes are included in the calculations. If this value is nil, the behavior is equivalent to the Honor policy. This is a beta-level feature default enabled by the NodeInclusionPolicyInPodTopologySpread feature flag. nodeTaintsPolicy string NodeTaintsPolicy indicates how we will treat node taints when calculating pod topology spread skew. Options are: - Honor: nodes without taints, along with tainted nodes for which the incoming pod has a toleration, are included. - Ignore: node taints are ignored. All nodes are included. 
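The following is a minimal sketch of a .spec.topologySpreadConstraints entry (sections 2.1.264 to 2.1.268) that spreads pods across zones; the pod label used in labelSelector is an assumption and must match the labels of the pods being scheduled.

spec:
  topologySpreadConstraints:
    - maxSkew: 1
      topologyKey: topology.kubernetes.io/zone   # each zone is treated as one topology domain
      whenUnsatisfiable: DoNotSchedule
      labelSelector:
        matchLabels:
          app.kubernetes.io/name: alertmanager   # assumed pod label; counted per topology domain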
If this value is nil, the behavior is equivalent to the Ignore policy. This is a beta-level feature default enabled by the NodeInclusionPolicyInPodTopologySpread feature flag. topologyKey string TopologyKey is the key of node labels. Nodes that have a label with this key and identical values are considered to be in the same topology. We consider each <key, value> as a "bucket", and try to put balanced number of pods into each bucket. We define a domain as a particular instance of a topology. Also, we define an eligible domain as a domain whose nodes meet the requirements of nodeAffinityPolicy and nodeTaintsPolicy. e.g. If TopologyKey is "kubernetes.io/hostname", each Node is a domain of that topology. And, if TopologyKey is "topology.kubernetes.io/zone", each zone is a domain of that topology. It's a required field. whenUnsatisfiable string WhenUnsatisfiable indicates how to deal with a pod if it doesn't satisfy the spread constraint. - DoNotSchedule (default) tells the scheduler not to schedule it. - ScheduleAnyway tells the scheduler to schedule the pod in any location, but giving higher precedence to topologies that would help reduce the skew. A constraint is considered "Unsatisfiable" for an incoming pod if and only if every possible node assignment for that pod would violate "MaxSkew" on some topology. For example, in a 3-zone cluster, MaxSkew is set to 1, and pods with the same labelSelector spread as 3/1/1: | zone1 | zone2 | zone3 | | P P P | P | P | If WhenUnsatisfiable is set to DoNotSchedule, incoming pod can only be scheduled to zone2(zone3) to become 3/2/1(3/1/2) as ActualSkew(2-1) on zone2(zone3) satisfies MaxSkew(1). In other words, the cluster can still be imbalanced, but scheduler won't make it more imbalanced. It's a required field. 2.1.266. .spec.topologySpreadConstraints[].labelSelector Description LabelSelector is used to find matching pods. Pods that match this label selector are counted to determine the number of pods in their corresponding topology domain. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 2.1.267. .spec.topologySpreadConstraints[].labelSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 2.1.268. .spec.topologySpreadConstraints[].labelSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 2.1.269. 
.spec.volumeMounts Description VolumeMounts allows configuration of additional VolumeMounts on the output StatefulSet definition. VolumeMounts specified will be appended to other VolumeMounts in the alertmanager container, that are generated as a result of StorageSpec objects. Type array 2.1.270. .spec.volumeMounts[] Description VolumeMount describes a mounting of a Volume within a container. Type object Required mountPath name Property Type Description mountPath string Path within the container at which the volume should be mounted. Must not contain ':'. mountPropagation string mountPropagation determines how mounts are propagated from the host to container and the other way around. When not set, MountPropagationNone is used. This field is beta in 1.10. name string This must match the Name of a Volume. readOnly boolean Mounted read-only if true, read-write otherwise (false or unspecified). Defaults to false. subPath string Path within the volume from which the container's volume should be mounted. Defaults to "" (volume's root). subPathExpr string Expanded path within the volume from which the container's volume should be mounted. Behaves similarly to SubPath but environment variable references USD(VAR_NAME) are expanded using the container's environment. Defaults to "" (volume's root). SubPathExpr and SubPath are mutually exclusive. 2.1.271. .spec.volumes Description Volumes allows configuration of additional volumes on the output StatefulSet definition. Volumes specified will be appended to other volumes that are generated as a result of StorageSpec objects. Type array 2.1.272. .spec.volumes[] Description Volume represents a named volume in a pod that may be accessed by any container in the pod. Type object Required name Property Type Description awsElasticBlockStore object awsElasticBlockStore represents an AWS Disk resource that is attached to a kubelet's host machine and then exposed to the pod. More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore azureDisk object azureDisk represents an Azure Data Disk mount on the host and bind mount to the pod. azureFile object azureFile represents an Azure File Service mount on the host and bind mount to the pod. cephfs object cephFS represents a Ceph FS mount on the host that shares a pod's lifetime cinder object cinder represents a cinder volume attached and mounted on kubelets host machine. More info: https://examples.k8s.io/mysql-cinder-pd/README.md configMap object configMap represents a configMap that should populate this volume csi object csi (Container Storage Interface) represents ephemeral storage that is handled by certain external CSI drivers (Beta feature). downwardAPI object downwardAPI represents downward API about the pod that should populate this volume emptyDir object emptyDir represents a temporary directory that shares a pod's lifetime. More info: https://kubernetes.io/docs/concepts/storage/volumes#emptydir ephemeral object ephemeral represents a volume that is handled by a cluster storage driver. The volume's lifecycle is tied to the pod that defines it - it will be created before the pod starts, and deleted when the pod is removed. 
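Since .spec.volumeMounts entries must reference volumes defined in .spec.volumes (sections 2.1.269 to 2.1.272), the following sketch pairs the two; the volume name, Secret name, and mount path are placeholder values.

spec:
  volumes:
    - name: extra-tls                          # placeholder volume name
      secret:
        secretName: alertmanager-extra-tls     # placeholder Secret name
  volumeMounts:
    - name: extra-tls                          # must match the .spec.volumes[].name above
      mountPath: /etc/alertmanager/extra-tls   # placeholder path inside the container
      readOnly: true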
Use this if: a) the volume is only needed while the pod runs, b) features of normal volumes like restoring from snapshot or capacity tracking are needed, c) the storage driver is specified through a storage class, and d) the storage driver supports dynamic volume provisioning through a PersistentVolumeClaim (see EphemeralVolumeSource for more information on the connection between this volume type and PersistentVolumeClaim). Use PersistentVolumeClaim or one of the vendor-specific APIs for volumes that persist for longer than the lifecycle of an individual pod. Use CSI for light-weight local ephemeral volumes if the CSI driver is meant to be used that way - see the documentation of the driver for more information. A pod can use both types of ephemeral volumes and persistent volumes at the same time. fc object fc represents a Fibre Channel resource that is attached to a kubelet's host machine and then exposed to the pod. flexVolume object flexVolume represents a generic volume resource that is provisioned/attached using an exec based plugin. flocker object flocker represents a Flocker volume attached to a kubelet's host machine. This depends on the Flocker control service being running gcePersistentDisk object gcePersistentDisk represents a GCE Disk resource that is attached to a kubelet's host machine and then exposed to the pod. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk gitRepo object gitRepo represents a git repository at a particular revision. DEPRECATED: GitRepo is deprecated. To provision a container with a git repo, mount an EmptyDir into an InitContainer that clones the repo using git, then mount the EmptyDir into the Pod's container. glusterfs object glusterfs represents a Glusterfs mount on the host that shares a pod's lifetime. More info: https://examples.k8s.io/volumes/glusterfs/README.md hostPath object hostPath represents a pre-existing file or directory on the host machine that is directly exposed to the container. This is generally used for system agents or other privileged things that are allowed to see the host machine. Most containers will NOT need this. More info: https://kubernetes.io/docs/concepts/storage/volumes#hostpath --- TODO(jonesdl) We need to restrict who can use host directory mounts and who can/can not mount host directories as read/write. iscsi object iscsi represents an ISCSI Disk resource that is attached to a kubelet's host machine and then exposed to the pod. More info: https://examples.k8s.io/volumes/iscsi/README.md name string name of the volume. Must be a DNS_LABEL and unique within the pod. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names nfs object nfs represents an NFS mount on the host that shares a pod's lifetime More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs persistentVolumeClaim object persistentVolumeClaimVolumeSource represents a reference to a PersistentVolumeClaim in the same namespace. 
More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims photonPersistentDisk object photonPersistentDisk represents a PhotonController persistent disk attached and mounted on kubelets host machine portworxVolume object portworxVolume represents a portworx volume attached and mounted on kubelets host machine projected object projected items for all in one resources secrets, configmaps, and downward API quobyte object quobyte represents a Quobyte mount on the host that shares a pod's lifetime rbd object rbd represents a Rados Block Device mount on the host that shares a pod's lifetime. More info: https://examples.k8s.io/volumes/rbd/README.md scaleIO object scaleIO represents a ScaleIO persistent volume attached and mounted on Kubernetes nodes. secret object secret represents a secret that should populate this volume. More info: https://kubernetes.io/docs/concepts/storage/volumes#secret storageos object storageOS represents a StorageOS volume attached and mounted on Kubernetes nodes. vsphereVolume object vsphereVolume represents a vSphere volume attached and mounted on kubelets host machine 2.1.273. .spec.volumes[].awsElasticBlockStore Description awsElasticBlockStore represents an AWS Disk resource that is attached to a kubelet's host machine and then exposed to the pod. More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore Type object Required volumeID Property Type Description fsType string fsType is the filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore TODO: how do we prevent errors in the filesystem from compromising the machine partition integer partition is the partition in the volume that you want to mount. If omitted, the default is to mount by volume name. Examples: For volume /dev/sda1, you specify the partition as "1". Similarly, the volume partition for /dev/sda is "0" (or you can leave the property empty). readOnly boolean readOnly value true will force the readOnly setting in VolumeMounts. More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore volumeID string volumeID is unique ID of the persistent disk resource in AWS (Amazon EBS volume). More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore 2.1.274. .spec.volumes[].azureDisk Description azureDisk represents an Azure Data Disk mount on the host and bind mount to the pod. Type object Required diskName diskURI Property Type Description cachingMode string cachingMode is the Host Caching mode: None, Read Only, Read Write. diskName string diskName is the Name of the data disk in the blob storage diskURI string diskURI is the URI of data disk in the blob storage fsType string fsType is Filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. kind string kind expected values are Shared: multiple blob disks per storage account Dedicated: single blob disk per storage account Managed: azure managed data disk (only in managed availability set). defaults to shared readOnly boolean readOnly Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. 2.1.275. 
.spec.volumes[].azureFile Description azureFile represents an Azure File Service mount on the host and bind mount to the pod. Type object Required secretName shareName Property Type Description readOnly boolean readOnly defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. secretName string secretName is the name of secret that contains Azure Storage Account Name and Key shareName string shareName is the azure share Name 2.1.276. .spec.volumes[].cephfs Description cephFS represents a Ceph FS mount on the host that shares a pod's lifetime Type object Required monitors Property Type Description monitors array (string) monitors is Required: Monitors is a collection of Ceph monitors More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it path string path is Optional: Used as the mounted root, rather than the full Ceph tree, default is / readOnly boolean readOnly is Optional: Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it secretFile string secretFile is Optional: SecretFile is the path to key ring for User, default is /etc/ceph/user.secret More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it secretRef object secretRef is Optional: SecretRef is reference to the authentication secret for User, default is empty. More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it user string user is optional: User is the rados user name, default is admin More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it 2.1.277. .spec.volumes[].cephfs.secretRef Description secretRef is Optional: SecretRef is reference to the authentication secret for User, default is empty. More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? 2.1.278. .spec.volumes[].cinder Description cinder represents a cinder volume attached and mounted on kubelets host machine. More info: https://examples.k8s.io/mysql-cinder-pd/README.md Type object Required volumeID Property Type Description fsType string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. More info: https://examples.k8s.io/mysql-cinder-pd/README.md readOnly boolean readOnly defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. More info: https://examples.k8s.io/mysql-cinder-pd/README.md secretRef object secretRef is optional: points to a secret object containing parameters used to connect to OpenStack. volumeID string volumeID used to identify the volume in cinder. More info: https://examples.k8s.io/mysql-cinder-pd/README.md 2.1.279. .spec.volumes[].cinder.secretRef Description secretRef is optional: points to a secret object containing parameters used to connect to OpenStack. Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? 2.1.280. 
.spec.volumes[].configMap Description configMap represents a configMap that should populate this volume Type object Property Type Description defaultMode integer defaultMode is optional: mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Defaults to 0644. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. items array items if unspecified, each key-value pair in the Data field of the referenced ConfigMap will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the ConfigMap, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. items[] object Maps a string key to a path within a volume. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean optional specify whether the ConfigMap or its keys must be defined 2.1.281. .spec.volumes[].configMap.items Description items if unspecified, each key-value pair in the Data field of the referenced ConfigMap will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the ConfigMap, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. Type array 2.1.282. .spec.volumes[].configMap.items[] Description Maps a string key to a path within a volume. Type object Required key path Property Type Description key string key is the key to project. mode integer mode is Optional: mode bits used to set permissions on this file. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. path string path is the relative path of the file to map the key to. May not be an absolute path. May not contain the path element '..'. May not start with the string '..'. 2.1.283. .spec.volumes[].csi Description csi (Container Storage Interface) represents ephemeral storage that is handled by certain external CSI drivers (Beta feature). Type object Required driver Property Type Description driver string driver is the name of the CSI driver that handles this volume. Consult with your admin for the correct name as registered in the cluster. fsType string fsType to mount. Ex. "ext4", "xfs", "ntfs". If not provided, the empty value is passed to the associated CSI driver which will determine the default filesystem to apply. 
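A minimal sketch of a configMap volume as described in sections 2.1.280 to 2.1.282 follows; the ConfigMap name, key, and target path are placeholders.

spec:
  volumes:
    - name: notification-templates           # placeholder volume name
      configMap:
        name: alertmanager-templates         # placeholder ConfigMap name
        defaultMode: 0440                    # octal mode applied to the projected files
        optional: true                       # do not fail volume setup if the ConfigMap is missing
        items:
          - key: page.tmpl                   # placeholder key in the ConfigMap data
            path: templates/page.tmpl        # relative path the key is projected to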
nodePublishSecretRef object nodePublishSecretRef is a reference to the secret object containing sensitive information to pass to the CSI driver to complete the CSI NodePublishVolume and NodeUnpublishVolume calls. This field is optional, and may be empty if no secret is required. If the secret object contains more than one secret, all secret references are passed. readOnly boolean readOnly specifies a read-only configuration for the volume. Defaults to false (read/write). volumeAttributes object (string) volumeAttributes stores driver-specific properties that are passed to the CSI driver. Consult your driver's documentation for supported values. 2.1.284. .spec.volumes[].csi.nodePublishSecretRef Description nodePublishSecretRef is a reference to the secret object containing sensitive information to pass to the CSI driver to complete the CSI NodePublishVolume and NodeUnpublishVolume calls. This field is optional, and may be empty if no secret is required. If the secret object contains more than one secret, all secret references are passed. Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? 2.1.285. .spec.volumes[].downwardAPI Description downwardAPI represents downward API about the pod that should populate this volume Type object Property Type Description defaultMode integer Optional: mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Defaults to 0644. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. items array Items is a list of downward API volume files items[] object DownwardAPIVolumeFile represents information to create the file containing the pod field 2.1.286. .spec.volumes[].downwardAPI.items Description Items is a list of downward API volume files Type array 2.1.287. .spec.volumes[].downwardAPI.items[] Description DownwardAPIVolumeFile represents information to create the file containing the pod field Type object Required path Property Type Description fieldRef object Required: Selects a field of the pod: only annotations, labels, name and namespace are supported. mode integer Optional: mode bits used to set permissions on this file, must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. path string Required: Path is the relative path name of the file to be created. Must not be absolute or contain the '..' path. Must be utf-8 encoded. The first item of the relative path must not start with '..' resourceFieldRef object Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, requests.cpu and requests.memory) are currently supported. 2.1.288. .spec.volumes[].downwardAPI.items[].fieldRef Description Required: Selects a field of the pod: only annotations, labels, name and namespace are supported.
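The downwardAPI volume described in sections 2.1.285 to 2.1.289 can be sketched as follows; the container name used by resourceFieldRef is an assumption and must match a real container in the pod.

spec:
  volumes:
    - name: pod-info                         # placeholder volume name
      downwardAPI:
        defaultMode: 0644
        items:
          - path: labels                     # file created inside the volume
            fieldRef:
              fieldPath: metadata.labels     # only annotations, labels, name and namespace are supported
          - path: cpu-limit
            resourceFieldRef:
              containerName: alertmanager    # assumed container name
              resource: limits.cpu
              divisor: 1m                    # report the limit in millicores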
Type object Required fieldPath Property Type Description apiVersion string Version of the schema the FieldPath is written in terms of, defaults to "v1". fieldPath string Path of the field to select in the specified API version. 2.1.289. .spec.volumes[].downwardAPI.items[].resourceFieldRef Description Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, requests.cpu and requests.memory) are currently supported. Type object Required resource Property Type Description containerName string Container name: required for volumes, optional for env vars divisor integer-or-string Specifies the output format of the exposed resources, defaults to "1" resource string Required: resource to select 2.1.290. .spec.volumes[].emptyDir Description emptyDir represents a temporary directory that shares a pod's lifetime. More info: https://kubernetes.io/docs/concepts/storage/volumes#emptydir Type object Property Type Description medium string medium represents what type of storage medium should back this directory. The default is "" which means to use the node's default medium. Must be an empty string (default) or Memory. More info: https://kubernetes.io/docs/concepts/storage/volumes#emptydir sizeLimit integer-or-string sizeLimit is the total amount of local storage required for this EmptyDir volume. The size limit is also applicable for memory medium. The maximum usage on memory medium EmptyDir would be the minimum value between the SizeLimit specified here and the sum of memory limits of all containers in a pod. The default is nil which means that the limit is undefined. More info: https://kubernetes.io/docs/concepts/storage/volumes#emptydir 2.1.291. .spec.volumes[].ephemeral Description ephemeral represents a volume that is handled by a cluster storage driver. The volume's lifecycle is tied to the pod that defines it - it will be created before the pod starts, and deleted when the pod is removed. Use this if: a) the volume is only needed while the pod runs, b) features of normal volumes like restoring from snapshot or capacity tracking are needed, c) the storage driver is specified through a storage class, and d) the storage driver supports dynamic volume provisioning through a PersistentVolumeClaim (see EphemeralVolumeSource for more information on the connection between this volume type and PersistentVolumeClaim). Use PersistentVolumeClaim or one of the vendor-specific APIs for volumes that persist for longer than the lifecycle of an individual pod. Use CSI for light-weight local ephemeral volumes if the CSI driver is meant to be used that way - see the documentation of the driver for more information. A pod can use both types of ephemeral volumes and persistent volumes at the same time. Type object Property Type Description volumeClaimTemplate object Will be used to create a stand-alone PVC to provision the volume. The pod in which this EphemeralVolumeSource is embedded will be the owner of the PVC, i.e. the PVC will be deleted together with the pod. The name of the PVC will be <pod name>-<volume name> where <volume name> is the name from the PodSpec.Volumes array entry. Pod validation will reject the pod if the concatenated name is not valid for a PVC (for example, too long). An existing PVC with that name that is not owned by the pod will not be used for the pod to avoid using an unrelated volume by mistake. Starting the pod is then blocked until the unrelated PVC is removed. 
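The emptyDir volume described in section 2.1.290 can be sketched as follows; the size limit and the use of the Memory medium are illustrative choices.

spec:
  volumes:
    - name: scratch                          # placeholder volume name
      emptyDir:
        medium: Memory                       # back the directory with tmpfs instead of node disk
        sizeLimit: 256Mi                     # usage above this limit can lead to pod eviction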
If such a pre-created PVC is meant to be used by the pod, the PVC has to be updated with an owner reference to the pod once the pod exists. Normally this should not be necessary, but it may be useful when manually reconstructing a broken cluster. This field is read-only and no changes will be made by Kubernetes to the PVC after it has been created. Required, must not be nil. 2.1.292. .spec.volumes[].ephemeral.volumeClaimTemplate Description Will be used to create a stand-alone PVC to provision the volume. The pod in which this EphemeralVolumeSource is embedded will be the owner of the PVC, i.e. the PVC will be deleted together with the pod. The name of the PVC will be <pod name>-<volume name> where <volume name> is the name from the PodSpec.Volumes array entry. Pod validation will reject the pod if the concatenated name is not valid for a PVC (for example, too long). An existing PVC with that name that is not owned by the pod will not be used for the pod to avoid using an unrelated volume by mistake. Starting the pod is then blocked until the unrelated PVC is removed. If such a pre-created PVC is meant to be used by the pod, the PVC has to be updated with an owner reference to the pod once the pod exists. Normally this should not be necessary, but it may be useful when manually reconstructing a broken cluster. This field is read-only and no changes will be made by Kubernetes to the PVC after it has been created. Required, must not be nil. Type object Required spec Property Type Description metadata object May contain labels and annotations that will be copied into the PVC when creating it. No other fields are allowed and will be rejected during validation. spec object The specification for the PersistentVolumeClaim. The entire content is copied unchanged into the PVC that gets created from this template. The same fields as in a PersistentVolumeClaim are also valid here. 2.1.293. .spec.volumes[].ephemeral.volumeClaimTemplate.metadata Description May contain labels and annotations that will be copied into the PVC when creating it. No other fields are allowed and will be rejected during validation. Type object 2.1.294. .spec.volumes[].ephemeral.volumeClaimTemplate.spec Description The specification for the PersistentVolumeClaim. The entire content is copied unchanged into the PVC that gets created from this template. The same fields as in a PersistentVolumeClaim are also valid here. Type object Property Type Description accessModes array (string) accessModes contains the desired access modes the volume should have. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1 dataSource object dataSource field can be used to specify either: * An existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot) * An existing PVC (PersistentVolumeClaim) If the provisioner or an external controller can support the specified data source, it will create a new volume based on the contents of the specified data source. When the AnyVolumeDataSource feature gate is enabled, dataSource contents will be copied to dataSourceRef, and dataSourceRef contents will be copied to dataSource when dataSourceRef.namespace is not specified. If the namespace is specified, then dataSourceRef will not be copied to dataSource. dataSourceRef object dataSourceRef specifies the object from which to populate the volume with data, if a non-empty volume is desired. This may be any object from a non-empty API group (non core object) or a PersistentVolumeClaim object.
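A minimal sketch of a generic ephemeral volume (sections 2.1.291 to 2.1.294) follows; the StorageClass name and requested size are placeholders, and the generated PVC will be named <pod name>-cache as described above.

spec:
  volumes:
    - name: cache                                    # the generated PVC is named <pod name>-cache
      ephemeral:
        volumeClaimTemplate:
          metadata:
            labels:
              app.kubernetes.io/component: cache     # copied into the generated PVC
          spec:
            accessModes:
              - ReadWriteOnce
            storageClassName: fast-ssd               # placeholder StorageClass name
            resources:
              requests:
                storage: 1Gi                         # placeholder size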
When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. This field will replace the functionality of the dataSource field and as such if both fields are non-empty, they must have the same value. For backwards compatibility, when namespace isn't specified in dataSourceRef, both fields (dataSource and dataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. When namespace is specified in dataSourceRef, dataSource isn't set to the same value and must be empty. There are three important differences between dataSource and dataSourceRef: * While dataSource only allows two specific types of objects, dataSourceRef allows any non-core object, as well as PersistentVolumeClaim objects. * While dataSource ignores disallowed values (dropping them), dataSourceRef preserves all values, and generates an error if a disallowed value is specified. * While dataSource only allows local objects, dataSourceRef allows objects in any namespaces. (Beta) Using this field requires the AnyVolumeDataSource feature gate to be enabled. (Alpha) Using the namespace field of dataSourceRef requires the CrossNamespaceVolumeDataSource feature gate to be enabled. resources object resources represents the minimum resources the volume should have. If RecoverVolumeExpansionFailure feature is enabled users are allowed to specify resource requirements that are lower than value but must still be higher than capacity recorded in the status field of the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources selector object selector is a label query over volumes to consider for binding. storageClassName string storageClassName is the name of the StorageClass required by the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1 volumeMode string volumeMode defines what type of volume is required by the claim. Value of Filesystem is implied when not included in claim spec. volumeName string volumeName is the binding reference to the PersistentVolume backing this claim. 2.1.295. .spec.volumes[].ephemeral.volumeClaimTemplate.spec.dataSource Description dataSource field can be used to specify either: * An existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot) * An existing PVC (PersistentVolumeClaim) If the provisioner or an external controller can support the specified data source, it will create a new volume based on the contents of the specified data source. When the AnyVolumeDataSource feature gate is enabled, dataSource contents will be copied to dataSourceRef, and dataSourceRef contents will be copied to dataSource when dataSourceRef.namespace is not specified. If the namespace is specified, then dataSourceRef will not be copied to dataSource. Type object Required kind name Property Type Description apiGroup string APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required. kind string Kind is the type of resource being referenced name string Name is the name of resource being referenced 2.1.296. .spec.volumes[].ephemeral.volumeClaimTemplate.spec.dataSourceRef Description dataSourceRef specifies the object from which to populate the volume with data, if a non-empty volume is desired. 
This may be any object from a non-empty API group (non core object) or a PersistentVolumeClaim object. When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. This field will replace the functionality of the dataSource field and as such if both fields are non-empty, they must have the same value. For backwards compatibility, when namespace isn't specified in dataSourceRef, both fields (dataSource and dataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. When namespace is specified in dataSourceRef, dataSource isn't set to the same value and must be empty. There are three important differences between dataSource and dataSourceRef: * While dataSource only allows two specific types of objects, dataSourceRef allows any non-core object, as well as PersistentVolumeClaim objects. * While dataSource ignores disallowed values (dropping them), dataSourceRef preserves all values, and generates an error if a disallowed value is specified. * While dataSource only allows local objects, dataSourceRef allows objects in any namespaces. (Beta) Using this field requires the AnyVolumeDataSource feature gate to be enabled. (Alpha) Using the namespace field of dataSourceRef requires the CrossNamespaceVolumeDataSource feature gate to be enabled. Type object Required kind name Property Type Description apiGroup string APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required. kind string Kind is the type of resource being referenced name string Name is the name of resource being referenced namespace string Namespace is the namespace of resource being referenced Note that when a namespace is specified, a gateway.networking.k8s.io/ReferenceGrant object is required in the referent namespace to allow that namespace's owner to accept the reference. See the ReferenceGrant documentation for details. (Alpha) This field requires the CrossNamespaceVolumeDataSource feature gate to be enabled. 2.1.297. .spec.volumes[].ephemeral.volumeClaimTemplate.spec.resources Description resources represents the minimum resources the volume should have. If RecoverVolumeExpansionFailure feature is enabled users are allowed to specify resource requirements that are lower than value but must still be higher than capacity recorded in the status field of the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources Type object Property Type Description claims array Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. claims[] object ResourceClaim references one entry in PodSpec.ResourceClaims. limits integer-or-string Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ requests integer-or-string Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. Requests cannot exceed Limits. 
More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ 2.1.298. .spec.volumes[].ephemeral.volumeClaimTemplate.spec.resources.claims Description Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. Type array 2.1.299. .spec.volumes[].ephemeral.volumeClaimTemplate.spec.resources.claims[] Description ResourceClaim references one entry in PodSpec.ResourceClaims. Type object Required name Property Type Description name string Name must match the name of one entry in pod.spec.resourceClaims of the Pod where this field is used. It makes that resource available inside a container. 2.1.300. .spec.volumes[].ephemeral.volumeClaimTemplate.spec.selector Description selector is a label query over volumes to consider for binding. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 2.1.301. .spec.volumes[].ephemeral.volumeClaimTemplate.spec.selector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 2.1.302. .spec.volumes[].ephemeral.volumeClaimTemplate.spec.selector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 2.1.303. .spec.volumes[].fc Description fc represents a Fibre Channel resource that is attached to a kubelet's host machine and then exposed to the pod. Type object Property Type Description fsType string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. TODO: how do we prevent errors in the filesystem from compromising the machine lun integer lun is Optional: FC target lun number readOnly boolean readOnly is Optional: Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. targetWWNs array (string) targetWWNs is Optional: FC target worldwide names (WWNs) wwids array (string) wwids Optional: FC volume world wide identifiers (wwids) Either wwids or combination of targetWWNs and lun must be set, but not both simultaneously. 2.1.304. .spec.volumes[].flexVolume Description flexVolume represents a generic volume resource that is provisioned/attached using an exec based plugin. 
Type object Required driver Property Type Description driver string driver is the name of the driver to use for this volume. fsType string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". The default filesystem depends on FlexVolume script. options object (string) options is Optional: this field holds extra command options if any. readOnly boolean readOnly is Optional: defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. secretRef object secretRef is Optional: secretRef is reference to the secret object containing sensitive information to pass to the plugin scripts. This may be empty if no secret object is specified. If the secret object contains more than one secret, all secrets are passed to the plugin scripts. 2.1.305. .spec.volumes[].flexVolume.secretRef Description secretRef is Optional: secretRef is reference to the secret object containing sensitive information to pass to the plugin scripts. This may be empty if no secret object is specified. If the secret object contains more than one secret, all secrets are passed to the plugin scripts. Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? 2.1.306. .spec.volumes[].flocker Description flocker represents a Flocker volume attached to a kubelet's host machine. This depends on the Flocker control service being running Type object Property Type Description datasetName string datasetName is Name of the dataset stored as metadata name on the dataset for Flocker should be considered as deprecated datasetUUID string datasetUUID is the UUID of the dataset. This is unique identifier of a Flocker dataset 2.1.307. .spec.volumes[].gcePersistentDisk Description gcePersistentDisk represents a GCE Disk resource that is attached to a kubelet's host machine and then exposed to the pod. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk Type object Required pdName Property Type Description fsType string fsType is filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk TODO: how do we prevent errors in the filesystem from compromising the machine partition integer partition is the partition in the volume that you want to mount. If omitted, the default is to mount by volume name. Examples: For volume /dev/sda1, you specify the partition as "1". Similarly, the volume partition for /dev/sda is "0" (or you can leave the property empty). More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk pdName string pdName is unique name of the PD resource in GCE. Used to identify the disk in GCE. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk readOnly boolean readOnly here will force the ReadOnly setting in VolumeMounts. Defaults to false. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk 2.1.308. .spec.volumes[].gitRepo Description gitRepo represents a git repository at a particular revision. DEPRECATED: GitRepo is deprecated. 
To provision a container with a git repo, mount an EmptyDir into an InitContainer that clones the repo using git, then mount the EmptyDir into the Pod's container. Type object Required repository Property Type Description directory string directory is the target directory name. Must not contain or start with '..'. If '.' is supplied, the volume directory will be the git repository. Otherwise, if specified, the volume will contain the git repository in the subdirectory with the given name. repository string repository is the URL revision string revision is the commit hash for the specified revision. 2.1.309. .spec.volumes[].glusterfs Description glusterfs represents a Glusterfs mount on the host that shares a pod's lifetime. More info: https://examples.k8s.io/volumes/glusterfs/README.md Type object Required endpoints path Property Type Description endpoints string endpoints is the endpoint name that details Glusterfs topology. More info: https://examples.k8s.io/volumes/glusterfs/README.md#create-a-pod path string path is the Glusterfs volume path. More info: https://examples.k8s.io/volumes/glusterfs/README.md#create-a-pod readOnly boolean readOnly here will force the Glusterfs volume to be mounted with read-only permissions. Defaults to false. More info: https://examples.k8s.io/volumes/glusterfs/README.md#create-a-pod 2.1.310. .spec.volumes[].hostPath Description hostPath represents a pre-existing file or directory on the host machine that is directly exposed to the container. This is generally used for system agents or other privileged things that are allowed to see the host machine. Most containers will NOT need this. More info: https://kubernetes.io/docs/concepts/storage/volumes#hostpath --- TODO(jonesdl) We need to restrict who can use host directory mounts and who can/can not mount host directories as read/write. Type object Required path Property Type Description path string path of the directory on the host. If the path is a symlink, it will follow the link to the real path. More info: https://kubernetes.io/docs/concepts/storage/volumes#hostpath type string type for HostPath Volume Defaults to "" More info: https://kubernetes.io/docs/concepts/storage/volumes#hostpath 2.1.311. .spec.volumes[].iscsi Description iscsi represents an ISCSI Disk resource that is attached to a kubelet's host machine and then exposed to the pod. More info: https://examples.k8s.io/volumes/iscsi/README.md Type object Required iqn lun targetPortal Property Type Description chapAuthDiscovery boolean chapAuthDiscovery defines whether support iSCSI Discovery CHAP authentication chapAuthSession boolean chapAuthSession defines whether support iSCSI Session CHAP authentication fsType string fsType is the filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#iscsi TODO: how do we prevent errors in the filesystem from compromising the machine initiatorName string initiatorName is the custom iSCSI Initiator Name. If initiatorName is specified with iscsiInterface simultaneously, new iSCSI interface <target portal>:<volume name> will be created for the connection. iqn string iqn is the target iSCSI Qualified Name. iscsiInterface string iscsiInterface is the interface Name that uses an iSCSI transport. Defaults to 'default' (tcp). lun integer lun represents iSCSI Target Lun number. 
portals array (string) portals is the iSCSI Target Portal List. The portal is either an IP or ip_addr:port if the port is other than default (typically TCP ports 860 and 3260). readOnly boolean readOnly here will force the ReadOnly setting in VolumeMounts. Defaults to false. secretRef object secretRef is the CHAP Secret for iSCSI target and initiator authentication targetPortal string targetPortal is iSCSI Target Portal. The Portal is either an IP or ip_addr:port if the port is other than default (typically TCP ports 860 and 3260). 2.1.312. .spec.volumes[].iscsi.secretRef Description secretRef is the CHAP Secret for iSCSI target and initiator authentication Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? 2.1.313. .spec.volumes[].nfs Description nfs represents an NFS mount on the host that shares a pod's lifetime More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs Type object Required path server Property Type Description path string path that is exported by the NFS server. More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs readOnly boolean readOnly here will force the NFS export to be mounted with read-only permissions. Defaults to false. More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs server string server is the hostname or IP address of the NFS server. More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs 2.1.314. .spec.volumes[].persistentVolumeClaim Description persistentVolumeClaimVolumeSource represents a reference to a PersistentVolumeClaim in the same namespace. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims Type object Required claimName Property Type Description claimName string claimName is the name of a PersistentVolumeClaim in the same namespace as the pod using this volume. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims readOnly boolean readOnly Will force the ReadOnly setting in VolumeMounts. Default false. 2.1.315. .spec.volumes[].photonPersistentDisk Description photonPersistentDisk represents a PhotonController persistent disk attached and mounted on kubelets host machine Type object Required pdID Property Type Description fsType string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. pdID string pdID is the ID that identifies Photon Controller persistent disk 2.1.316. .spec.volumes[].portworxVolume Description portworxVolume represents a portworx volume attached and mounted on kubelets host machine Type object Required volumeID Property Type Description fsType string fSType represents the filesystem type to mount Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs". Implicitly inferred to be "ext4" if unspecified. readOnly boolean readOnly defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. volumeID string volumeID uniquely identifies a Portworx volume 2.1.317. .spec.volumes[].projected Description projected items for all in one resources secrets, configmaps, and downward API Type object Property Type Description defaultMode integer defaultMode are the mode bits used to set permissions on created files by default. 
Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. sources array sources is the list of volume projections sources[] object Projection that may be projected along with other supported volume types 2.1.318. .spec.volumes[].projected.sources Description sources is the list of volume projections Type array 2.1.319. .spec.volumes[].projected.sources[] Description Projection that may be projected along with other supported volume types Type object Property Type Description configMap object configMap information about the configMap data to project downwardAPI object downwardAPI information about the downwardAPI data to project secret object secret information about the secret data to project serviceAccountToken object serviceAccountToken is information about the serviceAccountToken data to project 2.1.320. .spec.volumes[].projected.sources[].configMap Description configMap information about the configMap data to project Type object Property Type Description items array items if unspecified, each key-value pair in the Data field of the referenced ConfigMap will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the ConfigMap, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. items[] object Maps a string key to a path within a volume. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean optional specify whether the ConfigMap or its keys must be defined 2.1.321. .spec.volumes[].projected.sources[].configMap.items Description items if unspecified, each key-value pair in the Data field of the referenced ConfigMap will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the ConfigMap, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. Type array 2.1.322. .spec.volumes[].projected.sources[].configMap.items[] Description Maps a string key to a path within a volume. Type object Required key path Property Type Description key string key is the key to project. mode integer mode is Optional: mode bits used to set permissions on this file. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. path string path is the relative path of the file to map the key to. May not be an absolute path. May not contain the path element '..'. May not start with the string '..'. 2.1.323. 
.spec.volumes[].projected.sources[].downwardAPI Description downwardAPI information about the downwardAPI data to project Type object Property Type Description items array Items is a list of DownwardAPIVolume file items[] object DownwardAPIVolumeFile represents information to create the file containing the pod field 2.1.324. .spec.volumes[].projected.sources[].downwardAPI.items Description Items is a list of DownwardAPIVolume file Type array 2.1.325. .spec.volumes[].projected.sources[].downwardAPI.items[] Description DownwardAPIVolumeFile represents information to create the file containing the pod field Type object Required path Property Type Description fieldRef object Required: Selects a field of the pod: only annotations, labels, name and namespace are supported. mode integer Optional: mode bits used to set permissions on this file, must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. path string Required: Path is the relative path name of the file to be created. Must not be absolute or contain the '..' path. Must be utf-8 encoded. The first item of the relative path must not start with '..' resourceFieldRef object Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, requests.cpu and requests.memory) are currently supported. 2.1.326. .spec.volumes[].projected.sources[].downwardAPI.items[].fieldRef Description Required: Selects a field of the pod: only annotations, labels, name and namespace are supported. Type object Required fieldPath Property Type Description apiVersion string Version of the schema the FieldPath is written in terms of, defaults to "v1". fieldPath string Path of the field to select in the specified API version. 2.1.327. .spec.volumes[].projected.sources[].downwardAPI.items[].resourceFieldRef Description Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, requests.cpu and requests.memory) are currently supported. Type object Required resource Property Type Description containerName string Container name: required for volumes, optional for env vars divisor integer-or-string Specifies the output format of the exposed resources, defaults to "1" resource string Required: resource to select 2.1.328. .spec.volumes[].projected.sources[].secret Description secret information about the secret data to project Type object Property Type Description items array items if unspecified, each key-value pair in the Data field of the referenced Secret will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the Secret, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. items[] object Maps a string key to a path within a volume. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean optional field specify whether the Secret or its key must be defined 2.1.329. 
.spec.volumes[].projected.sources[].secret.items Description items if unspecified, each key-value pair in the Data field of the referenced Secret will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the Secret, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. Type array 2.1.330. .spec.volumes[].projected.sources[].secret.items[] Description Maps a string key to a path within a volume. Type object Required key path Property Type Description key string key is the key to project. mode integer mode is Optional: mode bits used to set permissions on this file. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. path string path is the relative path of the file to map the key to. May not be an absolute path. May not contain the path element '..'. May not start with the string '..'. 2.1.331. .spec.volumes[].projected.sources[].serviceAccountToken Description serviceAccountToken is information about the serviceAccountToken data to project Type object Required path Property Type Description audience string audience is the intended audience of the token. A recipient of a token must identify itself with an identifier specified in the audience of the token, and otherwise should reject the token. The audience defaults to the identifier of the apiserver. expirationSeconds integer expirationSeconds is the requested duration of validity of the service account token. As the token approaches expiration, the kubelet volume plugin will proactively rotate the service account token. The kubelet will start trying to rotate the token if the token is older than 80 percent of its time to live or if the token is older than 24 hours. Defaults to 1 hour and must be at least 10 minutes. path string path is the path relative to the mount point of the file to project the token into. 2.1.332. .spec.volumes[].quobyte Description quobyte represents a Quobyte mount on the host that shares a pod's lifetime Type object Required registry volume Property Type Description group string group to map volume access to Default is no group readOnly boolean readOnly here will force the Quobyte volume to be mounted with read-only permissions. Defaults to false. registry string registry represents a single or multiple Quobyte Registry services specified as a string as host:port pair (multiple entries are separated with commas) which acts as the central registry for volumes tenant string tenant owning the given Quobyte volume in the Backend Used with dynamically provisioned Quobyte volumes, value is set by the plugin user string user to map volume access to Defaults to serviceaccount user volume string volume is a string that references an already created Quobyte volume by name. 2.1.333. .spec.volumes[].rbd Description rbd represents a Rados Block Device mount on the host that shares a pod's lifetime.
More info: https://examples.k8s.io/volumes/rbd/README.md Type object Required image monitors Property Type Description fsType string fsType is the filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#rbd TODO: how do we prevent errors in the filesystem from compromising the machine image string image is the rados image name. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it keyring string keyring is the path to key ring for RBDUser. Default is /etc/ceph/keyring. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it monitors array (string) monitors is a collection of Ceph monitors. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it pool string pool is the rados pool name. Default is rbd. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it readOnly boolean readOnly here will force the ReadOnly setting in VolumeMounts. Defaults to false. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it secretRef object secretRef is name of the authentication secret for RBDUser. If provided overrides keyring. Default is nil. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it user string user is the rados user name. Default is admin. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it 2.1.334. .spec.volumes[].rbd.secretRef Description secretRef is name of the authentication secret for RBDUser. If provided overrides keyring. Default is nil. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? 2.1.335. .spec.volumes[].scaleIO Description scaleIO represents a ScaleIO persistent volume attached and mounted on Kubernetes nodes. Type object Required gateway secretRef system Property Type Description fsType string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Default is "xfs". gateway string gateway is the host address of the ScaleIO API Gateway. protectionDomain string protectionDomain is the name of the ScaleIO Protection Domain for the configured storage. readOnly boolean readOnly Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. secretRef object secretRef references to the secret for ScaleIO user and other sensitive information. If this is not provided, Login operation will fail. sslEnabled boolean sslEnabled Flag enable/disable SSL communication with Gateway, default false storageMode string storageMode indicates whether the storage for a volume should be ThickProvisioned or ThinProvisioned. Default is ThinProvisioned. storagePool string storagePool is the ScaleIO Storage Pool associated with the protection domain. system string system is the name of the storage system as configured in ScaleIO. volumeName string volumeName is the name of a volume already created in the ScaleIO system that is associated with this volume source. 2.1.336. .spec.volumes[].scaleIO.secretRef Description secretRef references to the secret for ScaleIO user and other sensitive information. 
If this is not provided, Login operation will fail. Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? 2.1.337. .spec.volumes[].secret Description secret represents a secret that should populate this volume. More info: https://kubernetes.io/docs/concepts/storage/volumes#secret Type object Property Type Description defaultMode integer defaultMode is Optional: mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Defaults to 0644. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. items array items If unspecified, each key-value pair in the Data field of the referenced Secret will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the Secret, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. items[] object Maps a string key to a path within a volume. optional boolean optional field specify whether the Secret or its keys must be defined secretName string secretName is the name of the secret in the pod's namespace to use. More info: https://kubernetes.io/docs/concepts/storage/volumes#secret 2.1.338. .spec.volumes[].secret.items Description items If unspecified, each key-value pair in the Data field of the referenced Secret will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the Secret, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. Type array 2.1.339. .spec.volumes[].secret.items[] Description Maps a string key to a path within a volume. Type object Required key path Property Type Description key string key is the key to project. mode integer mode is Optional: mode bits used to set permissions on this file. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. path string path is the relative path of the file to map the key to. May not be an absolute path. May not contain the path element '..'. May not start with the string '..'. 2.1.340. .spec.volumes[].storageos Description storageOS represents a StorageOS volume attached and mounted on Kubernetes nodes. Type object Property Type Description fsType string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. readOnly boolean readOnly defaults to false (read/write). 
ReadOnly here will force the ReadOnly setting in VolumeMounts. secretRef object secretRef specifies the secret to use for obtaining the StorageOS API credentials. If not specified, default values will be attempted. volumeName string volumeName is the human-readable name of the StorageOS volume. Volume names are only unique within a namespace. volumeNamespace string volumeNamespace specifies the scope of the volume within StorageOS. If no namespace is specified then the Pod's namespace will be used. This allows the Kubernetes name scoping to be mirrored within StorageOS for tighter integration. Set VolumeName to any name to override the default behaviour. Set to "default" if you are not using namespaces within StorageOS. Namespaces that do not pre-exist within StorageOS will be created. 2.1.341. .spec.volumes[].storageos.secretRef Description secretRef specifies the secret to use for obtaining the StorageOS API credentials. If not specified, default values will be attempted. Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? 2.1.342. .spec.volumes[].vsphereVolume Description vsphereVolume represents a vSphere volume attached and mounted on kubelets host machine Type object Required volumePath Property Type Description fsType string fsType is filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. storagePolicyID string storagePolicyID is the storage Policy Based Management (SPBM) profile ID associated with the StoragePolicyName. storagePolicyName string storagePolicyName is the storage Policy Based Management (SPBM) profile name. volumePath string volumePath is the path that identifies vSphere volume vmdk 2.1.343. .spec.web Description Defines the web command line flags when starting Alertmanager. Type object Property Type Description getConcurrency integer Maximum number of GET requests processed concurrently. This corresponds to the Alertmanager's --web.get-concurrency flag. httpConfig object Defines HTTP parameters for web server. timeout integer Timeout for HTTP requests. This corresponds to the Alertmanager's --web.timeout flag. tlsConfig object Defines the TLS parameters for HTTPS. 2.1.344. .spec.web.httpConfig Description Defines HTTP parameters for web server. Type object Property Type Description headers object List of headers that can be added to HTTP responses. http2 boolean Enable HTTP/2 support. Note that HTTP/2 is only supported with TLS. When TLSConfig is not configured, HTTP/2 will be disabled. Whenever the value of the field changes, a rolling update will be triggered. 2.1.345. .spec.web.httpConfig.headers Description List of headers that can be added to HTTP responses. Type object Property Type Description contentSecurityPolicy string Set the Content-Security-Policy header to HTTP responses. Unset if blank. strictTransportSecurity string Set the Strict-Transport-Security header to HTTP responses. Unset if blank. Please make sure that you use this with care as this header might force browsers to load Prometheus and the other applications hosted on the same domain and subdomains over HTTPS. https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Strict-Transport-Security xContentTypeOptions string Set the X-Content-Type-Options header to HTTP responses. Unset if blank. Accepted value is nosniff. 
https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/X-Content-Type-Options xFrameOptions string Set the X-Frame-Options header to HTTP responses. Unset if blank. Accepted values are deny and sameorigin. https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/X-Frame-Options xXSSProtection string Set the X-XSS-Protection header to all responses. Unset if blank. https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/X-XSS-Protection 2.1.346. .spec.web.tlsConfig Description Defines the TLS parameters for HTTPS. Type object Required cert keySecret Property Type Description cert object Contains the TLS certificate for the server. cipherSuites array (string) List of supported cipher suites for TLS versions up to TLS 1.2. If empty, Go default cipher suites are used. Available cipher suites are documented in the go documentation: https://golang.org/pkg/crypto/tls/#pkg-constants clientAuthType string Server policy for client authentication. Maps to ClientAuth Policies. For more detail on clientAuth options: https://golang.org/pkg/crypto/tls/#ClientAuthType client_ca object Contains the CA certificate for client certificate authentication to the server. curvePreferences array (string) Elliptic curves that will be used in an ECDHE handshake, in preference order. Available curves are documented in the go documentation: https://golang.org/pkg/crypto/tls/#CurveID keySecret object Secret containing the TLS key for the server. maxVersion string Maximum TLS version that is acceptable. Defaults to TLS13. minVersion string Minimum TLS version that is acceptable. Defaults to TLS12. preferServerCipherSuites boolean Controls whether the server selects the client's most preferred cipher suite, or the server's most preferred cipher suite. If true then the server's preference, as expressed in the order of elements in cipherSuites, is used. 2.1.347. .spec.web.tlsConfig.cert Description Contains the TLS certificate for the server. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 2.1.348. .spec.web.tlsConfig.cert.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap or its key must be defined 2.1.349. .spec.web.tlsConfig.cert.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 2.1.350. .spec.web.tlsConfig.client_ca Description Contains the CA certificate for client certificate authentication to the server. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 2.1.351. .spec.web.tlsConfig.client_ca.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. 
More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap or its key must be defined 2.1.352. .spec.web.tlsConfig.client_ca.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 2.1.353. .spec.web.tlsConfig.keySecret Description Secret containing the TLS key for the server. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 2.1.354. .status Description Most recent observed status of the Alertmanager cluster. Read-only. More info: https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/api-conventions.md#spec-and-status Type object Required availableReplicas paused replicas unavailableReplicas updatedReplicas Property Type Description availableReplicas integer Total number of available pods (ready for at least minReadySeconds) targeted by this Alertmanager cluster. conditions array The current state of the Alertmanager object. conditions[] object Condition represents the state of the resources associated with the Prometheus, Alertmanager or ThanosRuler resource. paused boolean Represents whether any actions on the underlying managed objects are being performed. Only delete actions will be performed. replicas integer Total number of non-terminated pods targeted by this Alertmanager object (their labels match the selector). unavailableReplicas integer Total number of unavailable pods targeted by this Alertmanager object. updatedReplicas integer Total number of non-terminated pods targeted by this Alertmanager object that have the desired version spec. 2.1.355. .status.conditions Description The current state of the Alertmanager object. Type array 2.1.356. .status.conditions[] Description Condition represents the state of the resources associated with the Prometheus, Alertmanager or ThanosRuler resource. Type object Required lastTransitionTime status type Property Type Description lastTransitionTime string lastTransitionTime is the time of the last update to the current status property. message string Human-readable message indicating details for the condition's last transition. observedGeneration integer ObservedGeneration represents the .metadata.generation that the condition was set based upon. For instance, if .metadata.generation is currently 12, but the .status.conditions[].observedGeneration is 9, the condition is out of date with respect to the current state of the instance. reason string Reason for the condition's last transition. status string Status of the condition. type string Type of the condition being reported. 2.2. 
API endpoints The following API endpoints are available: /apis/monitoring.coreos.com/v1/alertmanagers GET : list objects of kind Alertmanager /apis/monitoring.coreos.com/v1/namespaces/{namespace}/alertmanagers DELETE : delete collection of Alertmanager GET : list objects of kind Alertmanager POST : create an Alertmanager /apis/monitoring.coreos.com/v1/namespaces/{namespace}/alertmanagers/{name} DELETE : delete an Alertmanager GET : read the specified Alertmanager PATCH : partially update the specified Alertmanager PUT : replace the specified Alertmanager /apis/monitoring.coreos.com/v1/namespaces/{namespace}/alertmanagers/{name}/status GET : read status of the specified Alertmanager PATCH : partially update status of the specified Alertmanager PUT : replace status of the specified Alertmanager 2.2.1. /apis/monitoring.coreos.com/v1/alertmanagers HTTP method GET Description list objects of kind Alertmanager Table 2.1. HTTP responses HTTP code Response body 200 - OK AlertmanagerList schema 401 - Unauthorized Empty 2.2.2. /apis/monitoring.coreos.com/v1/namespaces/{namespace}/alertmanagers HTTP method DELETE Description delete collection of Alertmanager Table 2.2. HTTP responses HTTP code Response body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind Alertmanager Table 2.3. HTTP responses HTTP code Response body 200 - OK AlertmanagerList schema 401 - Unauthorized Empty HTTP method POST Description create an Alertmanager Table 2.4. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.5. Body parameters Parameter Type Description body Alertmanager schema Table 2.6. HTTP responses HTTP code Response body 200 - OK Alertmanager schema 201 - Created Alertmanager schema 202 - Accepted Alertmanager schema 401 - Unauthorized Empty 2.2.3. /apis/monitoring.coreos.com/v1/namespaces/{namespace}/alertmanagers/{name} Table 2.7. Global path parameters Parameter Type Description name string name of the Alertmanager HTTP method DELETE Description delete an Alertmanager Table 2.8. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request.
Valid values are: - All: all dry run stages will be processed Table 2.9. HTTP responses HTTP code Response body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified Alertmanager Table 2.10. HTTP responses HTTP code Response body 200 - OK Alertmanager schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified Alertmanager Table 2.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.12. HTTP responses HTTP code Response body 200 - OK Alertmanager schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified Alertmanager Table 2.13. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.14. Body parameters Parameter Type Description body Alertmanager schema Table 2.15. HTTP responses HTTP code Response body 200 - OK Alertmanager schema 201 - Created Alertmanager schema 401 - Unauthorized Empty 2.2.4. /apis/monitoring.coreos.com/v1/namespaces/{namespace}/alertmanagers/{name}/status Table 2.16.
Global path parameters Parameter Type Description name string name of the Alertmanager HTTP method GET Description read status of the specified Alertmanager Table 2.17. HTTP responses HTTP code Response body 200 - OK Alertmanager schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified Alertmanager Table 2.18. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.19. HTTP responses HTTP code Response body 200 - OK Alertmanager schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified Alertmanager Table 2.20. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.21. Body parameters Parameter Type Description body Alertmanager schema Table 2.22. HTTP responses HTTP code Response body 200 - OK Alertmanager schema 201 - Created Alertmanager schema 401 - Unauthorized Empty
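The field descriptions above all map onto a single Alertmanager manifest. The following is a minimal, illustrative sketch only, not a reference configuration: the metadata.name value, the alertmanager-web-tls Secret, the alertmanager-extra-config ConfigMap, the mount path, and the existing-snapshot VolumeSnapshot are hypothetical placeholders, and the ephemeral volume assumes a provisioner and snapshot support of the kind described for dataSourceRef.

apiVersion: monitoring.coreos.com/v1
kind: Alertmanager
metadata:
  name: example                           # hypothetical name
spec:
  replicas: 3
  web:
    httpConfig:
      http2: true                         # HTTP/2 requires the TLS configuration below
    tlsConfig:
      cert:
        secret:
          name: alertmanager-web-tls      # hypothetical Secret holding the server certificate
          key: tls.crt
      keySecret:
        name: alertmanager-web-tls        # hypothetical Secret holding the TLS private key
        key: tls.key
  volumes:
    - name: extra-config
      projected:
        sources:
          - configMap:
              name: alertmanager-extra-config   # hypothetical ConfigMap to project
          - serviceAccountToken:
              path: token
              expirationSeconds: 3600
    - name: scratch
      ephemeral:
        volumeClaimTemplate:
          spec:
            accessModes:
              - ReadWriteOnce
            resources:
              requests:
                storage: 1Gi
            dataSourceRef:
              apiGroup: snapshot.storage.k8s.io
              kind: VolumeSnapshot
              name: existing-snapshot     # hypothetical snapshot used to populate the volume
  volumeMounts:
    - name: extra-config
      mountPath: /etc/alertmanager/extra  # hypothetical mount path

A manifest of this shape can be submitted to the create endpoint listed above (POST /apis/monitoring.coreos.com/v1/namespaces/{namespace}/alertmanagers), for example with oc create -f <file>.yaml, and the observed state of the resulting cluster is then reported back through the .status fields described in this chapter.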
Chapter 3. Setting up the environment for an OpenShift installation 3.1. Installing RHEL on the provisioner node With the configuration of the prerequisites complete, the next step is to install RHEL 9.x on the provisioner node. The installer uses the provisioner node as the orchestrator while installing the OpenShift Container Platform cluster. For the purposes of this document, installing RHEL on the provisioner node is out of scope. However, options include but are not limited to using a RHEL Satellite server, PXE, or installation media. 3.2. Preparing the provisioner node for OpenShift Container Platform installation Perform the following steps to prepare the environment. Procedure Log in to the provisioner node via ssh . Create a non-root user ( kni ) and provide that user with sudo privileges: # useradd kni # passwd kni # echo "kni ALL=(root) NOPASSWD:ALL" | tee -a /etc/sudoers.d/kni # chmod 0440 /etc/sudoers.d/kni Create an ssh key for the new user: # su - kni -c "ssh-keygen -t ed25519 -f /home/kni/.ssh/id_rsa -N ''" Log in as the new user on the provisioner node: # su - kni Use Red Hat Subscription Manager to register the provisioner node: USD sudo subscription-manager register --username=<user> --password=<pass> --auto-attach USD sudo subscription-manager repos --enable=rhel-9-for-<architecture>-appstream-rpms --enable=rhel-9-for-<architecture>-baseos-rpms Note For more information about Red Hat Subscription Manager, see Using and Configuring Red Hat Subscription Manager . Install the following packages: USD sudo dnf install -y libvirt qemu-kvm mkisofs python3-devel jq ipmitool Modify the user to add the libvirt group to the newly created user: USD sudo usermod --append --groups libvirt <user> Restart firewalld and enable the http service: USD sudo systemctl start firewalld USD sudo firewall-cmd --zone=public --add-service=http --permanent USD sudo firewall-cmd --reload Start and enable the libvirtd service: USD sudo systemctl enable libvirtd --now Create the default storage pool and start it: USD sudo virsh pool-define-as --name default --type dir --target /var/lib/libvirt/images USD sudo virsh pool-start default USD sudo virsh pool-autostart default Create a pull-secret.txt file: USD vim pull-secret.txt In a web browser, navigate to Install OpenShift on Bare Metal with installer-provisioned infrastructure . Click Copy pull secret . Paste the contents into the pull-secret.txt file and save the contents in the kni user's home directory. 3.3. Checking NTP server synchronization The OpenShift Container Platform installation program installs the chrony Network Time Protocol (NTP) service on the cluster nodes. To complete installation, each node must have access to an NTP time server. You can verify NTP server synchronization by using the chrony service. For disconnected clusters, you must configure the NTP servers on the control plane nodes. For more information see the Additional resources section. Prerequisites You installed the chrony package on the target node. Procedure Log in to the node by using the ssh command.
View the NTP servers available to the node by running the following command: USD chronyc sources Example output MS Name/IP address Stratum Poll Reach LastRx Last sample =============================================================================== ^+ time.cloudflare.com 3 10 377 187 -209us[ -209us] +/- 32ms ^+ t1.time.ir2.yahoo.com 2 10 377 185 -4382us[-4382us] +/- 23ms ^+ time.cloudflare.com 3 10 377 198 -996us[-1220us] +/- 33ms ^* brenbox.westnet.ie 1 10 377 193 -9538us[-9761us] +/- 24ms Use the ping command to ensure that the node can access an NTP server, for example: USD ping time.cloudflare.com Example output PING time.cloudflare.com (162.159.200.123) 56(84) bytes of data. 64 bytes from time.cloudflare.com (162.159.200.123): icmp_seq=1 ttl=54 time=32.3 ms 64 bytes from time.cloudflare.com (162.159.200.123): icmp_seq=2 ttl=54 time=30.9 ms 64 bytes from time.cloudflare.com (162.159.200.123): icmp_seq=3 ttl=54 time=36.7 ms ... Additional resources Optional: Configuring NTP for disconnected clusters Network Time Protocol (NTP) 3.4. Configuring networking Before installation, you must configure the networking on the provisioner node. Installer-provisioned clusters deploy with a bare-metal bridge and network, and an optional provisioning bridge and network. Note You can also configure networking from the web console. Procedure Export the bare-metal network NIC name by running the following command: USD export PUB_CONN=<baremetal_nic_name> Configure the bare-metal network: Note The SSH connection might disconnect after executing these steps. For a network using DHCP, run the following command: USD sudo nohup bash -c " nmcli con down \"USDPUB_CONN\" nmcli con delete \"USDPUB_CONN\" # RHEL 8.1 appends the word \"System\" in front of the connection, delete in case it exists nmcli con down \"System USDPUB_CONN\" nmcli con delete \"System USDPUB_CONN\" nmcli connection add ifname baremetal type bridge <con_name> baremetal bridge.stp no 1 nmcli con add type bridge-slave ifname \"USDPUB_CONN\" master baremetal pkill dhclient;dhclient baremetal " 1 Replace <con_name> with the connection name. For a network using static IP addressing and no DHCP network, run the following command: USD sudo nohup bash -c " nmcli con down \"USDPUB_CONN\" nmcli con delete \"USDPUB_CONN\" # RHEL 8.1 appends the word \"System\" in front of the connection, delete in case it exists nmcli con down \"System USDPUB_CONN\" nmcli con delete \"System USDPUB_CONN\" nmcli connection add ifname baremetal type bridge con-name baremetal bridge.stp no ipv4.method manual ipv4.addr "x.x.x.x/yy" ipv4.gateway "a.a.a.a" ipv4.dns "b.b.b.b" 1 nmcli con add type bridge-slave ifname \"USDPUB_CONN\" master baremetal nmcli con up baremetal " 1 Replace <con_name> with the connection name. Replace x.x.x.x/yy with the IP address and CIDR for the network. Replace a.a.a.a with the network gateway. Replace b.b.b.b with the IP address of the DNS server. 
Optional: If you are deploying with a provisioning network, export the provisioning network NIC name by running the following command: USD export PROV_CONN=<prov_nic_name> Optional: If you are deploying with a provisioning network, configure the provisioning network by running the following command: USD sudo nohup bash -c " nmcli con down \"USDPROV_CONN\" nmcli con delete \"USDPROV_CONN\" nmcli connection add ifname provisioning type bridge con-name provisioning nmcli con add type bridge-slave ifname \"USDPROV_CONN\" master provisioning nmcli connection modify provisioning ipv6.addresses fd00:1101::1/64 ipv6.method manual nmcli con down provisioning nmcli con up provisioning " Note The SSH connection might disconnect after executing these steps. The IPv6 address can be any address that is not routable through the bare-metal network. Ensure that UEFI is enabled and UEFI PXE settings are set to the IPv6 protocol when using IPv6 addressing. Optional: If you are deploying with a provisioning network, configure the IPv4 address on the provisioning network connection by running the following command: USD nmcli connection modify provisioning ipv4.addresses 172.22.0.254/24 ipv4.method manual SSH back into the provisioner node (if required) by running the following command: # ssh kni@provisioner.<cluster-name>.<domain> Verify that the connection bridges have been properly created by running the following command: USD sudo nmcli con show Example output NAME UUID TYPE DEVICE baremetal 4d5133a5-8351-4bb9-bfd4-3af264801530 bridge baremetal provisioning 43942805-017f-4d7d-a2c2-7cb3324482ed bridge provisioning virbr0 d9bca40f-eee1-410b-8879-a2d4bb0465e7 bridge virbr0 bridge-slave-eno1 76a8ed50-c7e5-4999-b4f6-6d9014dd0812 ethernet eno1 bridge-slave-eno2 f31c3353-54b7-48de-893a-02d2b34c4736 ethernet eno2 3.5. Establishing communication between subnets In a typical OpenShift Container Platform cluster setup, all nodes, including the control plane and worker nodes, reside in the same network. However, for edge computing scenarios, it can be beneficial to locate worker nodes closer to the edge. This often involves using different network segments or subnets for the remote worker nodes than the subnet used by the control plane and local worker nodes. Such a setup can reduce latency for the edge and allow for enhanced scalability. However, the network must be configured properly before installing OpenShift Container Platform to ensure that the edge subnets containing the remote worker nodes can reach the subnet containing the control plane nodes and receive traffic from the control plane too. Important All control plane nodes must run in the same subnet. When using more than one subnet, you can also configure the Ingress VIP to run on the control plane nodes by using a manifest. See "Configuring network components to run on the control plane" for details. Deploying a cluster with multiple subnets requires using virtual media. This procedure details the network configuration required to allow the remote worker nodes in the second subnet to communicate effectively with the control plane nodes in the first subnet and to allow the control plane nodes in the first subnet to communicate effectively with the remote worker nodes in the second subnet. In this procedure, the cluster spans two subnets: The first subnet ( 10.0.0.0 ) contains the control plane and local worker nodes. The second subnet ( 192.168.0.0 ) contains the edge worker nodes. 
Procedure Configure the first subnet to communicate with the second subnet: Log in as root to a control plane node by running the following command: USD sudo su - Get the name of the network interface: # nmcli dev status Add a route to the second subnet ( 192.168.0.0 ) via the gateway: # nmcli connection modify <interface_name> +ipv4.routes "192.168.0.0/24 via <gateway>" Replace <interface_name> with the interface name. Replace <gateway> with the IP address of the actual gateway. Example # nmcli connection modify eth0 +ipv4.routes "192.168.0.0/24 via 192.168.0.1" Apply the changes: # nmcli connection up <interface_name> Replace <interface_name> with the interface name. Verify the routing table to ensure the route has been added successfully: # ip route Repeat the steps for each control plane node in the first subnet. Note Adjust the commands to match your actual interface names and gateway. Configure the second subnet to communicate with the first subnet: Log in as root to a remote worker node: USD sudo su - Get the name of the network interface: # nmcli dev status Add a route to the first subnet ( 10.0.0.0 ) via the gateway: # nmcli connection modify <interface_name> +ipv4.routes "10.0.0.0/24 via <gateway>" Replace <interface_name> with the interface name. Replace <gateway> with the IP address of the actual gateway. Example # nmcli connection modify eth0 +ipv4.routes "10.0.0.0/24 via 10.0.0.1" Apply the changes: # nmcli connection up <interface_name> Replace <interface_name> with the interface name. Verify the routing table to ensure the route has been added successfully: # ip route Repeat the steps for each worker node in the second subnet. Note Adjust the commands to match your actual interface names and gateway. After you have configured the networks, test the connectivity to ensure the remote worker nodes can reach the control plane nodes and the control plane nodes can reach the remote worker nodes. From the control plane nodes in the first subnet, ping a remote worker node in the second subnet: USD ping <remote_worker_node_ip_address> If the ping is successful, the control plane nodes in the first subnet can reach the remote worker nodes in the second subnet. If you do not receive a response, review the network configurations and repeat the procedure for the node. From the remote worker nodes in the second subnet, ping a control plane node in the first subnet: USD ping <control_plane_node_ip_address> If the ping is successful, the remote worker nodes in the second subnet can reach the control plane in the first subnet. If you do not receive a response, review the network configurations and repeat the procedure for the node. 3.6. Retrieving the OpenShift Container Platform installer Use the stable-4.x version of the installation program and your selected architecture to deploy the generally available stable version of OpenShift Container Platform: USD export VERSION=stable-4.13 USD export RELEASE_ARCH=<architecture> USD export RELEASE_IMAGE=USD(curl -s https://mirror.openshift.com/pub/openshift-v4/USDRELEASE_ARCH/clients/ocp/USDVERSION/release.txt | grep 'Pull From: quay.io' | awk -F ' ' '{print USD3}') 3.7. Extracting the OpenShift Container Platform installer After retrieving the installer, the next step is to extract it.
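Before extracting the installer, you can optionally confirm that the release image reference resolved to a non-empty value. This quick check is not part of the original procedure and uses only the variables exported in the previous section:
$ echo "VERSION=${VERSION} RELEASE_ARCH=${RELEASE_ARCH}"
$ echo "${RELEASE_IMAGE}"
If the second command prints an empty line, re-run the export commands and verify that the mirror URL is reachable from the provisioner node.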
Procedure Set the environment variables: USD export cmd=openshift-baremetal-install USD export pullsecret_file=~/pull-secret.txt USD export extract_dir=USD(pwd) Get the oc binary: USD curl -s https://mirror.openshift.com/pub/openshift-v4/clients/ocp/USDVERSION/openshift-client-linux.tar.gz | tar zxvf - oc Extract the installer: USD sudo cp oc /usr/local/bin USD oc adm release extract --registry-config "USD{pullsecret_file}" --command=USDcmd --to "USD{extract_dir}" USD{RELEASE_IMAGE} USD sudo cp openshift-baremetal-install /usr/local/bin 3.8. Optional: Creating an RHCOS images cache To employ image caching, you must download the Red Hat Enterprise Linux CoreOS (RHCOS) image used by the bootstrap VM to provision the cluster nodes. Image caching is optional, but it is especially useful when running the installation program on a network with limited bandwidth. Note The installation program no longer needs the clusterOSImage RHCOS image because the correct image is in the release payload. If you are running the installation program on a network with limited bandwidth and the RHCOS images download takes more than 15 to 20 minutes, the installation program will timeout. Caching images on a web server will help in such scenarios. Warning If you enable TLS for the HTTPD server, you must confirm the root certificate is signed by an authority trusted by the client and verify the trusted certificate chain between your OpenShift Container Platform hub and spoke clusters and the HTTPD server. Using a server configured with an untrusted certificate prevents the images from being downloaded to the image creation service. Using untrusted HTTPS servers is not supported. Install a container that contains the images. Procedure Install podman : USD sudo dnf install -y podman Open firewall port 8080 to be used for RHCOS image caching: USD sudo firewall-cmd --add-port=8080/tcp --zone=public --permanent USD sudo firewall-cmd --reload Create a directory to store the bootstraposimage : USD mkdir /home/kni/rhcos_image_cache Set the appropriate SELinux context for the newly created directory: USD sudo semanage fcontext -a -t httpd_sys_content_t "/home/kni/rhcos_image_cache(/.*)?" USD sudo restorecon -Rv /home/kni/rhcos_image_cache/ Get the URI for the RHCOS image that the installation program will deploy on the bootstrap VM: USD export RHCOS_QEMU_URI=USD(/usr/local/bin/openshift-baremetal-install coreos print-stream-json | jq -r --arg ARCH "USD(arch)" '.architectures[USDARCH].artifacts.qemu.formats["qcow2.gz"].disk.location') Get the name of the image that the installation program will deploy on the bootstrap VM: USD export RHCOS_QEMU_NAME=USD{RHCOS_QEMU_URI##*/} Get the SHA hash for the RHCOS image that will be deployed on the bootstrap VM: USD export RHCOS_QEMU_UNCOMPRESSED_SHA256=USD(/usr/local/bin/openshift-baremetal-install coreos print-stream-json | jq -r --arg ARCH "USD(arch)" '.architectures[USDARCH].artifacts.qemu.formats["qcow2.gz"].disk["uncompressed-sha256"]') Download the image and place it in the /home/kni/rhcos_image_cache directory: USD curl -L USD{RHCOS_QEMU_URI} -o /home/kni/rhcos_image_cache/USD{RHCOS_QEMU_NAME} Confirm SELinux type is of httpd_sys_content_t for the new file: USD ls -Z /home/kni/rhcos_image_cache Create the pod: USD podman run -d --name rhcos_image_cache \ 1 -v /home/kni/rhcos_image_cache:/var/www/html \ -p 8080:8080/tcp \ registry.access.redhat.com/ubi9/httpd-24 1 Creates a caching webserver with the name rhcos_image_cache . 
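Optional: You can confirm that the caching web server is serving the downloaded image before referencing it in the install-config.yaml file. This verification is not part of the original procedure; it assumes the pod publishes port 8080 on the provisioner host as shown in the preceding podman run command:
$ curl -I http://localhost:8080/${RHCOS_QEMU_NAME}
An HTTP 200 response with a Content-Length that matches the downloaded file indicates that the cache is ready.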
This pod serves the bootstrapOSImage image in the install-config.yaml file for deployment. Generate the bootstrapOSImage configuration: USD export BAREMETAL_IP=USD(ip addr show dev baremetal | awk '/inet /{print USD2}' | cut -d"/" -f1) USD export BOOTSTRAP_OS_IMAGE="http://USD{BAREMETAL_IP}:8080/USD{RHCOS_QEMU_NAME}?sha256=USD{RHCOS_QEMU_UNCOMPRESSED_SHA256}" USD echo " bootstrapOSImage=USD{BOOTSTRAP_OS_IMAGE}" Add the required configuration to the install-config.yaml file under platform.baremetal : platform: baremetal: bootstrapOSImage: <bootstrap_os_image> 1 1 Replace <bootstrap_os_image> with the value of USDBOOTSTRAP_OS_IMAGE . See the "Configuring the install-config.yaml file" section for additional details. 3.9. Setting the cluster node hostnames through DHCP On Red Hat Enterprise Linux CoreOS (RHCOS) machines, NetworkManager sets the hostnames. By default, DHCP provides the hostnames to NetworkManager , which is the recommended method. NetworkManager gets the hostnames through a reverse DNS lookup in the following cases: If DHCP does not provide the hostnames If you use kernel arguments to set the hostnames If you use another method to set the hostnames Reverse DNS lookup occurs after the network has been initialized on a node, and can increase the time it takes NetworkManager to set the hostname. Other system services can start prior to NetworkManager setting the hostname, which can cause those services to use a default hostname such as localhost . Tip You can avoid the delay in setting hostnames by using DHCP to provide the hostname for each cluster node. Additionally, setting the hostnames through DHCP can bypass manual DNS record name configuration errors in environments that have a DNS split-horizon implementation. 3.10. Configuring the install-config.yaml file 3.10.1. Configuring the install-config.yaml file The install-config.yaml file requires some additional details. Most of the information teaches the installation program and the resulting cluster enough about the available hardware that it is able to fully manage it. Note The installation program no longer needs the clusterOSImage RHCOS image because the correct image is in the release payload. Configure install-config.yaml . 
Change the appropriate variables to match the environment, including pullSecret and sshKey : apiVersion: v1 baseDomain: <domain> metadata: name: <cluster_name> networking: machineNetwork: - cidr: <public_cidr> networkType: OVNKubernetes compute: - name: worker replicas: 2 1 controlPlane: name: master replicas: 3 platform: baremetal: {} platform: baremetal: apiVIPs: - <api_ip> ingressVIPs: - <wildcard_ip> provisioningNetworkCIDR: <CIDR> bootstrapExternalStaticIP: <bootstrap_static_ip_address> 2 bootstrapExternalStaticGateway: <bootstrap_static_gateway> 3 hosts: - name: openshift-master-0 role: master bmc: address: ipmi://<out_of_band_ip> 4 username: <user> password: <password> bootMACAddress: <NIC1_mac_address> rootDeviceHints: deviceName: "<installation_disk_drive_path>" 5 - name: <openshift_master_1> role: master bmc: address: ipmi://<out_of_band_ip> username: <user> password: <password> bootMACAddress: <NIC1_mac_address> rootDeviceHints: deviceName: "<installation_disk_drive_path>" - name: <openshift_master_2> role: master bmc: address: ipmi://<out_of_band_ip> username: <user> password: <password> bootMACAddress: <NIC1_mac_address> rootDeviceHints: deviceName: "<installation_disk_drive_path>" - name: <openshift_worker_0> role: worker bmc: address: ipmi://<out_of_band_ip> username: <user> password: <password> bootMACAddress: <NIC1_mac_address> - name: <openshift_worker_1> role: worker bmc: address: ipmi://<out_of_band_ip> username: <user> password: <password> bootMACAddress: <NIC1_mac_address> rootDeviceHints: deviceName: "<installation_disk_drive_path>" pullSecret: '<pull_secret>' sshKey: '<ssh_pub_key>' 1 Scale the worker machines based on the number of worker nodes that are part of the OpenShift Container Platform cluster. Valid options for the replicas value are 0 and integers greater than or equal to 2 . Set the number of replicas to 0 to deploy a three-node cluster, which contains only three control plane machines. A three-node cluster is a smaller, more resource-efficient cluster that can be used for testing, development, and production. You cannot install the cluster with only one worker. 2 When deploying a cluster with static IP addresses, you must set the bootstrapExternalStaticIP configuration setting to specify the static IP address of the bootstrap VM when there is no DHCP server on the bare-metal network. 3 When deploying a cluster with static IP addresses, you must set the bootstrapExternalStaticGateway configuration setting to specify the gateway IP address for the bootstrap VM when there is no DHCP server on the bare-metal network. 4 See the BMC addressing sections for more options. 5 To set the path to the installation disk drive, enter the kernel name of the disk. For example, /dev/sda . Important Because the disk discovery order is not guaranteed, the kernel name of the disk can change across booting options for machines with multiple disks. For example, /dev/sda becomes /dev/sdb and vice versa. To avoid this issue, you must use persistent disk attributes, such as the disk World Wide Name (WWN) or /dev/disk/by-path/ . It is recommended to use the /dev/disk/by-path/<device_path> link to the storage location. To use the disk WWN, replace the deviceName parameter with the wwnWithExtension parameter. Depending on the parameter that you use, enter either of the following values: The disk name. For example, /dev/sda , or /dev/disk/by-path/ . The disk WWN. For example, "0x64cd98f04fde100024684cf3034da5c2" . 
Ensure that you enter the disk WWN value within quotes so that it is used as a string value and not a hexadecimal value. Failure to meet these requirements for the rootDeviceHints parameter might result in the following error: ironic-inspector inspection failed: No disks satisfied root device hints Note Before OpenShift Container Platform 4.12, the cluster installation program only accepted an IPv4 address or an IPv6 address for the apiVIP and ingressVIP configuration settings. In OpenShift Container Platform 4.12 and later, these configuration settings are deprecated. Instead, use a list format in the apiVIPs and ingressVIPs configuration settings to specify IPv4 addresses, IPv6 addresses, or both IP address formats. Create a directory to store the cluster configuration: USD mkdir ~/clusterconfigs Copy the install-config.yaml file to the new directory: USD cp install-config.yaml ~/clusterconfigs Ensure all bare metal nodes are powered off prior to installing the OpenShift Container Platform cluster: USD ipmitool -I lanplus -U <user> -P <password> -H <management-server-ip> power off Remove old bootstrap resources if any are left over from a deployment attempt: for i in USD(sudo virsh list | tail -n +3 | grep bootstrap | awk {'print USD2'}); do sudo virsh destroy USDi; sudo virsh undefine USDi; sudo virsh vol-delete USDi --pool USDi; sudo virsh vol-delete USDi.ign --pool USDi; sudo virsh pool-destroy USDi; sudo virsh pool-undefine USDi; done 3.10.2. Additional install-config parameters See the following tables for the required parameters, the hosts parameter, and the bmc parameter for the install-config.yaml file. Table 3.1. Required parameters Parameters Default Description baseDomain The domain name for the cluster. For example, example.com . bootMode UEFI The boot mode for a node. Options are legacy , UEFI , and UEFISecureBoot . If bootMode is not set, Ironic sets it while inspecting the node. bootstrapExternalStaticIP The static IP address for the bootstrap VM. You must set this value when deploying a cluster with static IP addresses when there is no DHCP server on the bare-metal network. bootstrapExternalStaticGateway The static IP address of the gateway for the bootstrap VM. You must set this value when deploying a cluster with static IP addresses when there is no DHCP server on the bare-metal network. sshKey The sshKey configuration setting contains the key in the ~/.ssh/id_rsa.pub file required to access the control plane nodes and worker nodes. Typically, this key is from the provisioner node. pullSecret The pullSecret configuration setting contains a copy of the pull secret downloaded from the Install OpenShift on Bare Metal page when preparing the provisioner node. The name to be given to the OpenShift Container Platform cluster. For example, openshift . The public CIDR (Classless Inter-Domain Routing) of the external network. For example, 10.0.0.0/24 . The OpenShift Container Platform cluster requires a name be provided for worker (or compute) nodes even if there are zero nodes. Replicas sets the number of worker (or compute) nodes in the OpenShift Container Platform cluster. The OpenShift Container Platform cluster requires a name for control plane (master) nodes. Replicas sets the number of control plane (master) nodes included as part of the OpenShift Container Platform cluster. provisioningNetworkInterface The name of the network interface on nodes connected to the provisioning network. 
For OpenShift Container Platform 4.9 and later releases, use the bootMACAddress configuration setting to enable Ironic to identify the IP address of the NIC instead of using the provisioningNetworkInterface configuration setting to identify the name of the NIC. defaultMachinePlatform The default configuration used for machine pools without a platform configuration. apiVIPs (Optional) The virtual IP address for Kubernetes API communication. This setting must either be provided in the install-config.yaml file as a reserved IP from the MachineNetwork or preconfigured in the DNS so that the default name resolves correctly. Use the virtual IP address and not the FQDN when adding a value to the apiVIPs configuration setting in the install-config.yaml file. The primary IP address must be from the IPv4 network when using dual stack networking. If not set, the installation program uses api.<cluster_name>.<base_domain> to derive the IP address from the DNS. Note Before OpenShift Container Platform 4.12, the cluster installation program only accepted an IPv4 address or an IPv6 address for the apiVIP configuration setting. From OpenShift Container Platform 4.12 or later, the apiVIP configuration setting is deprecated. Instead, use a list format for the apiVIPs configuration setting to specify an IPv4 address, an IPv6 address or both IP address formats. disableCertificateVerification False redfish and redfish-virtualmedia need this parameter to manage BMC addresses. The value should be True when using a self-signed certificate for BMC addresses. ingressVIPs (Optional) The virtual IP address for ingress traffic. This setting must either be provided in the install-config.yaml file as a reserved IP from the MachineNetwork or preconfigured in the DNS so that the default name resolves correctly. Use the virtual IP address and not the FQDN when adding a value to the ingressVIPs configuration setting in the install-config.yaml file. The primary IP address must be from the IPv4 network when using dual stack networking. If not set, the installation program uses test.apps.<cluster_name>.<base_domain> to derive the IP address from the DNS. Note Before OpenShift Container Platform 4.12, the cluster installation program only accepted an IPv4 address or an IPv6 address for the ingressVIP configuration setting. In OpenShift Container Platform 4.12 and later, the ingressVIP configuration setting is deprecated. Instead, use a list format for the ingressVIPs configuration setting to specify an IPv4 addresses, an IPv6 addresses or both IP address formats. Table 3.2. Optional Parameters Parameters Default Description provisioningDHCPRange 172.22.0.10,172.22.0.100 Defines the IP range for nodes on the provisioning network. provisioningNetworkCIDR 172.22.0.0/24 The CIDR for the network to use for provisioning. This option is required when not using the default address range on the provisioning network. clusterProvisioningIP The third IP address of the provisioningNetworkCIDR . The IP address within the cluster where the provisioning services run. Defaults to the third IP address of the provisioning subnet. For example, 172.22.0.3 . bootstrapProvisioningIP The second IP address of the provisioningNetworkCIDR . The IP address on the bootstrap VM where the provisioning services run while the installer is deploying the control plane (master) nodes. Defaults to the second IP address of the provisioning subnet. For example, 172.22.0.2 or 2620:52:0:1307::2 . 
externalBridge baremetal The name of the bare-metal bridge of the hypervisor attached to the bare-metal network. provisioningBridge provisioning The name of the provisioning bridge on the provisioner host attached to the provisioning network. architecture Defines the host architecture for your cluster. Valid values are amd64 or arm64 . defaultMachinePlatform The default configuration used for machine pools without a platform configuration. bootstrapOSImage A URL to override the default operating system image for the bootstrap node. The URL must contain a SHA-256 hash of the image. For example: https://mirror.openshift.com/rhcos-<version>-qemu.qcow2.gz?sha256=<uncompressed_sha256> . provisioningNetwork The provisioningNetwork configuration setting determines whether the cluster uses the provisioning network. If it does, the configuration setting also determines if the cluster manages the network. Disabled : Set this parameter to Disabled to disable the requirement for a provisioning network. When set to Disabled , you must only use virtual media based provisioning, or bring up the cluster using the assisted installer. If Disabled and using power management, BMCs must be accessible from the bare-metal network. If Disabled , you must provide two IP addresses on the bare-metal network that are used for the provisioning services. Managed : Set this parameter to Managed , which is the default, to fully manage the provisioning network, including DHCP, TFTP, and so on. Unmanaged : Set this parameter to Unmanaged to enable the provisioning network but take care of manual configuration of DHCP. Virtual media provisioning is recommended but PXE is still available if required. httpProxy Set this parameter to the appropriate HTTP proxy used within your environment. httpsProxy Set this parameter to the appropriate HTTPS proxy used within your environment. noProxy Set this parameter to the appropriate list of exclusions for proxy usage within your environment. Hosts The hosts parameter is a list of separate bare metal assets used to build the cluster. Table 3.3. Hosts Name Default Description name The name of the BareMetalHost resource to associate with the details. For example, openshift-master-0 . role The role of the bare metal node. Either master or worker . bmc Connection details for the baseboard management controller. See the BMC addressing section for additional details. bootMACAddress The MAC address of the NIC that the host uses for the provisioning network. Ironic retrieves the IP address using the bootMACAddress configuration setting. Then, it binds to the host. Note You must provide a valid MAC address from the host if you disabled the provisioning network. networkConfig Set this optional parameter to configure the network interface of a host. See "(Optional) Configuring host network interfaces" for additional details. 3.10.3. BMC addressing Most vendors support Baseboard Management Controller (BMC) addressing with the Intelligent Platform Management Interface (IPMI). IPMI does not encrypt communications. It is suitable for use within a data center over a secured or dedicated management network. Check with your vendor to see if they support Redfish network boot. Redfish delivers simple and secure management for converged, hybrid IT and the Software Defined Data Center (SDDC). Redfish is human readable and machine capable, and leverages common internet and web services standards to expose information directly to the modern tool chain. 
If your hardware does not support Redfish network boot, use IPMI. IPMI Hosts using IPMI use the ipmi://<out-of-band-ip>:<port> address format, which defaults to port 623 if not specified. The following example demonstrates an IPMI configuration within the install-config.yaml file. platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: ipmi://<out-of-band-ip> username: <user> password: <password> Important The provisioning network is required when PXE booting using IPMI for BMC addressing. It is not possible to PXE boot hosts without a provisioning network. If you deploy without a provisioning network, you must use a virtual media BMC addressing option such as redfish-virtualmedia or idrac-virtualmedia . See "Redfish virtual media for HPE iLO" in the "BMC addressing for HPE iLO" section or "Redfish virtual media for Dell iDRAC" in the "BMC addressing for Dell iDRAC" section for additional details. Redfish network boot To enable Redfish, use redfish:// or redfish+http:// to disable TLS. The installer requires both the hostname or the IP address and the path to the system ID. The following example demonstrates a Redfish configuration within the install-config.yaml file. platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: redfish://<out-of-band-ip>/redfish/v1/Systems/1 username: <user> password: <password> While it is recommended to have a certificate of authority for the out-of-band management addresses, you must include disableCertificateVerification: True in the bmc configuration if using self-signed certificates. The following example demonstrates a Redfish configuration using the disableCertificateVerification: True configuration parameter within the install-config.yaml file. platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: redfish://<out-of-band-ip>/redfish/v1/Systems/1 username: <user> password: <password> disableCertificateVerification: True Redfish APIs Several redfish API endpoints are called on your BMC when using the bare-metal installer-provisioned infrastructure. Important You need to ensure that your BMC supports all of the redfish APIs before installation. A quick way to probe the Redfish service before installation is shown in the example that follows.
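The following probe is an illustrative sketch rather than part of the original procedure. It assumes the BMC credentials are exported in the USER and PASS variables and the BMC address in SERVER, and it passes -k only because many BMCs ship with self-signed certificates:
$ curl -k -u $USER:$PASS https://$SERVER/redfish/v1/
$ curl -k -u $USER:$PASS https://$SERVER/redfish/v1/Systems/
A JSON response that lists a Systems member, such as /redfish/v1/Systems/1 or /redfish/v1/Systems/System.Embedded.1, confirms that the system resource paths used in the bmc address fields are available.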
List of redfish APIs Power on curl -u USDUSER:USDPASS -X POST -H'Content-Type: application/json' -H'Accept: application/json' -d '{"ResetType": "On"}' https://USDSERVER/redfish/v1/Systems/USDSystemID/Actions/ComputerSystem.Reset Power off curl -u USDUSER:USDPASS -X POST -H'Content-Type: application/json' -H'Accept: application/json' -d '{"ResetType": "ForceOff"}' https://USDSERVER/redfish/v1/Systems/USDSystemID/Actions/ComputerSystem.Reset Temporary boot using pxe curl -u USDUSER:USDPASS -X PATCH -H "Content-Type: application/json" https://USDServer/redfish/v1/Systems/USDSystemID/ -d '{"Boot": {"BootSourceOverrideTarget": "pxe", "BootSourceOverrideEnabled": "Once"}} Set BIOS boot mode using Legacy or UEFI curl -u USDUSER:USDPASS -X PATCH -H "Content-Type: application/json" https://USDServer/redfish/v1/Systems/USDSystemID/ -d '{"Boot": {"BootSourceOverrideMode":"UEFI"}} List of redfish-virtualmedia APIs Set temporary boot device using cd or dvd curl -u USDUSER:USDPASS -X PATCH -H "Content-Type: application/json" https://USDServer/redfish/v1/Systems/USDSystemID/ -d '{"Boot": {"BootSourceOverrideTarget": "cd", "BootSourceOverrideEnabled": "Once"}}' Mount virtual media curl -u USDUSER:USDPASS -X PATCH -H "Content-Type: application/json" -H "If-Match: *" https://USDServer/redfish/v1/Managers/USDManagerID/VirtualMedia/USDVmediaId -d '{"Image": "https://example.com/test.iso", "TransferProtocolType": "HTTPS", "UserName": "", "Password":""}' Note The PowerOn and PowerOff commands for redfish APIs are the same for the redfish-virtualmedia APIs. Important HTTPS and HTTP are the only supported parameter types for TransferProtocolTypes . 3.10.4. BMC addressing for Dell iDRAC The address field for each bmc entry is a URL for connecting to the OpenShift Container Platform cluster nodes, including the type of controller in the URL scheme and its location on the network. platform: baremetal: hosts: - name: <hostname> role: <master | worker> bmc: address: <address> 1 username: <user> password: <password> 1 The address configuration setting specifies the protocol. For Dell hardware, Red Hat supports integrated Dell Remote Access Controller (iDRAC) virtual media, Redfish network boot, and IPMI. BMC address formats for Dell iDRAC Protocol Address Format iDRAC virtual media idrac-virtualmedia://<out-of-band-ip>/redfish/v1/Systems/System.Embedded.1 Redfish network boot redfish://<out-of-band-ip>/redfish/v1/Systems/System.Embedded.1 IPMI ipmi://<out-of-band-ip> Important Use idrac-virtualmedia as the protocol for Redfish virtual media. redfish-virtualmedia will not work on Dell hardware. Dell's idrac-virtualmedia uses the Redfish standard with Dell's OEM extensions. See the following sections for additional details. Redfish virtual media for Dell iDRAC For Redfish virtual media on Dell servers, use idrac-virtualmedia:// in the address setting. Using redfish-virtualmedia:// will not work. Note Use idrac-virtualmedia:// as the protocol for Redfish virtual media. Using redfish-virtualmedia:// will not work on Dell hardware, because the idrac-virtualmedia:// protocol corresponds to the idrac hardware type and the Redfish protocol in Ironic. Dell's idrac-virtualmedia:// protocol uses the Redfish standard with Dell's OEM extensions. Ironic also supports the idrac type with the WSMAN protocol. Therefore, you must specify idrac-virtualmedia:// to avoid unexpected behavior when electing to use Redfish with virtual media on Dell hardware. 
The following example demonstrates using iDRAC virtual media within the install-config.yaml file. platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: idrac-virtualmedia://<out-of-band-ip>/redfish/v1/Systems/System.Embedded.1 username: <user> password: <password> While it is recommended to have a certificate of authority for the out-of-band management addresses, you must include disableCertificateVerification: True in the bmc configuration if using self-signed certificates. Note Ensure the OpenShift Container Platform cluster nodes have AutoAttach enabled through the iDRAC console. The menu path is: Configuration Virtual Media Attach Mode AutoAttach . The following example demonstrates a Redfish configuration using the disableCertificateVerification: True configuration parameter within the install-config.yaml file. platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: idrac-virtualmedia://<out-of-band-ip>/redfish/v1/Systems/System.Embedded.1 username: <user> password: <password> disableCertificateVerification: True Redfish network boot for iDRAC To enable Redfish, use redfish:// or redfish+http:// to disable transport layer security (TLS). The installer requires both the hostname or the IP address and the path to the system ID. The following example demonstrates a Redfish configuration within the install-config.yaml file. platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: redfish://<out-of-band-ip>/redfish/v1/Systems/System.Embedded.1 username: <user> password: <password> While it is recommended to have a certificate of authority for the out-of-band management addresses, you must include disableCertificateVerification: True in the bmc configuration if using self-signed certificates. The following example demonstrates a Redfish configuration using the disableCertificateVerification: True configuration parameter within the install-config.yaml file. platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: redfish://<out-of-band-ip>/redfish/v1/Systems/System.Embedded.1 username: <user> password: <password> disableCertificateVerification: True Note There is a known issue on Dell iDRAC 9 with firmware version 04.40.00.00 and all releases up to including the 5.xx series for installer-provisioned installations on bare metal deployments. The virtual console plugin defaults to eHTML5, an enhanced version of HTML5, which causes problems with the InsertVirtualMedia workflow. Set the plugin to use HTML5 to avoid this issue. The menu path is Configuration Virtual console Plug-in Type HTML5 . Ensure the OpenShift Container Platform cluster nodes have AutoAttach enabled through the iDRAC console. The menu path is: Configuration Virtual Media Attach Mode AutoAttach . 3.10.5. BMC addressing for HPE iLO The address field for each bmc entry is a URL for connecting to the OpenShift Container Platform cluster nodes, including the type of controller in the URL scheme and its location on the network. platform: baremetal: hosts: - name: <hostname> role: <master | worker> bmc: address: <address> 1 username: <user> password: <password> 1 The address configuration setting specifies the protocol. For HPE integrated Lights Out (iLO), Red Hat supports Redfish virtual media, Redfish network boot, and IPMI. Table 3.4. 
BMC address formats for HPE iLO Protocol Address Format Redfish virtual media redfish-virtualmedia://<out-of-band-ip>/redfish/v1/Systems/1 Redfish network boot redfish://<out-of-band-ip>/redfish/v1/Systems/1 IPMI ipmi://<out-of-band-ip> See the following sections for additional details. Redfish virtual media for HPE iLO To enable Redfish virtual media for HPE servers, use redfish-virtualmedia:// in the address setting. The following example demonstrates using Redfish virtual media within the install-config.yaml file. platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: redfish-virtualmedia://<out-of-band-ip>/redfish/v1/Systems/1 username: <user> password: <password> While it is recommended to have a certificate of authority for the out-of-band management addresses, you must include disableCertificateVerification: True in the bmc configuration if using self-signed certificates. The following example demonstrates a Redfish configuration using the disableCertificateVerification: True configuration parameter within the install-config.yaml file. platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: redfish-virtualmedia://<out-of-band-ip>/redfish/v1/Systems/1 username: <user> password: <password> disableCertificateVerification: True Note Redfish virtual media is not supported on 9th generation systems running iLO4, because Ironic does not support iLO4 with virtual media. Redfish network boot for HPE iLO To enable Redfish, use redfish:// or redfish+http:// to disable TLS. The installer requires both the hostname or the IP address and the path to the system ID. The following example demonstrates a Redfish configuration within the install-config.yaml file. platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: redfish://<out-of-band-ip>/redfish/v1/Systems/1 username: <user> password: <password> While it is recommended to have a certificate of authority for the out-of-band management addresses, you must include disableCertificateVerification: True in the bmc configuration if using self-signed certificates. The following example demonstrates a Redfish configuration using the disableCertificateVerification: True configuration parameter within the install-config.yaml file. platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: redfish://<out-of-band-ip>/redfish/v1/Systems/1 username: <user> password: <password> disableCertificateVerification: True 3.10.6. BMC addressing for Fujitsu iRMC The address field for each bmc entry is a URL for connecting to the OpenShift Container Platform cluster nodes, including the type of controller in the URL scheme and its location on the network. platform: baremetal: hosts: - name: <hostname> role: <master | worker> bmc: address: <address> 1 username: <user> password: <password> 1 The address configuration setting specifies the protocol. For Fujitsu hardware, Red Hat supports integrated Remote Management Controller (iRMC) and IPMI. Table 3.5. BMC address formats for Fujitsu iRMC Protocol Address Format iRMC irmc://<out-of-band-ip> IPMI ipmi://<out-of-band-ip> iRMC Fujitsu nodes can use irmc://<out-of-band-ip> and defaults to port 443 . The following example demonstrates an iRMC configuration within the install-config.yaml file. 
platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: irmc://<out-of-band-ip> username: <user> password: <password> Note Currently Fujitsu supports iRMC S5 firmware version 3.05P and above for installer-provisioned installation on bare metal. 3.10.7. Root device hints The rootDeviceHints parameter enables the installer to provision the Red Hat Enterprise Linux CoreOS (RHCOS) image to a particular device. The installer examines the devices in the order it discovers them, and compares the discovered values with the hint values. The installer uses the first discovered device that matches the hint value. The configuration can combine multiple hints, but a device must match all hints for the installer to select it. Table 3.6. Subfields Subfield Description deviceName A string containing a Linux device name such as /dev/vda or /dev/disk/by-path/ . It is recommended to use the /dev/disk/by-path/<device_path> link to the storage location. The hint must match the actual value exactly. hctl A string containing a SCSI bus address like 0:0:0:0 . The hint must match the actual value exactly. model A string containing a vendor-specific device identifier. The hint can be a substring of the actual value. vendor A string containing the name of the vendor or manufacturer of the device. The hint can be a sub-string of the actual value. serialNumber A string containing the device serial number. The hint must match the actual value exactly. minSizeGigabytes An integer representing the minimum size of the device in gigabytes. wwn A string containing the unique storage identifier. The hint must match the actual value exactly. wwnWithExtension A string containing the unique storage identifier with the vendor extension appended. The hint must match the actual value exactly. wwnVendorExtension A string containing the unique vendor storage identifier. The hint must match the actual value exactly. rotational A boolean indicating whether the device should be a rotating disk (true) or not (false). Example usage - name: master-0 role: master bmc: address: ipmi://10.10.0.3:6203 username: admin password: redhat bootMACAddress: de:ad:be:ef:00:40 rootDeviceHints: deviceName: "/dev/sda" 3.10.8. Optional: Setting proxy settings To deploy an OpenShift Container Platform cluster using a proxy, make the following changes to the install-config.yaml file. apiVersion: v1 baseDomain: <domain> proxy: httpProxy: http://USERNAME:[email protected]:PORT httpsProxy: https://USERNAME:[email protected]:PORT noProxy: <WILDCARD_OF_DOMAIN>,<PROVISIONING_NETWORK/CIDR>,<BMC_ADDRESS_RANGE/CIDR> The following is an example of noProxy with values. noProxy: .example.com,172.22.0.0/24,10.10.0.0/24 With a proxy enabled, set the appropriate values of the proxy in the corresponding key/value pair. Key considerations: If the proxy does not have an HTTPS proxy, change the value of httpsProxy from https:// to http:// . If using a provisioning network, include it in the noProxy setting, otherwise the installer will fail. Set all of the proxy settings as environment variables within the provisioner node. For example, HTTP_PROXY , HTTPS_PROXY , and NO_PROXY . Note When provisioning with IPv6, you cannot define a CIDR address block in the noProxy settings. You must define each address separately. 3.10.9. Optional: Deploying with no provisioning network To deploy an OpenShift Container Platform cluster without a provisioning network, make the following changes to the install-config.yaml file. 
platform: baremetal: apiVIPs: - <api_VIP> ingressVIPs: - <ingress_VIP> provisioningNetwork: "Disabled" 1 1 Add the provisioningNetwork configuration setting, if needed, and set it to Disabled . Important The provisioning network is required for PXE booting. If you deploy without a provisioning network, you must use a virtual media BMC addressing option such as redfish-virtualmedia or idrac-virtualmedia . See "Redfish virtual media for HPE iLO" in the "BMC addressing for HPE iLO" section or "Redfish virtual media for Dell iDRAC" in the "BMC addressing for Dell iDRAC" section for additional details. 3.10.10. Optional: Deploying with dual-stack networking For dual-stack networking in OpenShift Container Platform clusters, you can configure IPv4 and IPv6 address endpoints for cluster nodes. To configure IPv4 and IPv6 address endpoints for cluster nodes, edit the machineNetwork , clusterNetwork , and serviceNetwork configuration settings in the install-config.yaml file. Each setting must have two CIDR entries each. For a cluster with the IPv4 family as the primary address family, specify the IPv4 setting first. For a cluster with the IPv6 family as the primary address family, specify the IPv6 setting first. machineNetwork: - cidr: {{ extcidrnet }} - cidr: {{ extcidrnet6 }} clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 - cidr: fd02::/48 hostPrefix: 64 serviceNetwork: - 172.30.0.0/16 - fd03::/112 Important On a bare-metal platform, if you specified an NMState configuration in the networkConfig section of your install-config.yaml file, add interfaces.wait-ip: ipv4+ipv6 to the NMState YAML file to resolve an issue that prevents your cluster from deploying on a dual-stack network. Example NMState YAML configuration file that includes the wait-ip parameter networkConfig: nmstate: interfaces: - name: <interface_name> # ... wait-ip: ipv4+ipv6 # ... To provide an interface to the cluster for applications that use IPv4 and IPv6 addresses, configure IPv4 and IPv6 virtual IP (VIP) address endpoints for the Ingress VIP and API VIP services. To configure IPv4 and IPv6 address endpoints, edit the apiVIPs and ingressVIPs configuration settings in the install-config.yaml file . The apiVIPs and ingressVIPs configuration settings use a list format. The order of the list indicates the primary and secondary VIP address for each service. platform: baremetal: apiVIPs: - <api_ipv4> - <api_ipv6> ingressVIPs: - <wildcard_ipv4> - <wildcard_ipv6> 3.10.11. Optional: Configuring host network interfaces Before installation, you can set the networkConfig configuration setting in the install-config.yaml file to configure host network interfaces using NMState. The most common use case for this functionality is to specify a static IP address on the bare-metal network, but you can also configure other networks such as a storage network. This functionality supports other NMState features such as VLAN, VXLAN, bridges, bonds, routes, MTU, and DNS resolver settings. Prerequisites Configure a PTR DNS record with a valid hostname for each node with a static IP address. Install the NMState CLI ( nmstate ). Procedure Optional: Consider testing the NMState syntax with nmstatectl gc before including it in the install-config.yaml file, because the installer will not check the NMState YAML syntax. Note Errors in the YAML syntax might result in a failure to apply the network configuration. 
Additionally, maintaining the validated YAML syntax is useful when applying changes using Kubernetes NMState after deployment or when expanding the cluster. Create an NMState YAML file: interfaces: - name: <nic1_name> 1 type: ethernet state: up ipv4: address: - ip: <ip_address> 2 prefix-length: 24 enabled: true dns-resolver: config: server: - <dns_ip_address> 3 routes: config: - destination: 0.0.0.0/0 next-hop-address: <next_hop_ip_address> 4 next-hop-interface: <next_hop_nic1_name> 5 1 2 3 4 5 Replace <nic1_name> , <ip_address> , <dns_ip_address> , <next_hop_ip_address> and <next_hop_nic1_name> with appropriate values. Test the configuration file by running the following command: USD nmstatectl gc <nmstate_yaml_file> Replace <nmstate_yaml_file> with the configuration file name. Use the networkConfig configuration setting by adding the NMState configuration to hosts within the install-config.yaml file: hosts: - name: openshift-master-0 role: master bmc: address: redfish+http://<out_of_band_ip>/redfish/v1/Systems/ username: <user> password: <password> disableCertificateVerification: null bootMACAddress: <NIC1_mac_address> bootMode: UEFI rootDeviceHints: deviceName: "/dev/sda" networkConfig: 1 interfaces: - name: <nic1_name> 2 type: ethernet state: up ipv4: address: - ip: <ip_address> 3 prefix-length: 24 enabled: true dns-resolver: config: server: - <dns_ip_address> 4 routes: config: - destination: 0.0.0.0/0 next-hop-address: <next_hop_ip_address> 5 next-hop-interface: <next_hop_nic1_name> 6 1 Add the NMState YAML syntax to configure the host interfaces. 2 3 4 5 6 Replace <nic1_name> , <ip_address> , <dns_ip_address> , <next_hop_ip_address> and <next_hop_nic1_name> with appropriate values. Important After deploying the cluster, you cannot modify the networkConfig configuration setting of the install-config.yaml file to make changes to the host network interface. Use the Kubernetes NMState Operator to make changes to the host network interface after deployment. 3.10.12. Configuring host network interfaces for subnets For edge computing scenarios, it can be beneficial to locate compute nodes closer to the edge. To locate remote nodes in subnets, you might use different network segments or subnets for the remote nodes than you used for the control plane subnet and local compute nodes. You can reduce latency for the edge and allow for enhanced scalability by setting up subnets for edge computing scenarios. Important When using the default load balancer, OpenShiftManagedDefault , and adding remote nodes to your OpenShift Container Platform cluster, all control plane nodes must run in the same subnet. When using more than one subnet, you can also configure the Ingress VIP to run on the control plane nodes by using a manifest. See "Configuring network components to run on the control plane" for details. If you have established different network segments or subnets for remote nodes as described in the section on "Establishing communication between subnets", you must specify the subnets in the machineNetwork configuration setting if the workers are using static IP addresses, bonds or other advanced networking. When setting the node IP address in the networkConfig parameter for each remote node, you must also specify the gateway and the DNS server for the subnet containing the control plane nodes when using static IP addresses. This ensures that the remote nodes can reach the subnet containing the control plane and that they can receive network traffic from the control plane.
Note Deploying a cluster with multiple subnets requires using virtual media, such as redfish-virtualmedia or idrac-virtualmedia , because remote nodes cannot access the local provisioning network. Procedure Add the subnets to the machineNetwork in the install-config.yaml file when using static IP addresses: networking: machineNetwork: - cidr: 10.0.0.0/24 - cidr: 192.168.0.0/24 networkType: OVNKubernetes Add the gateway and DNS configuration to the networkConfig parameter of each edge compute node using NMState syntax when using a static IP address or advanced networking such as bonds: networkConfig: interfaces: - name: <interface_name> 1 type: ethernet state: up ipv4: enabled: true dhcp: false address: - ip: <node_ip> 2 prefix-length: 24 gateway: <gateway_ip> 3 dns-resolver: config: server: - <dns_ip> 4 1 Replace <interface_name> with the interface name. 2 Replace <node_ip> with the IP address of the node. 3 Replace <gateway_ip> with the IP address of the gateway. 4 Replace <dns_ip> with the IP address of the DNS server. 3.10.13. Optional: Configuring address generation modes for SLAAC in dual-stack networks For dual-stack clusters that use Stateless Address AutoConfiguration (SLAAC), you must specify a global value for the ipv6.addr-gen-mode network setting. You can set this value using NMState to configure the ramdisk and the cluster configuration files. If you don't configure a consistent ipv6.addr-gen-mode in these locations, IPv6 address mismatches can occur between CSR resources and BareMetalHost resources in the cluster. Prerequisites Install the NMState CLI ( nmstate ). Procedure Optional: Consider testing the NMState YAML syntax with the nmstatectl gc command before including it in the install-config.yaml file because the installation program will not check the NMState YAML syntax. Create an NMState YAML file: interfaces: - name: eth0 ipv6: addr-gen-mode: <address_mode> 1 1 Replace <address_mode> with the type of address generation mode required for IPv6 addresses in the cluster. Valid values are eui64 , stable-privacy , or random . Test the configuration file by running the following command: USD nmstatectl gc <nmstate_yaml_file> 1 1 Replace <nmstate_yaml_file> with the name of the test configuration file. Add the NMState configuration to the hosts.networkConfig section within the install-config.yaml file: hosts: - name: openshift-master-0 role: master bmc: address: redfish+http://<out_of_band_ip>/redfish/v1/Systems/ username: <user> password: <password> disableCertificateVerification: null bootMACAddress: <NIC1_mac_address> bootMode: UEFI rootDeviceHints: deviceName: "/dev/sda" networkConfig: interfaces: - name: eth0 ipv6: addr-gen-mode: <address_mode> 1 ... 1 Replace <address_mode> with the type of address generation mode required for IPv6 addresses in the cluster. Valid values are eui64 , stable-privacy , or random . 3.10.14. Optional: Configuring host network interfaces for dual port NIC Before installation, you can set the networkConfig configuration setting in the install-config.yaml file to configure host network interfaces by using NMState to support dual port NIC. Important Support for Day 1 operations associated with enabling NIC partitioning for SR-IOV devices is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. 
These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . OpenShift Virtualization only supports the following bond modes: mode=1 active-backup mode=2 balance-xor mode=4 802.3ad Prerequisites Configure a PTR DNS record with a valid hostname for each node with a static IP address. Install the NMState CLI ( nmstate ). Note Errors in the YAML syntax might result in a failure to apply the network configuration. Additionally, maintaining the validated YAML syntax is useful when applying changes by using Kubernetes NMState after deployment or when expanding the cluster. Procedure Add the NMState configuration to the networkConfig field to hosts within the install-config.yaml file: hosts: - name: worker-0 role: worker bmc: address: redfish+http://<out_of_band_ip>/redfish/v1/Systems/ username: <user> password: <password> disableCertificateVerification: false bootMACAddress: <NIC1_mac_address> bootMode: UEFI networkConfig: 1 interfaces: 2 - name: eno1 3 type: ethernet 4 state: up mac-address: 0c:42:a1:55:f3:06 ipv4: enabled: true dhcp: false 5 ethernet: sr-iov: total-vfs: 2 6 ipv6: enabled: false dhcp: false - name: sriov:eno1:0 type: ethernet state: up 7 ipv4: enabled: false 8 ipv6: enabled: false - name: sriov:eno1:1 type: ethernet state: down - name: eno2 type: ethernet state: up mac-address: 0c:42:a1:55:f3:07 ipv4: enabled: true ethernet: sr-iov: total-vfs: 2 ipv6: enabled: false - name: sriov:eno2:0 type: ethernet state: up ipv4: enabled: false ipv6: enabled: false - name: sriov:eno2:1 type: ethernet state: down - name: bond0 type: bond state: up min-tx-rate: 100 9 max-tx-rate: 200 10 link-aggregation: mode: active-backup 11 options: primary: sriov:eno1:0 12 port: - sriov:eno1:0 - sriov:eno2:0 ipv4: address: - ip: 10.19.16.57 13 prefix-length: 23 dhcp: false enabled: true ipv6: enabled: false dns-resolver: config: server: - 10.11.5.160 - 10.2.70.215 routes: config: - destination: 0.0.0.0/0 -hop-address: 10.19.17.254 -hop-interface: bond0 14 table-id: 254 1 The networkConfig field has information about the network configuration of the host, with subfields including interfaces , dns-resolver , and routes . 2 The interfaces field is an array of network interfaces defined for the host. 3 The name of the interface. 4 The type of interface. This example creates a ethernet interface. 5 Set this to `false to disable DHCP for the physical function (PF) if it is not strictly required. 6 Set to the number of SR-IOV virtual functions (VFs) to instantiate. 7 Set this to up . 8 Set this to false to disable IPv4 addressing for the VF attached to the bond. 9 Sets a minimum transmission rate, in Mbps, for the VF. This sample value sets a rate of 100 Mbps. This value must be less than or equal to the maximum transmission rate. Intel NICs do not support the min-tx-rate parameter. For more information, see BZ#1772847 . 10 Sets a maximum transmission rate, in Mbps, for the VF. This sample value sets a rate of 200 Mbps. 11 Sets the desired bond mode. 12 Sets the preferred port of the bonding interface. The bond uses the primary device as the first device of the bonding interfaces. The bond does not abandon the primary device interface unless it fails. This setting is particularly useful when one NIC in the bonding interface is faster and, therefore, able to handle a bigger load. 
This setting is only valid when the bonding interface is in active-backup mode (mode 1) and balance-tlb (mode 5). 13 Sets a static IP address for the bond interface. This is the node IP address. 14 Sets bond0 as the gateway for the default route. Important After deploying the cluster, you cannot change the networkConfig configuration setting of the install-config.yaml file to make changes to the host network interface. Use the Kubernetes NMState Operator to make changes to the host network interface after deployment. Additional resources Configuring network bonding 3.10.15. Configuring multiple cluster nodes You can simultaneously configure OpenShift Container Platform cluster nodes with identical settings. Configuring multiple cluster nodes avoids adding redundant information for each node to the install-config.yaml file. This file contains specific parameters to apply an identical configuration to multiple nodes in the cluster. Compute nodes are configured separately from the controller node. However, configurations for both node types use the highlighted parameters in the install-config.yaml file to enable multi-node configuration. Set the networkConfig parameters to BOND , as shown in the following example: hosts: - name: ostest-master-0 [...] networkConfig: &BOND interfaces: - name: bond0 type: bond state: up ipv4: dhcp: true enabled: true link-aggregation: mode: active-backup port: - enp2s0 - enp3s0 - name: ostest-master-1 [...] networkConfig: *BOND - name: ostest-master-2 [...] networkConfig: *BOND Note Configuration of multiple cluster nodes is only available for initial deployments on installer-provisioned infrastructure. 3.10.16. Optional: Configuring managed Secure Boot You can enable managed Secure Boot when deploying an installer-provisioned cluster using Redfish BMC addressing, such as redfish , redfish-virtualmedia , or idrac-virtualmedia . To enable managed Secure Boot, add the bootMode configuration setting to each node: Example hosts: - name: openshift-master-0 role: master bmc: address: redfish://<out_of_band_ip> 1 username: <username> password: <password> bootMACAddress: <NIC1_mac_address> rootDeviceHints: deviceName: "/dev/sda" bootMode: UEFISecureBoot 2 1 Ensure the bmc.address setting uses redfish , redfish-virtualmedia , or idrac-virtualmedia as the protocol. See "BMC addressing for HPE iLO" or "BMC addressing for Dell iDRAC" for additional details. 2 The bootMode setting is UEFI by default. Change it to UEFISecureBoot to enable managed Secure Boot. Note See "Configuring nodes" in the "Prerequisites" to ensure the nodes can support managed Secure Boot. If the nodes do not support managed Secure Boot, see "Configuring nodes for Secure Boot manually" in the "Configuring nodes" section. Configuring Secure Boot manually requires Redfish virtual media. Note Red Hat does not support Secure Boot with IPMI, because IPMI does not provide Secure Boot management facilities. 3.11. Manifest configuration files 3.11.1. Creating the OpenShift Container Platform manifests Create the OpenShift Container Platform manifests. USD ./openshift-baremetal-install --dir ~/clusterconfigs create manifests INFO Consuming Install Config from target directory WARNING Making control-plane schedulable by setting MastersSchedulable to true for Scheduler cluster settings WARNING Discarding the OpenShift Manifest that was provided in the target directory because its dependencies are dirty and it needs to be regenerated 3.11.2. 
Optional: Configuring NTP for disconnected clusters OpenShift Container Platform installs the chrony Network Time Protocol (NTP) service on the cluster nodes. OpenShift Container Platform nodes must agree on a date and time to run properly. When worker nodes retrieve the date and time from the NTP servers on the control plane nodes, it enables the installation and operation of clusters that are not connected to a routable network and thereby do not have access to a higher stratum NTP server. Procedure Install Butane on your installation host by using the following command: USD sudo dnf -y install butane Create a Butane config, 99-master-chrony-conf-override.bu , including the contents of the chrony.conf file for the control plane nodes. Note See "Creating machine configs with Butane" for information about Butane. Butane config example variant: openshift version: 4.13.0 metadata: name: 99-master-chrony-conf-override labels: machineconfiguration.openshift.io/role: master storage: files: - path: /etc/chrony.conf mode: 0644 overwrite: true contents: inline: | # Use public servers from the pool.ntp.org project. # Please consider joining the pool (https://www.pool.ntp.org/join.html). # The Machine Config Operator manages this file server openshift-master-0.<cluster-name>.<domain> iburst 1 server openshift-master-1.<cluster-name>.<domain> iburst server openshift-master-2.<cluster-name>.<domain> iburst stratumweight 0 driftfile /var/lib/chrony/drift rtcsync makestep 10 3 bindcmdaddress 127.0.0.1 bindcmdaddress ::1 keyfile /etc/chrony.keys commandkey 1 generatecommandkey noclientlog logchange 0.5 logdir /var/log/chrony # Configure the control plane nodes to serve as local NTP servers # for all worker nodes, even if they are not in sync with an # upstream NTP server. # Allow NTP client access from the local network. allow all # Serve time even if not synchronized to a time source. local stratum 3 orphan 1 You must replace <cluster-name> with the name of the cluster and replace <domain> with the fully qualified domain name. Use Butane to generate a MachineConfig object file, 99-master-chrony-conf-override.yaml , containing the configuration to be delivered to the control plane nodes: USD butane 99-master-chrony-conf-override.bu -o 99-master-chrony-conf-override.yaml Create a Butane config, 99-worker-chrony-conf-override.bu , including the contents of the chrony.conf file for the worker nodes that references the NTP servers on the control plane nodes. Butane config example variant: openshift version: 4.13.0 metadata: name: 99-worker-chrony-conf-override labels: machineconfiguration.openshift.io/role: worker storage: files: - path: /etc/chrony.conf mode: 0644 overwrite: true contents: inline: | # The Machine Config Operator manages this file. server openshift-master-0.<cluster-name>.<domain> iburst 1 server openshift-master-1.<cluster-name>.<domain> iburst server openshift-master-2.<cluster-name>.<domain> iburst stratumweight 0 driftfile /var/lib/chrony/drift rtcsync makestep 10 3 bindcmdaddress 127.0.0.1 bindcmdaddress ::1 keyfile /etc/chrony.keys commandkey 1 generatecommandkey noclientlog logchange 0.5 logdir /var/log/chrony 1 You must replace <cluster-name> with the name of the cluster and replace <domain> with the fully qualified domain name. Use Butane to generate a MachineConfig object file, 99-worker-chrony-conf-override.yaml , containing the configuration to be delivered to the worker nodes: USD butane 99-worker-chrony-conf-override.bu -o 99-worker-chrony-conf-override.yaml 3.11.3. 
Configuring network components to run on the control plane You can configure networking components to run exclusively on the control plane nodes. By default, OpenShift Container Platform allows any node in the machine config pool to host the ingressVIP virtual IP address. However, some environments deploy worker nodes in separate subnets from the control plane nodes, which requires configuring the ingressVIP virtual IP address to run on the control plane nodes. Important When deploying remote workers in separate subnets, you must place the ingressVIP virtual IP address exclusively with the control plane nodes. Procedure Change to the directory storing the install-config.yaml file: USD cd ~/clusterconfigs Switch to the manifests subdirectory: USD cd manifests Create a file named cluster-network-avoid-workers-99-config.yaml : USD touch cluster-network-avoid-workers-99-config.yaml Open the cluster-network-avoid-workers-99-config.yaml file in an editor and enter a custom resource (CR) that describes the Operator configuration: apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: name: 50-worker-fix-ipi-rwn labels: machineconfiguration.openshift.io/role: worker spec: config: ignition: version: 3.2.0 storage: files: - path: /etc/kubernetes/manifests/keepalived.yaml mode: 0644 contents: source: data:, This manifest places the ingressVIP virtual IP address on the control plane nodes. Additionally, this manifest deploys the following processes on the control plane nodes only: openshift-ingress-operator keepalived Save the cluster-network-avoid-workers-99-config.yaml file. Create a manifests/cluster-ingress-default-ingresscontroller.yaml file: apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: nodePlacement: nodeSelector: matchLabels: node-role.kubernetes.io/master: "" Consider backing up the manifests directory. The installer deletes the manifests/ directory when creating the cluster. Modify the cluster-scheduler-02-config.yml manifest to make the control plane nodes schedulable by setting the mastersSchedulable field to true . Control plane nodes are not schedulable by default. For example: Note If control plane nodes are not schedulable after completing this procedure, deploying the cluster will fail. 3.11.4. Optional: Deploying routers on worker nodes During installation, the installer deploys router pods on worker nodes. By default, the installer installs two router pods. If a deployed cluster requires additional routers to handle external traffic loads destined for services within the OpenShift Container Platform cluster, you can create a yaml file to set an appropriate number of router replicas. Important Deploying a cluster with only one worker node is not supported. While modifying the router replicas will address issues with the degraded state when deploying with one worker, the cluster loses high availability for the ingress API, which is not suitable for production environments. Note By default, the installer deploys two routers. If the cluster has no worker nodes, the installer deploys the two routers on the control plane nodes by default. 
Procedure Create a router-replicas.yaml file: apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: replicas: <num-of-router-pods> endpointPublishingStrategy: type: HostNetwork nodePlacement: nodeSelector: matchLabels: node-role.kubernetes.io/worker: "" Note Replace <num-of-router-pods> with an appropriate value. If working with just one worker node, set replicas: to 1 . If working with more than 3 worker nodes, you can increase replicas: from the default value 2 as appropriate. Save and copy the router-replicas.yaml file to the clusterconfigs/openshift directory: USD cp ~/router-replicas.yaml clusterconfigs/openshift/99_router-replicas.yaml 3.11.5. Optional: Configuring the BIOS The following procedure configures the BIOS during the installation process. Procedure Create the manifests. Modify the BareMetalHost resource file corresponding to the node: USD vim clusterconfigs/openshift/99_openshift-cluster-api_hosts-*.yaml Add the BIOS configuration to the spec section of the BareMetalHost resource: spec: firmware: simultaneousMultithreadingEnabled: true sriovEnabled: true virtualizationEnabled: true Note Red Hat supports three BIOS configurations. Only servers with BMC type irmc are supported. Other types of servers are currently not supported. Create the cluster. Additional resources Bare metal configuration 3.11.6. Optional: Configuring the RAID The following procedure configures a redundant array of independent disks (RAID) during the installation process. Note OpenShift Container Platform supports hardware RAID for baseboard management controllers (BMCs) using the iRMC protocol only. OpenShift Container Platform 4.13 does not support software RAID. If you want to configure a hardware RAID for the node, verify that the node has a RAID controller. Procedure Create the manifests. Modify the BareMetalHost resource corresponding to the node: USD vim clusterconfigs/openshift/99_openshift-cluster-api_hosts-*.yaml Note The following example uses a hardware RAID configuration because OpenShift Container Platform 4.13 does not support software RAID. If you added a specific RAID configuration to the spec section, this causes the node to delete the original RAID configuration in the preparing phase and perform a specified configuration on the RAID. For example: spec: raid: hardwareRAIDVolumes: - level: "0" 1 name: "sda" numberOfPhysicalDisks: 1 rotational: true sizeGibibytes: 0 1 level is a required field, and the others are optional fields. If you added an empty RAID configuration to the spec section, the empty configuration causes the node to delete the original RAID configuration during the preparing phase, but does not perform a new configuration. For example: spec: raid: hardwareRAIDVolumes: [] If you do not add a raid field in the spec section, the original RAID configuration is not deleted, and no new configuration will be performed. Create the cluster. 3.11.7. Optional: Configuring storage on nodes You can make changes to operating systems on OpenShift Container Platform nodes by creating MachineConfig objects that are managed by the Machine Config Operator (MCO). The MachineConfig specification includes an ignition config for configuring the machines at first boot. This config object can be used to modify files, systemd services, and other operating system features running on OpenShift Container Platform machines. Procedure Use the ignition config to configure storage on nodes. 
The following MachineConfig manifest example demonstrates how to add a partition to a device on a primary node. In this example, apply the manifest before installation to have a partition named recovery with a size of 16 GiB on the primary node. Create a custom-partitions.yaml file and include a MachineConfig object that contains your partition layout: apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: primary name: 10_primary_storage_config spec: config: ignition: version: 3.2.0 storage: disks: - device: </dev/xxyN> partitions: - label: recovery startMiB: 32768 sizeMiB: 16384 filesystems: - device: /dev/disk/by-partlabel/recovery label: recovery format: xfs Save and copy the custom-partitions.yaml file to the clusterconfigs/openshift directory: USD cp ~/<MachineConfig_manifest> ~/clusterconfigs/openshift Additional resources Bare metal configuration Partition naming scheme 3.12. Creating a disconnected registry In some cases, you might want to install an OpenShift Container Platform cluster using a local copy of the installation registry. You might do this to enhance network efficiency, or because the cluster nodes are on a network that does not have access to the internet. A local, or mirrored, copy of the registry requires the following: A certificate for the registry node. This can be a self-signed certificate. A web server that is served by a container on the system. An updated pull secret that contains the certificate and local repository information. Note Creating a disconnected registry on a registry node is optional. If you need to create a disconnected registry on a registry node, you must complete all of the following sub-sections. Prerequisites If you have already prepared a mirror registry for Mirroring images for a disconnected installation , you can skip directly to Modify the install-config.yaml file to use the disconnected registry . 3.12.1. Preparing the registry node to host the mirrored registry The following steps must be completed prior to hosting a mirrored registry on bare metal. Procedure Open the firewall port on the registry node: USD sudo firewall-cmd --add-port=5000/tcp --zone=libvirt --permanent USD sudo firewall-cmd --add-port=5000/tcp --zone=public --permanent USD sudo firewall-cmd --reload Install the required packages for the registry node: USD sudo yum -y install python3 podman httpd httpd-tools jq Create the directory structure where the repository information will be held: USD sudo mkdir -p /opt/registry/{auth,certs,data} 3.12.2. Mirroring the OpenShift Container Platform image repository for a disconnected registry Complete the following steps to mirror the OpenShift Container Platform image repository for a disconnected registry. Prerequisites Your mirror host has access to the internet. You configured a mirror registry to use in your restricted network and can access the certificate and credentials that you configured. You downloaded the pull secret from the Red Hat OpenShift Cluster Manager and modified it to include authentication to your mirror repository. Procedure Review the OpenShift Container Platform downloads page to determine the version of OpenShift Container Platform that you want to install and determine the corresponding tag on the Repository Tags page.
Set the required environment variables: Export the release version: USD OCP_RELEASE=<release_version> For <release_version> , specify the tag that corresponds to the version of OpenShift Container Platform to install, such as 4.5.4 . Export the local registry name and host port: USD LOCAL_REGISTRY='<local_registry_host_name>:<local_registry_host_port>' For <local_registry_host_name> , specify the registry domain name for your mirror repository, and for <local_registry_host_port> , specify the port that it serves content on. Export the local repository name: USD LOCAL_REPOSITORY='<local_repository_name>' For <local_repository_name> , specify the name of the repository to create in your registry, such as ocp4/openshift4 . Export the name of the repository to mirror: USD PRODUCT_REPO='openshift-release-dev' For a production release, you must specify openshift-release-dev . Export the path to your registry pull secret: USD LOCAL_SECRET_JSON='<path_to_pull_secret>' For <path_to_pull_secret> , specify the absolute path to and file name of the pull secret for your mirror registry that you created. Export the release mirror: USD RELEASE_NAME="ocp-release" For a production release, you must specify ocp-release . Export the type of architecture for your cluster: USD ARCHITECTURE=<cluster_architecture> 1 1 Specify the architecture of the cluster, such as x86_64 , aarch64 , s390x , or ppc64le . Export the path to the directory to host the mirrored images: USD REMOVABLE_MEDIA_PATH=<path> 1 1 Specify the full path, including the initial forward slash (/) character. Mirror the version images to the mirror registry: If your mirror host does not have internet access, take the following actions: Connect the removable media to a system that is connected to the internet. Review the images and configuration manifests to mirror: USD oc adm release mirror -a USD{LOCAL_SECRET_JSON} \ --from=quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE} \ --to=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY} \ --to-release-image=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE} --dry-run Record the entire imageContentSources section from the output of the command. The information about your mirrors is unique to your mirrored repository, and you must add the imageContentSources section to the install-config.yaml file during installation. Mirror the images to a directory on the removable media: USD oc adm release mirror -a USD{LOCAL_SECRET_JSON} --to-dir=USD{REMOVABLE_MEDIA_PATH}/mirror quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE} Take the media to the restricted network environment and upload the images to the local container registry. USD oc image mirror -a USD{LOCAL_SECRET_JSON} --from-dir=USD{REMOVABLE_MEDIA_PATH}/mirror "file://openshift/release:USD{OCP_RELEASE}*" USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY} 1 1 For REMOVABLE_MEDIA_PATH , you must use the same path that you specified when you mirrored the images. 
If the local container registry is connected to the mirror host, take the following actions: Directly push the release images to the local registry by using following command: USD oc adm release mirror -a USD{LOCAL_SECRET_JSON} \ --from=quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE} \ --to=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY} \ --to-release-image=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE} This command pulls the release information as a digest, and its output includes the imageContentSources data that you require when you install your cluster. Record the entire imageContentSources section from the output of the command. The information about your mirrors is unique to your mirrored repository, and you must add the imageContentSources section to the install-config.yaml file during installation. Note The image name gets patched to Quay.io during the mirroring process, and the podman images will show Quay.io in the registry on the bootstrap virtual machine. To create the installation program that is based on the content that you mirrored, extract it and pin it to the release: If your mirror host does not have internet access, run the following command: USD oc adm release extract -a USD{LOCAL_SECRET_JSON} --command=openshift-baremetal-install "USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}" If the local container registry is connected to the mirror host, run the following command: USD oc adm release extract -a USD{LOCAL_SECRET_JSON} --command=openshift-baremetal-install "USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE}" Important To ensure that you use the correct images for the version of OpenShift Container Platform that you selected, you must extract the installation program from the mirrored content. You must perform this step on a machine with an active internet connection. If you are in a disconnected environment, use the --image flag as part of must-gather and point to the payload image. For clusters using installer-provisioned infrastructure, run the following command: USD openshift-baremetal-install 3.12.3. Modify the install-config.yaml file to use the disconnected registry On the provisioner node, the install-config.yaml file should use the newly created pull-secret from the pull-secret-update.txt file. The install-config.yaml file must also contain the disconnected registry node's certificate and registry information. Procedure Add the disconnected registry node's certificate to the install-config.yaml file: USD echo "additionalTrustBundle: |" >> install-config.yaml The certificate should follow the "additionalTrustBundle: |" line and be properly indented, usually by two spaces. USD sed -e 's/^/ /' /opt/registry/certs/domain.crt >> install-config.yaml Add the mirror information for the registry to the install-config.yaml file: USD echo "imageContentSources:" >> install-config.yaml USD echo "- mirrors:" >> install-config.yaml USD echo " - registry.example.com:5000/ocp4/openshift4" >> install-config.yaml Replace registry.example.com with the registry's fully qualified domain name. USD echo " source: quay.io/openshift-release-dev/ocp-release" >> install-config.yaml USD echo "- mirrors:" >> install-config.yaml USD echo " - registry.example.com:5000/ocp4/openshift4" >> install-config.yaml Replace registry.example.com with the registry's fully qualified domain name. USD echo " source: quay.io/openshift-release-dev/ocp-v4.0-art-dev" >> install-config.yaml 3.13. 
Assigning a static IP address to the bootstrap VM If you are deploying OpenShift Container Platform without a DHCP server on the baremetal network, you must configure a static IP address for the bootstrap VM using Ignition. Procedure Create the ignition configuration files: USD ./openshift-baremetal-install --dir <cluster_configs> create ignition-configs Replace <cluster_configs> with the path to your cluster configuration files. Create the bootstrap_config.sh file: #!/bin/bash BOOTSTRAP_CONFIG="[connection] type=ethernet interface-name=ens3 [ethernet] [ipv4] method=manual addresses=<ip_address>/<cidr> gateway=<gateway_ip_address> dns=<dns_ip_address>" cat <<_EOF_ > bootstrap_network_config.ign { "path": "/etc/NetworkManager/system-connections/ens3.nmconnection", "mode": 384, "contents": { "source": "data:text/plain;charset=utf-8;base64,USD(echo "USD{BOOTSTRAP_CONFIG}" | base64 -w 0)" } } _EOF_ mv <cluster_configs>/bootstrap.ign <cluster_configs>/bootstrap.ign.orig jq '.storage.files += USDinput' <cluster_configs>/bootstrap.ign.orig --slurpfile input bootstrap_network_config.ign > <cluster_configs>/bootstrap.ign Replace <ip_address> and <cidr> with the IP address and CIDR of the address range. Replace <gateway_ip_address> with the IP address of the gateway on the baremetal network. Replace <dns_ip_address> with the IP address of the DNS server on the baremetal network. Replace <cluster_configs> with the path to your cluster configuration files. Make the bootstrap_config.sh file executable: USD chmod 755 bootstrap_config.sh Run the bootstrap_config.sh script to create the bootstrap_network_config.ign file: USD ./bootstrap_config.sh 3.14. Validation checklist for installation ❏ OpenShift Container Platform installer has been retrieved. ❏ OpenShift Container Platform installer has been extracted. ❏ Required parameters for the install-config.yaml have been configured. ❏ The hosts parameter for the install-config.yaml has been configured. ❏ The bmc parameter for the install-config.yaml has been configured. ❏ Conventions for the values configured in the bmc address field have been applied. ❏ Created the OpenShift Container Platform manifests. ❏ (Optional) Deployed routers on worker nodes. ❏ (Optional) Created a disconnected registry. ❏ (Optional) Validate disconnected registry settings if in use. | [
"useradd kni",
"passwd kni",
"echo \"kni ALL=(root) NOPASSWD:ALL\" | tee -a /etc/sudoers.d/kni",
"chmod 0440 /etc/sudoers.d/kni",
"su - kni -c \"ssh-keygen -t ed25519 -f /home/kni/.ssh/id_rsa -N ''\"",
"su - kni",
"sudo subscription-manager register --username=<user> --password=<pass> --auto-attach",
"sudo subscription-manager repos --enable=rhel-9-for-<architecture>-appstream-rpms --enable=rhel-9-for-<architecture>-baseos-rpms",
"sudo dnf install -y libvirt qemu-kvm mkisofs python3-devel jq ipmitool",
"sudo usermod --append --groups libvirt <user>",
"sudo systemctl start firewalld",
"sudo firewall-cmd --zone=public --add-service=http --permanent",
"sudo firewall-cmd --reload",
"sudo systemctl enable libvirtd --now",
"sudo virsh pool-define-as --name default --type dir --target /var/lib/libvirt/images",
"sudo virsh pool-start default",
"sudo virsh pool-autostart default",
"vim pull-secret.txt",
"chronyc sources",
"MS Name/IP address Stratum Poll Reach LastRx Last sample =============================================================================== ^+ time.cloudflare.com 3 10 377 187 -209us[ -209us] +/- 32ms ^+ t1.time.ir2.yahoo.com 2 10 377 185 -4382us[-4382us] +/- 23ms ^+ time.cloudflare.com 3 10 377 198 -996us[-1220us] +/- 33ms ^* brenbox.westnet.ie 1 10 377 193 -9538us[-9761us] +/- 24ms",
"ping time.cloudflare.com",
"PING time.cloudflare.com (162.159.200.123) 56(84) bytes of data. 64 bytes from time.cloudflare.com (162.159.200.123): icmp_seq=1 ttl=54 time=32.3 ms 64 bytes from time.cloudflare.com (162.159.200.123): icmp_seq=2 ttl=54 time=30.9 ms 64 bytes from time.cloudflare.com (162.159.200.123): icmp_seq=3 ttl=54 time=36.7 ms",
"export PUB_CONN=<baremetal_nic_name>",
"sudo nohup bash -c \" nmcli con down \\\"USDPUB_CONN\\\" nmcli con delete \\\"USDPUB_CONN\\\" # RHEL 8.1 appends the word \\\"System\\\" in front of the connection, delete in case it exists nmcli con down \\\"System USDPUB_CONN\\\" nmcli con delete \\\"System USDPUB_CONN\\\" nmcli connection add ifname baremetal type bridge <con_name> baremetal bridge.stp no 1 nmcli con add type bridge-slave ifname \\\"USDPUB_CONN\\\" master baremetal pkill dhclient;dhclient baremetal \"",
"sudo nohup bash -c \" nmcli con down \\\"USDPUB_CONN\\\" nmcli con delete \\\"USDPUB_CONN\\\" # RHEL 8.1 appends the word \\\"System\\\" in front of the connection, delete in case it exists nmcli con down \\\"System USDPUB_CONN\\\" nmcli con delete \\\"System USDPUB_CONN\\\" nmcli connection add ifname baremetal type bridge con-name baremetal bridge.stp no ipv4.method manual ipv4.addr \"x.x.x.x/yy\" ipv4.gateway \"a.a.a.a\" ipv4.dns \"b.b.b.b\" 1 nmcli con add type bridge-slave ifname \\\"USDPUB_CONN\\\" master baremetal nmcli con up baremetal \"",
"export PROV_CONN=<prov_nic_name>",
"sudo nohup bash -c \" nmcli con down \\\"USDPROV_CONN\\\" nmcli con delete \\\"USDPROV_CONN\\\" nmcli connection add ifname provisioning type bridge con-name provisioning nmcli con add type bridge-slave ifname \\\"USDPROV_CONN\\\" master provisioning nmcli connection modify provisioning ipv6.addresses fd00:1101::1/64 ipv6.method manual nmcli con down provisioning nmcli con up provisioning \"",
"nmcli connection modify provisioning ipv4.addresses 172.22.0.254/24 ipv4.method manual",
"ssh kni@provisioner.<cluster-name>.<domain>",
"sudo nmcli con show",
"NAME UUID TYPE DEVICE baremetal 4d5133a5-8351-4bb9-bfd4-3af264801530 bridge baremetal provisioning 43942805-017f-4d7d-a2c2-7cb3324482ed bridge provisioning virbr0 d9bca40f-eee1-410b-8879-a2d4bb0465e7 bridge virbr0 bridge-slave-eno1 76a8ed50-c7e5-4999-b4f6-6d9014dd0812 ethernet eno1 bridge-slave-eno2 f31c3353-54b7-48de-893a-02d2b34c4736 ethernet eno2",
"sudo su -",
"nmcli dev status",
"nmcli connection modify <interface_name> +ipv4.routes \"192.168.0.0/24 via <gateway>\"",
"nmcli connection modify eth0 +ipv4.routes \"192.168.0.0/24 via 192.168.0.1\"",
"nmcli connection up <interface_name>",
"ip route",
"sudo su -",
"nmcli dev status",
"nmcli connection modify <interface_name> +ipv4.routes \"10.0.0.0/24 via <gateway>\"",
"nmcli connection modify eth0 +ipv4.routes \"10.0.0.0/24 via 10.0.0.1\"",
"nmcli connection up <interface_name>",
"ip route",
"ping <remote_worker_node_ip_address>",
"ping <control_plane_node_ip_address>",
"export VERSION=stable-4.13",
"export RELEASE_ARCH=<architecture>",
"export RELEASE_IMAGE=USD(curl -s https://mirror.openshift.com/pub/openshift-v4/USDRELEASE_ARCH/clients/ocp/USDVERSION/release.txt | grep 'Pull From: quay.io' | awk -F ' ' '{print USD3}')",
"export cmd=openshift-baremetal-install",
"export pullsecret_file=~/pull-secret.txt",
"export extract_dir=USD(pwd)",
"curl -s https://mirror.openshift.com/pub/openshift-v4/clients/ocp/USDVERSION/openshift-client-linux.tar.gz | tar zxvf - oc",
"sudo cp oc /usr/local/bin",
"oc adm release extract --registry-config \"USD{pullsecret_file}\" --command=USDcmd --to \"USD{extract_dir}\" USD{RELEASE_IMAGE}",
"sudo cp openshift-baremetal-install /usr/local/bin",
"sudo dnf install -y podman",
"sudo firewall-cmd --add-port=8080/tcp --zone=public --permanent",
"sudo firewall-cmd --reload",
"mkdir /home/kni/rhcos_image_cache",
"sudo semanage fcontext -a -t httpd_sys_content_t \"/home/kni/rhcos_image_cache(/.*)?\"",
"sudo restorecon -Rv /home/kni/rhcos_image_cache/",
"export RHCOS_QEMU_URI=USD(/usr/local/bin/openshift-baremetal-install coreos print-stream-json | jq -r --arg ARCH \"USD(arch)\" '.architectures[USDARCH].artifacts.qemu.formats[\"qcow2.gz\"].disk.location')",
"export RHCOS_QEMU_NAME=USD{RHCOS_QEMU_URI##*/}",
"export RHCOS_QEMU_UNCOMPRESSED_SHA256=USD(/usr/local/bin/openshift-baremetal-install coreos print-stream-json | jq -r --arg ARCH \"USD(arch)\" '.architectures[USDARCH].artifacts.qemu.formats[\"qcow2.gz\"].disk[\"uncompressed-sha256\"]')",
"curl -L USD{RHCOS_QEMU_URI} -o /home/kni/rhcos_image_cache/USD{RHCOS_QEMU_NAME}",
"ls -Z /home/kni/rhcos_image_cache",
"podman run -d --name rhcos_image_cache \\ 1 -v /home/kni/rhcos_image_cache:/var/www/html -p 8080:8080/tcp registry.access.redhat.com/ubi9/httpd-24",
"export BAREMETAL_IP=USD(ip addr show dev baremetal | awk '/inet /{print USD2}' | cut -d\"/\" -f1)",
"export BOOTSTRAP_OS_IMAGE=\"http://USD{BAREMETAL_IP}:8080/USD{RHCOS_QEMU_NAME}?sha256=USD{RHCOS_QEMU_UNCOMPRESSED_SHA256}\"",
"echo \" bootstrapOSImage=USD{BOOTSTRAP_OS_IMAGE}\"",
"platform: baremetal: bootstrapOSImage: <bootstrap_os_image> 1",
"apiVersion: v1 baseDomain: <domain> metadata: name: <cluster_name> networking: machineNetwork: - cidr: <public_cidr> networkType: OVNKubernetes compute: - name: worker replicas: 2 1 controlPlane: name: master replicas: 3 platform: baremetal: {} platform: baremetal: apiVIPs: - <api_ip> ingressVIPs: - <wildcard_ip> provisioningNetworkCIDR: <CIDR> bootstrapExternalStaticIP: <bootstrap_static_ip_address> 2 bootstrapExternalStaticGateway: <bootstrap_static_gateway> 3 hosts: - name: openshift-master-0 role: master bmc: address: ipmi://<out_of_band_ip> 4 username: <user> password: <password> bootMACAddress: <NIC1_mac_address> rootDeviceHints: deviceName: \"<installation_disk_drive_path>\" 5 - name: <openshift_master_1> role: master bmc: address: ipmi://<out_of_band_ip> username: <user> password: <password> bootMACAddress: <NIC1_mac_address> rootDeviceHints: deviceName: \"<installation_disk_drive_path>\" - name: <openshift_master_2> role: master bmc: address: ipmi://<out_of_band_ip> username: <user> password: <password> bootMACAddress: <NIC1_mac_address> rootDeviceHints: deviceName: \"<installation_disk_drive_path>\" - name: <openshift_worker_0> role: worker bmc: address: ipmi://<out_of_band_ip> username: <user> password: <password> bootMACAddress: <NIC1_mac_address> - name: <openshift_worker_1> role: worker bmc: address: ipmi://<out_of_band_ip> username: <user> password: <password> bootMACAddress: <NIC1_mac_address> rootDeviceHints: deviceName: \"<installation_disk_drive_path>\" pullSecret: '<pull_secret>' sshKey: '<ssh_pub_key>'",
"ironic-inspector inspection failed: No disks satisfied root device hints",
"mkdir ~/clusterconfigs",
"cp install-config.yaml ~/clusterconfigs",
"ipmitool -I lanplus -U <user> -P <password> -H <management-server-ip> power off",
"for i in USD(sudo virsh list | tail -n +3 | grep bootstrap | awk {'print USD2'}); do sudo virsh destroy USDi; sudo virsh undefine USDi; sudo virsh vol-delete USDi --pool USDi; sudo virsh vol-delete USDi.ign --pool USDi; sudo virsh pool-destroy USDi; sudo virsh pool-undefine USDi; done",
"metadata: name:",
"networking: machineNetwork: - cidr:",
"compute: - name: worker",
"compute: replicas: 2",
"controlPlane: name: master",
"controlPlane: replicas: 3",
"platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: ipmi://<out-of-band-ip> username: <user> password: <password>",
"platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: redfish://<out-of-band-ip>/redfish/v1/Systems/1 username: <user> password: <password>",
"platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: redfish://<out-of-band-ip>/redfish/v1/Systems/1 username: <user> password: <password> disableCertificateVerification: True",
"curl -u USDUSER:USDPASS -X POST -H'Content-Type: application/json' -H'Accept: application/json' -d '{\"ResetType\": \"On\"}' https://USDSERVER/redfish/v1/Systems/USDSystemID/Actions/ComputerSystem.Reset",
"curl -u USDUSER:USDPASS -X POST -H'Content-Type: application/json' -H'Accept: application/json' -d '{\"ResetType\": \"ForceOff\"}' https://USDSERVER/redfish/v1/Systems/USDSystemID/Actions/ComputerSystem.Reset",
"curl -u USDUSER:USDPASS -X PATCH -H \"Content-Type: application/json\" https://USDServer/redfish/v1/Systems/USDSystemID/ -d '{\"Boot\": {\"BootSourceOverrideTarget\": \"pxe\", \"BootSourceOverrideEnabled\": \"Once\"}}",
"curl -u USDUSER:USDPASS -X PATCH -H \"Content-Type: application/json\" https://USDServer/redfish/v1/Systems/USDSystemID/ -d '{\"Boot\": {\"BootSourceOverrideMode\":\"UEFI\"}}",
"curl -u USDUSER:USDPASS -X PATCH -H \"Content-Type: application/json\" https://USDServer/redfish/v1/Systems/USDSystemID/ -d '{\"Boot\": {\"BootSourceOverrideTarget\": \"cd\", \"BootSourceOverrideEnabled\": \"Once\"}}'",
"curl -u USDUSER:USDPASS -X PATCH -H \"Content-Type: application/json\" -H \"If-Match: *\" https://USDServer/redfish/v1/Managers/USDManagerID/VirtualMedia/USDVmediaId -d '{\"Image\": \"https://example.com/test.iso\", \"TransferProtocolType\": \"HTTPS\", \"UserName\": \"\", \"Password\":\"\"}'",
"platform: baremetal: hosts: - name: <hostname> role: <master | worker> bmc: address: <address> 1 username: <user> password: <password>",
"platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: idrac-virtualmedia://<out-of-band-ip>/redfish/v1/Systems/System.Embedded.1 username: <user> password: <password>",
"platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: idrac-virtualmedia://<out-of-band-ip>/redfish/v1/Systems/System.Embedded.1 username: <user> password: <password> disableCertificateVerification: True",
"platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: redfish://<out-of-band-ip>/redfish/v1/Systems/System.Embedded.1 username: <user> password: <password>",
"platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: redfish://<out-of-band-ip>/redfish/v1/Systems/System.Embedded.1 username: <user> password: <password> disableCertificateVerification: True",
"platform: baremetal: hosts: - name: <hostname> role: <master | worker> bmc: address: <address> 1 username: <user> password: <password>",
"platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: redfish-virtualmedia://<out-of-band-ip>/redfish/v1/Systems/1 username: <user> password: <password>",
"platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: redfish-virtualmedia://<out-of-band-ip>/redfish/v1/Systems/1 username: <user> password: <password> disableCertificateVerification: True",
"platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: redfish://<out-of-band-ip>/redfish/v1/Systems/1 username: <user> password: <password>",
"platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: redfish://<out-of-band-ip>/redfish/v1/Systems/1 username: <user> password: <password> disableCertificateVerification: True",
"platform: baremetal: hosts: - name: <hostname> role: <master | worker> bmc: address: <address> 1 username: <user> password: <password>",
"platform: baremetal: hosts: - name: openshift-master-0 role: master bmc: address: irmc://<out-of-band-ip> username: <user> password: <password>",
"- name: master-0 role: master bmc: address: ipmi://10.10.0.3:6203 username: admin password: redhat bootMACAddress: de:ad:be:ef:00:40 rootDeviceHints: deviceName: \"/dev/sda\"",
"apiVersion: v1 baseDomain: <domain> proxy: httpProxy: http://USERNAME:[email protected]:PORT httpsProxy: https://USERNAME:[email protected]:PORT noProxy: <WILDCARD_OF_DOMAIN>,<PROVISIONING_NETWORK/CIDR>,<BMC_ADDRESS_RANGE/CIDR>",
"noProxy: .example.com,172.22.0.0/24,10.10.0.0/24",
"platform: baremetal: apiVIPs: - <api_VIP> ingressVIPs: - <ingress_VIP> provisioningNetwork: \"Disabled\" 1",
"machineNetwork: - cidr: {{ extcidrnet }} - cidr: {{ extcidrnet6 }} clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 - cidr: fd02::/48 hostPrefix: 64 serviceNetwork: - 172.30.0.0/16 - fd03::/112",
"networkConfig: nmstate: interfaces: - name: <interface_name> wait-ip: ipv4+ipv6",
"platform: baremetal: apiVIPs: - <api_ipv4> - <api_ipv6> ingressVIPs: - <wildcard_ipv4> - <wildcard_ipv6>",
"interfaces: - name: <nic1_name> 1 type: ethernet state: up ipv4: address: - ip: <ip_address> 2 prefix-length: 24 enabled: true dns-resolver: config: server: - <dns_ip_address> 3 routes: config: - destination: 0.0.0.0/0 next-hop-address: <next_hop_ip_address> 4 next-hop-interface: <next_hop_nic1_name> 5",
"nmstatectl gc <nmstate_yaml_file>",
"hosts: - name: openshift-master-0 role: master bmc: address: redfish+http://<out_of_band_ip>/redfish/v1/Systems/ username: <user> password: <password> disableCertificateVerification: null bootMACAddress: <NIC1_mac_address> bootMode: UEFI rootDeviceHints: deviceName: \"/dev/sda\" networkConfig: 1 interfaces: - name: <nic1_name> 2 type: ethernet state: up ipv4: address: - ip: <ip_address> 3 prefix-length: 24 enabled: true dns-resolver: config: server: - <dns_ip_address> 4 routes: config: - destination: 0.0.0.0/0 next-hop-address: <next_hop_ip_address> 5 next-hop-interface: <next_hop_nic1_name> 6",
"networking: machineNetwork: - cidr: 10.0.0.0/24 - cidr: 192.168.0.0/24 networkType: OVNKubernetes",
"networkConfig: interfaces: - name: <interface_name> 1 type: ethernet state: up ipv4: enabled: true dhcp: false address: - ip: <node_ip> 2 prefix-length: 24 gateway: <gateway_ip> 3 dns-resolver: config: server: - <dns_ip> 4",
"interfaces: - name: eth0 ipv6: addr-gen-mode: <address_mode> 1",
"nmstatectl gc <nmstate_yaml_file> 1",
"hosts: - name: openshift-master-0 role: master bmc: address: redfish+http://<out_of_band_ip>/redfish/v1/Systems/ username: <user> password: <password> disableCertificateVerification: null bootMACAddress: <NIC1_mac_address> bootMode: UEFI rootDeviceHints: deviceName: \"/dev/sda\" networkConfig: interfaces: - name: eth0 ipv6: addr-gen-mode: <address_mode> 1",
"hosts: - name: worker-0 role: worker bmc: address: redfish+http://<out_of_band_ip>/redfish/v1/Systems/ username: <user> password: <password> disableCertificateVerification: false bootMACAddress: <NIC1_mac_address> bootMode: UEFI networkConfig: 1 interfaces: 2 - name: eno1 3 type: ethernet 4 state: up mac-address: 0c:42:a1:55:f3:06 ipv4: enabled: true dhcp: false 5 ethernet: sr-iov: total-vfs: 2 6 ipv6: enabled: false dhcp: false - name: sriov:eno1:0 type: ethernet state: up 7 ipv4: enabled: false 8 ipv6: enabled: false - name: sriov:eno1:1 type: ethernet state: down - name: eno2 type: ethernet state: up mac-address: 0c:42:a1:55:f3:07 ipv4: enabled: true ethernet: sr-iov: total-vfs: 2 ipv6: enabled: false - name: sriov:eno2:0 type: ethernet state: up ipv4: enabled: false ipv6: enabled: false - name: sriov:eno2:1 type: ethernet state: down - name: bond0 type: bond state: up min-tx-rate: 100 9 max-tx-rate: 200 10 link-aggregation: mode: active-backup 11 options: primary: sriov:eno1:0 12 port: - sriov:eno1:0 - sriov:eno2:0 ipv4: address: - ip: 10.19.16.57 13 prefix-length: 23 dhcp: false enabled: true ipv6: enabled: false dns-resolver: config: server: - 10.11.5.160 - 10.2.70.215 routes: config: - destination: 0.0.0.0/0 next-hop-address: 10.19.17.254 next-hop-interface: bond0 14 table-id: 254",
"hosts: - name: ostest-master-0 [...] networkConfig: &BOND interfaces: - name: bond0 type: bond state: up ipv4: dhcp: true enabled: true link-aggregation: mode: active-backup port: - enp2s0 - enp3s0 - name: ostest-master-1 [...] networkConfig: *BOND - name: ostest-master-2 [...] networkConfig: *BOND",
"hosts: - name: openshift-master-0 role: master bmc: address: redfish://<out_of_band_ip> 1 username: <username> password: <password> bootMACAddress: <NIC1_mac_address> rootDeviceHints: deviceName: \"/dev/sda\" bootMode: UEFISecureBoot 2",
"./openshift-baremetal-install --dir ~/clusterconfigs create manifests",
"INFO Consuming Install Config from target directory WARNING Making control-plane schedulable by setting MastersSchedulable to true for Scheduler cluster settings WARNING Discarding the OpenShift Manifest that was provided in the target directory because its dependencies are dirty and it needs to be regenerated",
"sudo dnf -y install butane",
"variant: openshift version: 4.13.0 metadata: name: 99-master-chrony-conf-override labels: machineconfiguration.openshift.io/role: master storage: files: - path: /etc/chrony.conf mode: 0644 overwrite: true contents: inline: | # Use public servers from the pool.ntp.org project. # Please consider joining the pool (https://www.pool.ntp.org/join.html). # The Machine Config Operator manages this file server openshift-master-0.<cluster-name>.<domain> iburst 1 server openshift-master-1.<cluster-name>.<domain> iburst server openshift-master-2.<cluster-name>.<domain> iburst stratumweight 0 driftfile /var/lib/chrony/drift rtcsync makestep 10 3 bindcmdaddress 127.0.0.1 bindcmdaddress ::1 keyfile /etc/chrony.keys commandkey 1 generatecommandkey noclientlog logchange 0.5 logdir /var/log/chrony # Configure the control plane nodes to serve as local NTP servers # for all worker nodes, even if they are not in sync with an # upstream NTP server. # Allow NTP client access from the local network. allow all # Serve time even if not synchronized to a time source. local stratum 3 orphan",
"butane 99-master-chrony-conf-override.bu -o 99-master-chrony-conf-override.yaml",
"variant: openshift version: 4.13.0 metadata: name: 99-worker-chrony-conf-override labels: machineconfiguration.openshift.io/role: worker storage: files: - path: /etc/chrony.conf mode: 0644 overwrite: true contents: inline: | # The Machine Config Operator manages this file. server openshift-master-0.<cluster-name>.<domain> iburst 1 server openshift-master-1.<cluster-name>.<domain> iburst server openshift-master-2.<cluster-name>.<domain> iburst stratumweight 0 driftfile /var/lib/chrony/drift rtcsync makestep 10 3 bindcmdaddress 127.0.0.1 bindcmdaddress ::1 keyfile /etc/chrony.keys commandkey 1 generatecommandkey noclientlog logchange 0.5 logdir /var/log/chrony",
"butane 99-worker-chrony-conf-override.bu -o 99-worker-chrony-conf-override.yaml",
"cd ~/clusterconfigs",
"cd manifests",
"touch cluster-network-avoid-workers-99-config.yaml",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: name: 50-worker-fix-ipi-rwn labels: machineconfiguration.openshift.io/role: worker spec: config: ignition: version: 3.2.0 storage: files: - path: /etc/kubernetes/manifests/keepalived.yaml mode: 0644 contents: source: data:,",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: nodePlacement: nodeSelector: matchLabels: node-role.kubernetes.io/master: \"\"",
"sed -i \"s;mastersSchedulable: false;mastersSchedulable: true;g\" clusterconfigs/manifests/cluster-scheduler-02-config.yml",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: replicas: <num-of-router-pods> endpointPublishingStrategy: type: HostNetwork nodePlacement: nodeSelector: matchLabels: node-role.kubernetes.io/worker: \"\"",
"cp ~/router-replicas.yaml clusterconfigs/openshift/99_router-replicas.yaml",
"vim clusterconfigs/openshift/99_openshift-cluster-api_hosts-*.yaml",
"spec: firmware: simultaneousMultithreadingEnabled: true sriovEnabled: true virtualizationEnabled: true",
"vim clusterconfigs/openshift/99_openshift-cluster-api_hosts-*.yaml",
"spec: raid: hardwareRAIDVolumes: - level: \"0\" 1 name: \"sda\" numberOfPhysicalDisks: 1 rotational: true sizeGibibytes: 0",
"spec: raid: hardwareRAIDVolumes: []",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: primary name: 10_primary_storage_config spec: config: ignition: version: 3.2.0 storage: disks: - device: </dev/xxyN> partitions: - label: recovery startMiB: 32768 sizeMiB: 16384 filesystems: - device: /dev/disk/by-partlabel/recovery label: recovery format: xfs",
"cp ~/<MachineConfig_manifest> ~/clusterconfigs/openshift",
"sudo firewall-cmd --add-port=5000/tcp --zone=libvirt --permanent",
"sudo firewall-cmd --add-port=5000/tcp --zone=public --permanent",
"sudo firewall-cmd --reload",
"sudo yum -y install python3 podman httpd httpd-tools jq",
"sudo mkdir -p /opt/registry/{auth,certs,data}",
"OCP_RELEASE=<release_version>",
"LOCAL_REGISTRY='<local_registry_host_name>:<local_registry_host_port>'",
"LOCAL_REPOSITORY='<local_repository_name>'",
"PRODUCT_REPO='openshift-release-dev'",
"LOCAL_SECRET_JSON='<path_to_pull_secret>'",
"RELEASE_NAME=\"ocp-release\"",
"ARCHITECTURE=<cluster_architecture> 1",
"REMOVABLE_MEDIA_PATH=<path> 1",
"oc adm release mirror -a USD{LOCAL_SECRET_JSON} --from=quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE} --to=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY} --to-release-image=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE} --dry-run",
"oc adm release mirror -a USD{LOCAL_SECRET_JSON} --to-dir=USD{REMOVABLE_MEDIA_PATH}/mirror quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE}",
"oc image mirror -a USD{LOCAL_SECRET_JSON} --from-dir=USD{REMOVABLE_MEDIA_PATH}/mirror \"file://openshift/release:USD{OCP_RELEASE}*\" USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY} 1",
"oc adm release mirror -a USD{LOCAL_SECRET_JSON} --from=quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE} --to=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY} --to-release-image=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE}",
"oc adm release extract -a USD{LOCAL_SECRET_JSON} --command=openshift-baremetal-install \"USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}\"",
"oc adm release extract -a USD{LOCAL_SECRET_JSON} --command=openshift-baremetal-install \"USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE}\"",
"openshift-baremetal-install",
"echo \"additionalTrustBundle: |\" >> install-config.yaml",
"sed -e 's/^/ /' /opt/registry/certs/domain.crt >> install-config.yaml",
"echo \"imageContentSources:\" >> install-config.yaml",
"echo \"- mirrors:\" >> install-config.yaml",
"echo \" - registry.example.com:5000/ocp4/openshift4\" >> install-config.yaml",
"echo \" source: quay.io/openshift-release-dev/ocp-release\" >> install-config.yaml",
"echo \"- mirrors:\" >> install-config.yaml",
"echo \" - registry.example.com:5000/ocp4/openshift4\" >> install-config.yaml",
"echo \" source: quay.io/openshift-release-dev/ocp-v4.0-art-dev\" >> install-config.yaml",
"./openshift-baremetal-install --dir <cluster_configs> create ignition-configs",
"#!/bin/bash BOOTSTRAP_CONFIG=\"[connection] type=ethernet interface-name=ens3 [ethernet] [ipv4] method=manual addresses=<ip_address>/<cidr> gateway=<gateway_ip_address> dns=<dns_ip_address>\" cat <<_EOF_ > bootstrap_network_config.ign { \"path\": \"/etc/NetworkManager/system-connections/ens3.nmconnection\", \"mode\": 384, \"contents\": { \"source\": \"data:text/plain;charset=utf-8;base64,USD(echo \"USD{BOOTSTRAP_CONFIG}\" | base64 -w 0)\" } } _EOF_ mv <cluster_configs>/bootstrap.ign <cluster_configs>/bootstrap.ign.orig jq '.storage.files += USDinput' <cluster_configs>/bootstrap.ign.orig --slurpfile input bootstrap_network_config.ign > <cluster_configs>/bootstrap.ign",
"chmod 755 bootstrap_config.sh",
"./bootstrap_config.sh"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/deploying_installer-provisioned_clusters_on_bare_metal/ipi-install-installation-workflow |
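Because mistakes in the NMState YAML only surface when the installer applies the networkConfig, it can be worth linting a host's network state before pasting it into install-config.yaml. The following is a minimal sketch that reuses the nmstatectl gc check shown earlier in this chapter; it assumes the nmstate package from the prerequisites is installed on the provisioner node, and the file name and bond definition are illustrative only:

# Write only the NMState state (the value you plan to put under networkConfig) to a scratch file.
cat <<'EOF' > /tmp/bond0-check.yaml
interfaces:
- name: bond0
  type: bond
  state: up
  link-aggregation:
    mode: active-backup
    port:
    - eno1
    - eno2
  ipv4:
    dhcp: true
    enabled: true
EOF

# nmstatectl gc parses and validates the state and prints the generated
# NetworkManager configuration without changing the running host network.
nmstatectl gc /tmp/bond0-check.yaml

If the command reports a parsing or schema error, fix the YAML before adding it to the hosts entry in install-config.yaml.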
Providing feedback on Red Hat build of OpenJDK documentation | Providing feedback on Red Hat build of OpenJDK documentation To report an error or to improve our documentation, log in to your Red Hat Jira account and submit an issue. If you do not have a Red Hat Jira account, then you will be prompted to create an account. Procedure Click the following link to create a ticket . Enter a brief description of the issue in the Summary . Provide a detailed description of the issue or enhancement in the Description . Include a URL to where the issue occurs in the documentation. Clicking Submit creates and routes the issue to the appropriate documentation team. | null | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/17/html/release_notes_for_red_hat_build_of_openjdk_17.0.2/proc-providing-feedback-on-redhat-documentation |
Chapter 2. Eclipse Temurin features | Chapter 2. Eclipse Temurin features Eclipse Temurin does not contain structural changes from the upstream distribution of OpenJDK. For the list of changes and security fixes that the latest OpenJDK 8 release of Eclipse Temurin includes, see OpenJDK 8u442 Released . New features and enhancements Eclipse Temurin 8.0.442 includes the following new features and enhancements. Option for jar command to avoid overwriting files when extracting an archive In earlier OpenJDK releases, when the jar tool extracted files from an archive, the jar tool overwrote any existing files with the same name in the target directory. OpenJDK 8.0.442 adds a new -k option that you can use to ensure that the jar tool does not overwrite existing files. For example: Note In OpenJDK 8.0.442, the jar tool retains the old behavior by default. If you do not explicitly specify the -k option, the jar tool automatically overwrites any existing files with the same name. See JDK-8335912 (JDK Bug System) and JDK bug system reference ID: JDK-8337499. Revised on 2025-01-30 11:19:31 UTC | [
"jar xkf myfile.jar"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/8/html/eclipse_temurin_8.0.442_release_notes/openjdk-temurin-features-8.0.442_openjdk |
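To see the difference in practice, the following is a small sketch built around the jar xkf command above; the directory and file names are made up for illustration, and it assumes the Eclipse Temurin 8.0.442 jar binary is on the PATH:

mkdir -p /tmp/jar-k-demo && cd /tmp/jar-k-demo
echo "from the archive" > notes.txt
jar cf myfile.jar notes.txt         # build a small archive containing notes.txt
echo "local edit" > notes.txt       # change the file on disk after archiving
jar xf myfile.jar                   # default behavior: notes.txt is overwritten
cat notes.txt                       # prints "from the archive"
echo "local edit" > notes.txt
jar xkf myfile.jar                  # with -k, the existing notes.txt is kept
cat notes.txt                       # still prints "local edit"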
Preface | Preface Open Java Development Kit (OpenJDK) is a free and open source implementation of the Java Platform, Standard Edition (Java SE). The Red Hat build of OpenJDK is available in four versions: 8u, 11u, 17u, and 21u. Packages for the Red Hat build of OpenJDK are available on Red Hat Enterprise Linux and Microsoft Windows platforms and shipped as a JDK and a JRE in the Red Hat Ecosystem Catalog. | null | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/8/html/release_notes_for_red_hat_build_of_openjdk_8.0.412/pr01 |
4.197. nmap | 4.197. nmap 4.197.1. RHBA-2011:0967 - nmap bug fix update An updated nmap package that fixes one bug is now available for Red Hat Enterprise Linux 6. The nmap package provides a network exploration utility and a security scanner. Bug Fix BZ# 621045 Prior to this update, the output of the "nmap -h" (or "nmap --help") command did not describe all the available nmap options that begin with the "-s" or "-P" prefix. As a result, a user could have been unable to research what options can be used to perform specific tasks with nmap. With this update, the bug has been fixed so that the output of "nmap -h" now describes all the aforementioned nmap options that were previously missing from the output. All users of nmap are advised to upgrade to this updated package, which fixes this bug. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/nmap |
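After applying the update, a quick spot check (a sketch only; the exact options listed vary by nmap version) is to confirm that the help output now includes the scan and ping options mentioned above:

sudo yum update nmap                 # install the updated package on RHEL 6
nmap -h | grep -e '-s' -e '-P'       # the -s* scan types and -P* ping options should now appear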
Chapter 7. alarm | Chapter 7. alarm This chapter describes the commands under the alarm command. 7.1. alarm create Create an alarm Usage: Table 7.1. Optional Arguments Value Summary -h, --help Show this help message and exit --name <NAME> Name of the alarm -t <TYPE>, --type <TYPE> Type of alarm, should be one of: event, composite, gnocchi_resources_threshold, gnocchi_aggregation_by_metrics_threshold, gnocchi_aggregation_by_resources_threshold. --project-id <PROJECT_ID> Project to associate with alarm (configurable by admin users only) --user-id <USER_ID> User to associate with alarm (configurable by admin users only) --description <DESCRIPTION> Free text description of the alarm --state <STATE> State of the alarm, one of: [ ok , alarm , insufficient data ] --severity <SEVERITY> Severity of the alarm, one of: [ low , moderate , critical ] --enabled {True False} True if alarm evaluation is enabled --alarm-action <Webhook URL> Url to invoke when state transitions to alarm. may be used multiple times --ok-action <Webhook URL> Url to invoke when state transitions to ok. may be used multiple times --insufficient-data-action <Webhook URL> Url to invoke when state transitions to insufficient data. May be used multiple times --time-constraint <Time Constraint> Only evaluate the alarm if the time at evaluation is within this time constraint. Start point(s) of the constraint are specified with a cron expression, whereas its duration is given in seconds. Can be specified multiple times for multiple time constraints, format is: name=<CONSTRAINT_NAME>;start=< CRON>;duration=<SECONDS>;[description=<DESCRIPTION>;[t imezone=<IANA Timezone>]] --repeat-actions {True False} True if actions should be repeatedly notified while alarm remains in target state Table 7.2. Output Formatters Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 7.3. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 7.4. Shell Formatter Value Summary --prefix PREFIX Add a prefix to all variable names Table 7.5. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. Table 7.6. common alarm rules Value Summary --query <QUERY> For alarms of type event: key[op]data_type::value; list. data_type is optional, but if supplied must be string, integer, float, or boolean. For alarms of type gnocchi_aggregation_by_resources_threshold: need to specify a complex query json string, like: {"and": [{"=": {"ended_at": null}}, ... ]}. --comparison-operator <OPERATOR> Operator to compare with, one of: [ lt , le , eq , ne , ge , gt ] --evaluation-periods <EVAL_PERIODS> Number of periods to evaluate over --threshold <THRESHOLD> Threshold to evaluate against. Table 7.7. event alarm Value Summary --event-type <EVENT_TYPE> Event type to evaluate against Table 7.8. common gnocchi alarm rules Value Summary --granularity <GRANULARITY> The time range in seconds over which to query. --aggregation-method <AGGR_METHOD> The aggregation_method to compare to the threshold. 
--metric <METRIC>, --metrics <METRIC> The metric id or name depending of the alarm type Table 7.9. gnocchi resource threshold alarm Value Summary --resource-type <RESOURCE_TYPE> The type of resource. --resource-id <RESOURCE_ID> The id of a resource. Table 7.10. composite alarm Value Summary --composite-rule <COMPOSITE_RULE> Composite threshold rule with json format, the form can be a nested dict which combine gnocchi rules by "and", "or". For example, the form is like: {"or":[RULE1, RULE2, {"and": [RULE3, RULE4]}]}. 7.2. alarm delete Delete an alarm Usage: Table 7.11. Positional Arguments Value Summary <ALARM ID or NAME> Id or name of an alarm. Table 7.12. Optional Arguments Value Summary -h, --help Show this help message and exit --name <NAME> Name of the alarm 7.3. alarm-history search Show history for all alarms based on query Usage: Table 7.13. Optional Arguments Value Summary -h, --help Show this help message and exit --query QUERY Rich query supported by aodh, e.g. project_id!=my-id user_id=foo or user_id=bar Table 7.14. Output Formatters Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated Table 7.15. CSV Formatter Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 7.16. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 7.17. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 7.4. alarm-history show Show history for an alarm Usage: Table 7.18. Positional Arguments Value Summary <alarm-id> Id of an alarm Table 7.19. Optional Arguments Value Summary -h, --help Show this help message and exit --limit <LIMIT> Number of resources to return (default is server default) --marker <MARKER> Last item of the listing. return the results after this value,the supported marker is event_id. --sort <SORT_KEY:SORT_DIR> Sort of resource attribute. e.g. timestamp:desc Table 7.20. Output Formatters Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated Table 7.21. CSV Formatter Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 7.22. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 7.23. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 7.5. 
alarm list List alarms Usage: Table 7.24. Optional Arguments Value Summary -h, --help Show this help message and exit --query QUERY Rich query supported by aodh, e.g. project_id!=my-id user_id=foo or user_id=bar --filter <KEY1=VALUE1;KEY2=VALUE2... > Filter parameters to apply on returned alarms. --limit <LIMIT> Number of resources to return (default is server default) --marker <MARKER> Last item of the listing. return the results after this value,the supported marker is alarm_id. --sort <SORT_KEY:SORT_DIR> Sort of resource attribute, e.g. name:asc Table 7.25. Output Formatters Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated Table 7.26. CSV Formatter Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 7.27. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 7.28. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 7.6. alarm show Show an alarm Usage: Table 7.29. Positional Arguments Value Summary <ALARM ID or NAME> Id or name of an alarm. Table 7.30. Optional Arguments Value Summary -h, --help Show this help message and exit --name <NAME> Name of the alarm Table 7.31. Output Formatters Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 7.32. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 7.33. Shell Formatter Value Summary --prefix PREFIX Add a prefix to all variable names Table 7.34. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 7.7. alarm state get Get state of an alarm Usage: Table 7.35. Positional Arguments Value Summary <ALARM ID or NAME> Id or name of an alarm. Table 7.36. Optional Arguments Value Summary -h, --help Show this help message and exit --name <NAME> Name of the alarm Table 7.37. Output Formatters Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 7.38. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 7.39. Shell Formatter Value Summary --prefix PREFIX Add a prefix to all variable names Table 7.40. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. 
--fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 7.8. alarm state set Set state of an alarm Usage: Table 7.41. Positional Arguments Value Summary <ALARM ID or NAME> Id or name of an alarm. Table 7.42. Optional Arguments Value Summary -h, --help Show this help message and exit --name <NAME> Name of the alarm --state <STATE> State of the alarm, one of: [ ok , alarm , insufficient data ] Table 7.43. Output Formatters Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 7.44. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 7.45. Shell Formatter Value Summary --prefix PREFIX Add a prefix to all variable names Table 7.46. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 7.9. alarm update Update an alarm Usage: Table 7.47. Positional Arguments Value Summary <ALARM ID or NAME> Id or name of an alarm. Table 7.48. Optional Arguments Value Summary -h, --help Show this help message and exit --name <NAME> Name of the alarm -t <TYPE>, --type <TYPE> Type of alarm, should be one of: event, composite, gnocchi_resources_threshold, gnocchi_aggregation_by_metrics_threshold, gnocchi_aggregation_by_resources_threshold. --project-id <PROJECT_ID> Project to associate with alarm (configurable by admin users only) --user-id <USER_ID> User to associate with alarm (configurable by admin users only) --description <DESCRIPTION> Free text description of the alarm --state <STATE> State of the alarm, one of: [ ok , alarm , insufficient data ] --severity <SEVERITY> Severity of the alarm, one of: [ low , moderate , critical ] --enabled {True False} True if alarm evaluation is enabled --alarm-action <Webhook URL> Url to invoke when state transitions to alarm. may be used multiple times --ok-action <Webhook URL> Url to invoke when state transitions to ok. may be used multiple times --insufficient-data-action <Webhook URL> Url to invoke when state transitions to insufficient data. May be used multiple times --time-constraint <Time Constraint> Only evaluate the alarm if the time at evaluation is within this time constraint. Start point(s) of the constraint are specified with a cron expression, whereas its duration is given in seconds. Can be specified multiple times for multiple time constraints, format is: name=<CONSTRAINT_NAME>;start=< CRON>;duration=<SECONDS>;[description=<DESCRIPTION>;[t imezone=<IANA Timezone>]] --repeat-actions {True False} True if actions should be repeatedly notified while alarm remains in target state Table 7.49. Output Formatters Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 7.50. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 7.51. 
Shell Formatter Value Summary --prefix PREFIX Add a prefix to all variable names Table 7.52. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. Table 7.53. common alarm rules Value Summary --query <QUERY> For alarms of type event: key[op]data_type::value; list. data_type is optional, but if supplied must be string, integer, float, or boolean. For alarms of type gnocchi_aggregation_by_resources_threshold: need to specify a complex query json string, like: {"and": [{"=": {"ended_at": null}}, ... ]}. --comparison-operator <OPERATOR> Operator to compare with, one of: [ lt , le , eq , ne , ge , gt ] --evaluation-periods <EVAL_PERIODS> Number of periods to evaluate over --threshold <THRESHOLD> Threshold to evaluate against. Table 7.54. event alarm Value Summary --event-type <EVENT_TYPE> Event type to evaluate against Table 7.55. common gnocchi alarm rules Value Summary --granularity <GRANULARITY> The time range in seconds over which to query. --aggregation-method <AGGR_METHOD> The aggregation_method to compare to the threshold. --metric <METRIC>, --metrics <METRIC> The metric id or name depending of the alarm type Table 7.56. gnocchi resource threshold alarm Value Summary --resource-type <RESOURCE_TYPE> The type of resource. --resource-id <RESOURCE_ID> The id of a resource. Table 7.57. composite alarm Value Summary --composite-rule <COMPOSITE_RULE> Composite threshold rule with json format, the form can be a nested dict which combine gnocchi rules by "and", "or". For example, the form is like: {"or":[RULE1, RULE2, {"and": [RULE3, RULE4]}]}. | [
"openstack alarm create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] --name <NAME> -t <TYPE> [--project-id <PROJECT_ID>] [--user-id <USER_ID>] [--description <DESCRIPTION>] [--state <STATE>] [--severity <SEVERITY>] [--enabled {True|False}] [--alarm-action <Webhook URL>] [--ok-action <Webhook URL>] [--insufficient-data-action <Webhook URL>] [--time-constraint <Time Constraint>] [--repeat-actions {True|False}] [--query <QUERY>] [--comparison-operator <OPERATOR>] [--evaluation-periods <EVAL_PERIODS>] [--threshold <THRESHOLD>] [--event-type <EVENT_TYPE>] [--granularity <GRANULARITY>] [--aggregation-method <AGGR_METHOD>] [--metric <METRIC>] [--resource-type <RESOURCE_TYPE>] [--resource-id <RESOURCE_ID>] [--composite-rule <COMPOSITE_RULE>]",
"openstack alarm delete [-h] [--name <NAME>] [<ALARM ID or NAME>]",
"openstack alarm-history search [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--query QUERY]",
"openstack alarm-history show [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--limit <LIMIT>] [--marker <MARKER>] [--sort <SORT_KEY:SORT_DIR>] <alarm-id>",
"openstack alarm list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--query QUERY | --filter <KEY1=VALUE1;KEY2=VALUE2...>] [--limit <LIMIT>] [--marker <MARKER>] [--sort <SORT_KEY:SORT_DIR>]",
"openstack alarm show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--name <NAME>] [<ALARM ID or NAME>]",
"openstack alarm state get [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--name <NAME>] [<ALARM ID or NAME>]",
"openstack alarm state set [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--name <NAME>] --state <STATE> [<ALARM ID or NAME>]",
"openstack alarm update [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--name <NAME>] [-t <TYPE>] [--project-id <PROJECT_ID>] [--user-id <USER_ID>] [--description <DESCRIPTION>] [--state <STATE>] [--severity <SEVERITY>] [--enabled {True|False}] [--alarm-action <Webhook URL>] [--ok-action <Webhook URL>] [--insufficient-data-action <Webhook URL>] [--time-constraint <Time Constraint>] [--repeat-actions {True|False}] [--query <QUERY>] [--comparison-operator <OPERATOR>] [--evaluation-periods <EVAL_PERIODS>] [--threshold <THRESHOLD>] [--event-type <EVENT_TYPE>] [--granularity <GRANULARITY>] [--aggregation-method <AGGR_METHOD>] [--metric <METRIC>] [--resource-type <RESOURCE_TYPE>] [--resource-id <RESOURCE_ID>] [--composite-rule <COMPOSITE_RULE>] [<ALARM ID or NAME>]"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/command_line_interface_reference/alarm |
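As a rough usage sketch, the options above can be combined to create a Gnocchi resource-threshold alarm. The metric name cpu_util, the aggregation settings, the threshold value, and the webhook URL below are illustrative assumptions rather than values taken from this reference; substitute a metric, resource, and action that exist in your own deployment:

openstack alarm create \
  --name cpu-high \
  -t gnocchi_resources_threshold \
  --metric cpu_util \
  --resource-type instance \
  --resource-id <RESOURCE_ID> \
  --aggregation-method mean \
  --granularity 300 \
  --evaluation-periods 3 \
  --comparison-operator gt \
  --threshold 80.0 \
  --alarm-action 'https://example.com/alarm-webhook'

With these settings, the alarm transitions to the alarm state when the mean of cpu_util exceeds 80.0 for the configured number of 300-second evaluation periods, and the URL given by --alarm-action is invoked.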
Chapter 25. FeatureFlagService | Chapter 25. FeatureFlagService 25.1. GetFeatureFlags GET /v1/featureflags 25.1.1. Description 25.1.2. Parameters 25.1.3. Return Type V1GetFeatureFlagsResponse 25.1.4. Content Type application/json 25.1.5. Responses Table 25.1. HTTP Response Codes Code Message Datatype 200 A successful response. V1GetFeatureFlagsResponse 0 An unexpected error response. RuntimeError 25.1.6. Samples 25.1.7. Common object reference 25.1.7.1. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 25.1.7.1.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format typeUrl String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. value byte[] Must be a valid serialized protocol buffer of the above specified type. byte 25.1.7.2. RuntimeError Field Name Required Nullable Type Description Format error String code Integer int32 message String details List of ProtobufAny 25.1.7.3. V1FeatureFlag Field Name Required Nullable Type Description Format name String envVar String enabled Boolean 25.1.7.4. V1GetFeatureFlagsResponse Field Name Required Nullable Type Description Format featureFlags List of V1FeatureFlag | [
"Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }",
"Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }",
"Example 3: Pack and unpack a message in Python.",
"foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)",
"Example 4: Pack and unpack a message in Go",
"foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }",
"package google.profile; message Person { string first_name = 1; string last_name = 2; }",
"{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }",
"{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }"
] | https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.5/html/api_reference/featureflagservice |
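A minimal usage sketch, assuming a typical deployment in which Central is reachable over HTTPS and API requests are authenticated with a bearer token (the address and token variable below are assumptions, not part of this reference):

curl -sk -H "Authorization: Bearer $ROX_API_TOKEN" \
  "https://<central-address>/v1/featureflags"

A successful response (HTTP 200) is a V1GetFeatureFlagsResponse object of the form { "featureFlags": [ { "name": <string>, "envVar": <string>, "enabled": <boolean> }, ... ] }; an unexpected error returns a RuntimeError object carrying the code, message, and details fields described above.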
7.4. Using the Command Line Interface (CLI) | 7.4. Using the Command Line Interface (CLI) A bond is created using the bonding kernel module and a special network interface called a channel bonding interface . 7.4.1. Check if Bonding Kernel Module is Installed In Red Hat Enterprise Linux 7, the bonding module is not loaded by default. You can load the module by issuing the following command as root : This activation will not persist across system restarts. See the Red Hat Enterprise Linux Kernel Administration Guide for an explanation of persistent module loading. Note that given a correct configuration file using the BONDING_OPTS directive, the bonding module will be loaded as required and therefore does not need to be loaded separately. To display information about the module, issue the following command: See the modprobe(8) man page for more command options. 7.4.2. Create a Channel Bonding Interface To create a channel bonding interface, create a file in the /etc/sysconfig/network-scripts/ directory called ifcfg-bond N , replacing N with the number for the interface, such as 0 . The contents of the file can be based on a configuration file for whatever type of interface is getting bonded, such as an Ethernet interface. The essential differences are that the DEVICE directive is bond N , replacing N with the number for the interface, and TYPE=Bond . In addition, set BONDING_MASTER=yes . Example 7.1. Example ifcfg-bond0 Interface Configuration File An example of a channel bonding interface. DEVICE=bond0 NAME=bond0 TYPE=Bond BONDING_MASTER=yes IPADDR=192.168.1.1 PREFIX=24 ONBOOT=yes BOOTPROTO=none BONDING_OPTS=" bonding parameters separated by spaces " NM_CONTROLLED="no" The NAME directive is useful for naming the connection profile in NetworkManager . ONBOOT says whether the profile should be started when booting (or more generally, when auto-connecting a device). Important Parameters for the bonding kernel module must be specified as a space-separated list in the BONDING_OPTS=" bonding parameters " directive in the ifcfg-bond N interface file. Do not specify options for the bonding device in /etc/modprobe.d/ bonding .conf , or in the deprecated /etc/modprobe.conf file. The max_bonds parameter is not interface specific and should not be set when using ifcfg-bond N files with the BONDING_OPTS directive as this directive will cause the network scripts to create the bond interfaces as required. For further instructions and advice on configuring the bonding module and to view the list of bonding parameters, see Section 7.7, "Using Channel Bonding" . Note that if the NM_CONTROLLED="no" setting is not present, NetworkManager might override settings in this configuration file. 7.4.3. Creating Port Interfaces The channel bonding interface is the controller (also refered to as master ) and the interfaces to be bonded are referred to as the ports ( slaves ). After the channel bonding interface is created, the network interfaces to be bound together must be configured by adding the MASTER and SLAVE directives to the configuration files of the ports. The configuration files for each of the port interfaces can be nearly identical. Example 7.2. Example Port Interface Configuration File For example, if two Ethernet interfaces are being channel bonded, enp1s0 and enp2s0 , they can both look like the following example: DEVICE= device_name NAME=bond0-slave TYPE=Ethernet BOOTPROTO=none ONBOOT=yes MASTER=bond0 SLAVE=yes NM_CONTROLLED="no" In this example, replace device_name with the name of the interface. 
Note that if more than one profile or configuration file exists with ONBOOT=yes for an interface, they may race with each other and a plain TYPE=Ethernet profile may be activated instead of a bond port. Note Note that if the NM_CONTROLLED="no" setting is not present, NetworkManager might override settings in this configuration file. 7.4.4. Activating a Channel Bond To activate a bond, open all the ports. As root , issue the following commands: Note that if editing interface files for interfaces which are currently " up " , set them down first as follows: ifdown device_name Then when complete, open all the ports, which will open the bond (provided it was not set " down " ). To make NetworkManager aware of the changes, issue a command for every changed interface as root : Alternatively, to reload all interfaces: The default behavior is for NetworkManager not to be aware of the changes and to continue using the old configuration data. This is set by the monitor-connection-files option in the NetworkManager.conf file. See the NetworkManager.conf(5) manual page for more information. To view the status of the bond interface, issue the following command: 7.4.5. Creating Multiple Bonds In Red Hat Enterprise Linux, a channel bonding interface is created for each bond, including the BONDING_OPTS directive. This configuration method is used so that multiple bonding devices can have different configurations. To create multiple channel bonding interfaces, proceed as follows: Create multiple ifcfg-bond N files with the BONDING_OPTS directive; this directive will cause the network scripts to create the bond interfaces as required. Create, or edit existing, interface configuration files to be bonded and include the SLAVE directive. Assign the interfaces to be bonded, the port interfaces, to the channel bonding interfaces by means of the MASTER directive. Example 7.3. Example multiple ifcfg-bondN interface configuration files The following is an example of a channel bonding interface configuration file: DEVICE=bondN NAME=bondN TYPE=Bond BONDING_MASTER=yes IPADDR=192.168.1.1 PREFIX=24 ONBOOT=yes BOOTPROTO=none BONDING_OPTS=" bonding parameters separated by spaces " In this example, replace N with the number for the bond interface. For example, to create two bonds, create two configuration files, ifcfg-bond0 and ifcfg-bond1 , with appropriate IP addresses. Create the interfaces to be bonded as per Example 7.2, "Example Port Interface Configuration File" and assign them to the bond interfaces as required using the MASTER=bond N directive. For example, continuing on from the example above, if two interfaces per bond are required, then for two bonds create four interface configuration files and assign the first two using MASTER=bond 0 and the next two using MASTER=bond 1 . | [
"~]# modprobe --first-time bonding",
"~]USD modinfo bonding",
"DEVICE=bond0 NAME=bond0 TYPE=Bond BONDING_MASTER=yes IPADDR=192.168.1.1 PREFIX=24 ONBOOT=yes BOOTPROTO=none BONDING_OPTS=\" bonding parameters separated by spaces \" NM_CONTROLLED=\"no\"",
"DEVICE= device_name NAME=bond0-slave TYPE=Ethernet BOOTPROTO=none ONBOOT=yes MASTER=bond0 SLAVE=yes NM_CONTROLLED=\"no\"",
"~]# ifup ifcfg-enp1s0 Connection successfully activated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/7)",
"~]# ifup ifcfg-enp2s0 Connection successfully activated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/8)",
"~]# nmcli con load /etc/sysconfig/network-scripts/ifcfg- device",
"~]# nmcli con reload",
"~]# ip link show 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 2: enp1s0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master bond0 state UP mode DEFAULT qlen 1000 link/ether 52:54:00:e9:ce:d2 brd ff:ff:ff:ff:ff:ff 3: enp2s0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master bond0 state UP mode DEFAULT qlen 1000 link/ether 52:54:00:38:a6:4c brd ff:ff:ff:ff:ff:ff 4: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT link/ether 52:54:00:38:a6:4c brd ff:ff:ff:ff:ff:ff",
"DEVICE=bondN NAME=bondN TYPE=Bond BONDING_MASTER=yes IPADDR=192.168.1.1 PREFIX=24 ONBOOT=yes BOOTPROTO=none BONDING_OPTS=\" bonding parameters separated by spaces \""
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/networking_guide/sec-network_bonding_using_the_command_line_interface |
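As a concrete sketch of the multiple-bond layout in Section 7.4.5, a second bond could be defined with the files below. The port device names enp3s0 and enp4s0, the 192.168.2.1/24 address, and the active-backup bonding options are assumptions chosen for illustration; substitute the devices, addressing, and BONDING_OPTS appropriate to your system.

/etc/sysconfig/network-scripts/ifcfg-bond1:

DEVICE=bond1
NAME=bond1
TYPE=Bond
BONDING_MASTER=yes
IPADDR=192.168.2.1
PREFIX=24
ONBOOT=yes
BOOTPROTO=none
BONDING_OPTS="mode=active-backup miimon=100"
NM_CONTROLLED="no"

/etc/sysconfig/network-scripts/ifcfg-enp3s0 (create a matching file for enp4s0):

DEVICE=enp3s0
NAME=bond1-slave
TYPE=Ethernet
BOOTPROTO=none
ONBOOT=yes
MASTER=bond1
SLAVE=yes
NM_CONTROLLED="no"

After writing the files, open the ports with ifup as shown in Section 7.4.4 and confirm the bond with ip link show.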
Chapter 6. Configuring the Shared File Systems service (manila) | Chapter 6. Configuring the Shared File Systems service (manila) With the Shared File Systems service (manila), you can provision shared file systems that multiple compute instances, bare metal nodes, or containers can consume. Cloud administrators create share types to prepare the share service and enable end users to create and manage shares. Prerequisites An end user requires at least one share type to use the Shared File Systems service. For back ends where driver_handles_share_servers=False , a cloud administrator configures the requisite networking in advance rather than dynamically in the shared file system back end. For a CephFS through NFS back end, a cloud administrator deploys Red Hat OpenStack Platform (RHOSP) director with isolated networks and environment arguments and a custom network_data file to create an isolated StorageNFS network for NFS exports. After deployment, before the overcloud is used, the administrator creates a corresponding Networking service (neutron) StorageNFS shared provider network that maps to the isolated StorageNFS network of the data center. For a Compute instance to connect to this shared provider network, the user must add an additional neutron port. 6.1. Shared File Systems service back ends When cloud administrators use Red Hat OpenStack Platform (RHOSP) director to deploy the Shared File Systems service (manila), they can choose one or more of the following supported back ends: CephFS through NFS. For more information, see Deploying the Shared File Systems service with CephFS through NFS . Native CephFS. For more information, see Deploying the Shared File Systems service with native CephFS . NetApp. For more information, see the NetApp documentation: Shared File Systems Service (Manila) . Dell EMC Unity, Dell VNX, or Dell PowerMax. For more information, see the Dell documentation: Dell EMC Manila Backend Deployment Guide for Red Hat OpenStack Platform 16 . For a complete list of supported back-end appliances and drivers, see Component, Plug-In, and Driver Support in RHEL OpenStack Platform . 6.1.1. Using multiple back ends A back end is a storage system or technology that is paired with the Shared File Systems service (manila) driver to export file systems. The Shared File Systems service requires at least one back end to operate. In many cases, one back end is sufficient. However, you can also use multiple back ends in a single Shared File Systems service installation. Important Currently, Red Hat OpenStack Platform (RHOSP) does not support multiple instances of the same back end to a Shared File Systems service deployment. For example, you cannot add two Ceph Storage clusters as back ends within the same deployment. The back ends that you choose must be heterogeneous. CephFS native and CephFS through NFS are considered one back end with different protocols. The scheduler for the Shared File Systems service determines the destination back end for share creation requests. A single back end in the Shared File Systems service can expose multiple storage pools. When you configure multiple back ends, the scheduler chooses one storage pool to create a resource from all the pools exposed by all configured back ends. This is abstracted from the end user. End users see only the capabilities that are exposed by the cloud administrator. 6.1.2. Deploying multiple back ends By default, a standard Shared File Systems deployment environment file has a single back end. 
Use the following example procedure to add multiple back ends to the Shared File Systems service (manila) and deploy an environment with a CephFS-Ganesha and a NetApp back end. Prerequisites At least two back ends. If a back end requires a custom container, you must use one from the Red Hat Ecosystem Catalog instead of the standard Shared File Systems service container. For example, if you want to use a Dell EMC Unity storage system back end with Ceph, choose the Dell EMC Unity container from the catalog. Procedure Create a storage customization YAML file. You can use this file to provide any values or overrides that suit your environment: Configure the storage customization YAML file to include any overrides, including enabling multiple back ends: Specify the back-end templates by using the openstack overcloud deploy command. The example configuration enables the Shared File Systems service with a NetApp back end and a CephFS through NFS back end. Note Execute source ~/stackrc before issuing the openstack overcloud deploy command. Additional resources For more information about the ManilaEnabledShareProtocols parameter, see Section 6.1.4, "Overriding allowed NAS protocols" . For more information about the deployment command, see Director Installation and Usage . 6.1.3. Confirming deployment of multiple back ends Use the manila service-list command to verify that your back ends deployed successfully. If you use a health check on multiple back ends, a ping test returns a response even if one of the back ends is unresponsive, so this is not a reliable way to verify your deployment. Procedure Log in to the undercloud host as the stack user. Source the overcloudrc credentials file: Confirm the list of Shared File Systems service back ends: The status of each successfully deployed back end shows enabled and the state shows up . 6.1.4. Overriding allowed NAS protocols The Shared File Systems service can export shares in one of many network attached storage (NAS) protocols, such as NFS, CIFS, or CEPHFS. By default, the Shared File Systems service enables all of the NAS protocols supported by the back ends in a deployment. As a Red Hat OpenStack Platform (RHOSP) administrator, you can override the ManilaEnabledShareProtocols parameter and list only the protocols that you want to allow in your cloud. For example, if back ends in your deployment support both NFS and CIFS, you can override the default value and enable only one protocol. Procedure Log in to the undercloud host as the stack user. Source the overcloudrc credentials file: Create a storage customization YAML file. This file can be used to provide any values or overrides that suit your environment: Configure the ManilaEnabledShareProtocols parameter with the values that you want: Include the environment file that contains your new content in the openstack overcloud deploy command by using the -e option. Ensure that you include all other environment files that are relevant to your deployment. Note The deployment does not validate the settings. The NAS protocols that you assign must be supported by the back ends in your Shared File Systems service deployment. 6.1.5. Viewing back-end capabilities The scheduler component of the Shared File Systems service (manila) makes intelligent placement decisions based on several factors such as capacity, provisioning configuration, placement hints, and the capabilities that the back-end storage system driver detects and exposes. 
Procedure Run the following command to view the available capabilities: Related information To influence placement decisions, as an administrator, you can use share types and extra specifications. For more information about share types, see Creating a share type . 6.2. Creating a share type Cloud administrators can create share types to define the type of service that the Shared File Systems service (manila) scheduler uses to make scheduling decisions and that drivers use to control share creation. Share types include a description and extra specifications, for example, driver_handles_share_servers and snapshot_support , to filter back ends. Red Hat OpenStack Platform (RHOSP) director configures the Shared File Systems service with a default share type named default , but director does not create the share type. Cloud users require at least one share type to use the Shared File Systems service, and users can only create shares that match the available share types. By default, share types are public, which means they are available to all cloud projects. However, you can create private share types for use within specific projects. In the following example procedure, you use the driver_handles_share_servers parameter (DHSS), which you can set to true or false : For CephFS-NFS and native CephFS, you set DHSS to false . For other back ends, you can set DHSS to true or false . You set the DHSS value to match the value of the Manila<backend>DriverHandlesShareServers parameter in your storage customization environment file. For example, if you use a NetApp back end, this parameter is ManilaNetappDriverHandlesShareServers . Procedure After you deploy the overcloud, run the following command to create a share type: Replace <driver_handles_share_servers> with true or false . Add specifications to the default share type or create additional share types to use with different back ends. In this example, configure the default share type to select a CephFS back end and an additional share type that uses a NetApp driver_handles_share_servers=true back end: Additional resources For more information about how to make private share types or set additional share-type options, see the Security and Hardening Guide . 6.2.1. Common capabilities of share types Share types define the common capabilities of shares. Review the common capabilities of share types to understand what you can do with your shares. Table 6.1. Capabilities of share types Capability Values Description driver_handles_share_servers true or false Grants permission to use share networks to create shares. snapshot_support true or false Grants permission to create snapshots of shares. create_share_from_snapshot_support true or false Grants permission to create clones of share snapshots. revert_to_snapshot_support true or false Grants permission to revert your shares to the most recent snapshot. mount_snapshot_support true or false Grants permission to export and mount your snapshots. replication_type dr Grants permission to create replicas for disaster recovery. Only one active export is allowed at a time. readable Grants permission to create read-only replicas. Only one writable, active export is allowed at a time. writable Grants permission to create read/write replicas. Any number of active exports are allowed at a time per share. availability_zones a list of one or more availability zones Grants permission to create shares only on the availability zones listed. 6.2.2. 
Discovering share types As a cloud user, you must specify a share type when you create a share. Procedure Discover the available share types: The command output lists the name and ID of the share type. 6.3. Creating a share Create a share to read and write data. To create a share, use a command similar to the following: Replace the following values: sharetype applies settings associated with the specified share type. Optional: if not supplied, the default share type is used. sharename is the name of the share. Optional: shares are not required to have a name, nor is the name guaranteed to be unique. proto is the share protocol you want to use. For CephFS with NFS, proto is nfs . For CephFS native, proto is cephfs . For NetApp and Dell EMC storage back ends, proto is nfs or cifs . GB is the size of the share in gigabytes. For example, in Section 6.2, "Creating a share type" , the cloud administrator created a default share type that selects a CephFS back end and another share type named netapp that selects a NetApp back end. Procedure Use the example share types to create a 10 GB NFS share named share-01 on the CephFS NFS back end. This example uses CephFS with NFS: Optional: Create a 20 GB NFS share named share-02 on the NetApp back end: 6.3.1. Listing shares and exporting information To verify that you successfully created the shares, complete the following steps. Procedure List the shares: View the export locations of the share: View the parameters for the share: Note You use the export location to mount the share in Section 6.9.2, "Mounting the share" . 6.4. Creating a snapshot of data on a shared file system A snapshot is a read-only, point-in-time copy of data on a share. You can use a snapshot to recover data lost through accidental data deletion or file system corruption. Snapshots are more space efficient than backups, and they do not impact the performance of the Shared File Systems service (manila). Prerequisites The snapshot_support parameter must equal true on the parent share. You can run the following command to verify: Procedure As a cloud user, create a snapshot of a share: Replace <share> with the name or ID of the share for which you want to create a snapshot. Optional: Replace <snapshot_name> with the name of the snapshot. Example output Confirm that you created the snapshot: Replace <share> with the name or ID of the share from which you created the snapshot. 6.5. Creating a share from a snapshot You can create a share from a snapshot. If the parent share the snapshot was created from has a share type with driver_handles_share_servers set to true , the new share is created on the same share network as the parent. Note If the share type of the parent share has driver_handles_share_servers set to true , you cannot change the share network for the share you create from the snapshot. Prerequisites The create_share_from_snapshot_support share attribute is set to true . For more information about share types, see Common capabilities of share types . The status attribute of the snapshot is set to available . Procedure Retrieve the ID of the share snapshot that contains the data that you require for your new share: A share created from a snapshot can be larger, but not smaller, than the snapshot. Retrieve the size of the snapshot: Create a share from a snapshot: Replace <share_protocol> with the protocol, such as NFS. Replace <size> with the size of the share to be created, in GiB. Replace <snapshot_id> with the ID of the snapshot. Replace <name> with the name of the new share. 
List the shares to confirm that the share was created successfully: View the properties of the new share: Verification After you create a snapshot, confirm that the snapshot is available. List the snapshots to confirm that they are available: 6.6. Deleting a snapshot When you create a snapshot of a share, you cannot delete the share until you delete all of the snapshots created from that share. Procedure Identify the snapshot you want to delete and retrieve its ID: Delete the snapshot: Note Repeat this step for each snapshot that you want to delete. After you delete the snapshot, run the following command to confirm that you deleted the snapshot: 6.7. Networking for shared file systems Shared file systems are accessed over a network. It is important to plan the networking on your cloud to ensure that end user clients can connect their shares to workloads that run on Red Hat OpenStack Platform (RHOSP) virtual machines, bare-metal servers, and containers. Depending on the level of security and isolation required for end users, as an administrator, you can set the driver_handles_share_servers parameter to true or false. If you set the driver_handles_share_servers parameter to true, this enables the service to export shares to end user-defined share networks with the help of isolated share servers. When the driver_handles_share_servers parameter equals true, users can provision their workloads on self-service share networks. This ensures that their shares are exported by completely isolated NAS file servers on dedicated network segments. The share networks used by end users can be the same as the private project networks that they can create. As an administrator, you must ensure that the physical network to which you map these isolated networks extends to your storage infrastructure. You must also ensure that the network segmentation style by project networks is supported by the storage system used. Storage systems, such as NetApp ONTAP and Dell EMC PowerMax, Unity, and VNX, do not support virtual overlay segmentation styles such as GENEVE or VXLAN. As an alternative, you can terminate the overlay networking at top-of-rack switches and use a more primitive form of networking for your project networks, such as VLAN. Another alternative is to allow VLAN segments on shared provider networks or provide access to a pre-existing segmented network that is already connected to your storage system. If you set the driver_handles_share_servers parameter to false, users cannot create shares on their own share networks. Instead, they must connect their clients to the network configured by the cloud administrator. When the driver_handles_share_servers parameter equals false, director can create a dedicated shared storage network for you. For example, when you deploy the native CephFS back end with standard director templates, director creates a shared provider network called Storage . When you deploy CephFS through the NFS back end, the shared provider network is called StorageNFS . Your end users must connect their clients to the shared storage network to access their shares. Not all shared file system storage drivers support both modes of operation. Regardless of which mode you choose, the service ensures hard data path multi-tenancy isolation guarantees. If you want to offer hard network path multi-tenancy isolation guarantees to tenant workloads as part of a self-service model, you must deploy with back ends that support the driver_handles_share_servers driver mode. 
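As an illustrative sketch of the self-service model (driver_handles_share_servers=true), a cloud user defines a share network from an existing project network and references it when creating a share. The network and subnet IDs, the share network name, and the reuse of the netapp share type from the earlier example are assumptions for illustration:

manila share-network-create \
  --name mysharenet \
  --neutron-net-id <project_network_id> \
  --neutron-subnet-id <project_subnet_id>

manila create nfs 20 \
  --name share-03 \
  --share-type netapp \
  --share-network mysharenet

With driver_handles_share_servers=false, the --share-network option is omitted and clients instead attach to the shared provider network configured by the administrator, as described in the following sections.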
For information about network connectivity to the share, see Section 6.7.1, "Ensuring network connectivity to the share" . 6.7.1. Ensuring network connectivity to the share Clients that need to connect to a file share must have network connectivity to one or more of the export locations for that share. There are many ways to configure networking with the Shared File Systems service, including using network plugins. When the driver_handles_share_servers parameter for a share type equals true, a cloud user can create a share network with the details of a network to which the compute instance attaches and then reference it when creating shares. When the driver_handles_share_servers parameter for a share type equals false, a cloud user must connect their compute instance to the shared storage network. For more information about how to configure and validate network connectivity to a shared network, see Section 6.7.2, "Connecting to a shared network to access shares" . 6.7.2. Connecting to a shared network to access shares When the driver_handles_share_servers parameter equals false, shares are exported to the shared provider network that the administrator made available. As an end user, you must connect your client, such as a Compute instance, to the shared provider network to access your shares. In this example procedure, the shared provider network is called StorageNFS. StorageNFS is configured when director deploys the Shared File Systems service with the CephFS through NFS back end. Follow similar steps to connect to the network made available by your cloud administrator. Note In the example procedure, the IP address family version of the client is not important. The steps in this procedure use IPv4 addressing, but the steps are identical for IPv6. Procedure Create a security group for the StorageNFS port that allows packets to egress the port, but which does not allow ingress packets from unestablished connections: Create a port on the StorageNFS network with security enforced by the no-ingress security group. Note StorageNFSSubnet assigned IP address 172.17.5.160 to nfs-port0 . Add nfs-port0 to a Compute instance. In addition to its private and floating addresses, the Compute instance is assigned a port with the IP address 172.17.5.160 on the StorageNFS network that you can use to mount NFS shares when access is granted to that address for the share in question. Note You might need to adjust the networking configuration on the Compute instance and restart the services for the Compute instance to activate an interface with this address. 6.7.3. Configuring an IPv6 interface between the network and an instance When the shared network to which shares are exported uses IPv6 addressing, you might experience an issue with DHCPv6 on the secondary interface. If this issue occurs, configure an IPv6 interface manually on the instance. Prerequisites Connection to a shared network to access shares Procedure Log in to the instance. Configure the IPv6 interface address: Activate the interface: Ping the IPv6 address in the export location of the share to test interface connectivity: Alternatively, verify that you can reach the NFS server through Telnet: 6.8. Grant share access Before you can mount a share on a client, such as a compute instance, you must grant the client access to the share by using a command similar to the following: Replace the following values: share - the share name or ID of the share created in Section 6.3, "Creating a share" . 
accesstype - the type of access to be requested on the share. Some types include: user : use to authenticate by user or group name. ip : use to authenticate an instance through its IP address. cephx : use to authenticate by native CephFS client user name. Note The type of access depends on the protocol of the share. For NFS shares, only ip access type is allowed. For CIFS, user access type is appropriate. For native CephFS shares, you must use cephx . accesslevel - optional, default is rw rw : read-write access to shares. ro : read-only access to shares. clientidentifier - varies depending on accesstype . Use an IP address for ip accesstype . Use CIFS user or group for user accesstype . Use a user name string for cephx accesstype . 6.8.1. Granting access to a share You must grant end user clients access to the share so that users can read data from and write data to the share. Use this procedure to grant a client compute instance access to an NFS share through the IP address of the instance. The user rules for CIFS shares and cephx rules for CephFS shares follow a similar pattern. With user and cephx access types, you can use the same clientidentifier across multiple clients, if desired. Note In the example procedure, the IP address family version of the client is not important. The steps in this procedure use IPv4 addressing, but the steps are identical for IPv6. Procedure Retrieve the IP address of the client compute instance where you plan to mount the share. Make sure that you pick the IP address that corresponds to the network that can reach the shares. In this example, it is the IP address of the StorageNFS network: Note Access to the share has its own ID ( accessid ). Verify that the access configuration was successful: 6.8.2. Revoking access to a share The owner of a share can revoke access to the share for any reason. Complete the following steps to revoke previously-granted access to a share. Procedure Revoke access to a share: Replace <share> with either the share name or the share ID, for example: Note If you have an existing client that has read-write permissions, you must revoke their access to a share and add a read-only rule if you want the client to have read-only permissions. 6.9. Mount share on compute instances After you grant access to clients, shares can be mounted and used by them. Any type of client can access shares as long as there is network connectivity to the client. The steps used to mount an NFS share on a virtual compute instance are similar to the steps to mount an NFS share on a bare metal compute instance. For more information about how to mount shares on OpenShift containers, see Product Documentation for OpenShift Container Platform . Note Client packages for the different protocols must be installed on the Compute instance that mounts the shares. For example, for the Shared File Systems service with CephFS through NFS, the NFS client packages must support NFS 4.1. 6.9.1. Listing shares export locations Retrieve the export locations of shares so that you can mount a share. Procedure Retrieve the export location of a share: When multiple export locations exist, choose one for which the value of the preferred metadata field equals True. If no preferred locations exist, you can use any export location. 6.9.2. Mounting the share Mount a share on the client to enable access to data. 
For information about creating and granting share access, see the following procedures: Section 6.3, "Creating a share" Section 6.8, "Grant share access" Procedure Log in to the instance and run the following command: Mount the share on an IPv4 network by using the export location: 6.10. Deleting a share The Shared File Systems service (manila) provides no protections to prevent you from deleting your data. The Shared File Systems service does not check whether clients are connected or workloads are running. When you delete a share, you cannot retrieve it. Warning Back up your data before you delete a share. Prerequisites If you created snapshots from a share, you must delete all of the snapshots and replicas before you can delete the share. For more information, see Deleting a snapshot . Procedure Delete a share: Replace <share> with either the share name or the share ID. 6.11. Changing the default quotas in the Shared File Systems service To prevent system capacities from being exhausted without notification, cloud administrators can configure quotas. Quotas are operational limits. The Shared File Systems service (manila) enforces some sensible limits by default. These limits are called default quotas. Cloud administrators can override default quotas so that individual projects have different consumption limits. 6.11.1. Listing resource limits of the Shared File Systems service As a cloud user, you can list the current resource limits. This can help you plan workloads and prepare for any action based on your resource consumption. Procedure List the resource limits and current resource consumption for the project: 6.11.2. Updating quotas for projects, users, and share types As a cloud administrator, you can list the quotas for a project or user by using the manila quota-show command. You can update quotas for all users in a project, or a specific project user, or a share type used by the project users. You can update the following quotas for the target you choose: shares : Number of shares that you can create. snapshots : Number of snapshots that you can create. gigabytes : Total size in GB that you can allocate for all shares. snapshot-gigabytes : Total size in GB that you can allocate for all snapshots of shares. share-networks : Total number of share networks that you can create. share_groups : Total number of share groups that you can create. share_group_snapshots : Total number of share group snapshots that you can create. share-replicas : Total number of share replicas that you can create. replica-gigabytes : Total size in GB that you can allocate across all share replicas. Note You can only specify share-type quotas at the project level. You cannot set share-type quotas for specific project users. Important In the following procedures, enter the values carefully. The Shared File Systems service does not detect or report incorrect values. Procedure You can use the following commands to view quotas. If you include the --user option, you can view the quota for a specific user in the specified project. If you omit the --user option, you can view the quotas that apply to all users for the specified project. Similarly, if you include the optional --share-type , you can view the quota for a specific share type as it relates to the project. The --user and --share-type options are mutually exclusive. Example for a project: Example for a project user: Example for a project for a specific share type: Use the manila quota-update command to update the quotas. 
You can update quotas for all project users, a specific project user, or a share type in a project: Update quotas for all users in a project: Replace <id> with the project ID. Update quotas for a specific user in a project: Replace <id> with the project ID. This value must be the project ID, not the project name. Replace <user_id> with the user ID. The value must be the user ID, not the user name. Update quotas for all users who use a specific share type: Replace <id> with the project ID. This value must be the project ID, not the project name. Replace <share_type> with the name or ID of the share type that you want to apply the quota to. Verification The quota-update command does not produce any output. Use the quota-show command to verify that a quota was successfully updated. 6.11.3. Resetting quotas in the Shared File Systems service You can remove quota overrides to return quotas to their default values. The target entity is restricted by the default quota that applies to all projects with no overrides. Procedure Use the manila quota-delete command to return quotas to default values. You can return quotas to default values for all project users, a specific project user, or a share type in a project: Reset project quotas: Replace <id> with the project ID. This value must be the project ID, not the project name. Reset quotas for a specific user: Replace <id> with the project ID. This value must be the project ID, not the project name. Replace <user_id> with the user ID. The value must be the user ID, not the user name. Reset quotas for a share type used by project users: Replace <id> with the project ID. This value must be the project ID, not the project name. Replace <share_type> with the name or ID of the share type the quota must be reset for. Verification The quota-delete command does not produce any output. Use the quota-show command to verify whether a quota was successfully reset. List the default quotas for all projects. Default quotas apply to projects that have no overrides. 6.11.4. Updating default quotas of the Shared File Systems service As a cloud administrator, you can update default quotas that apply to all projects that do not already have quota overrides. Procedure View the usage statement of the manila quota-class-update command: Note The parameter <class_name> is a positional argument. It identifies the quota class for which the quotas are set. Set the value of this parameter to default . No other quota classes are supported. You can update the values for any of the following optional parameters: --shares <shares> Adds a new value for the shares quota. --snapshots <snapshots> Adds a new value for the snapshots quota. --gigabytes <gigabytes> Adds a new value for the gigabytes quota. --snapshot-gigabytes <snapshot_gigabytes> or --snapshot_gigabytes <snapshot_gigabytes> Adds a new value for the snapshot_gigabytes quota. --share-networks <share_networks> or --share_networks <share_networks> Adds a new value for the share_networks quota. --share-replicas <share_replicas> , --share_replicas <share_replicas> , or --replicas <share_replicas> Adds a new value for the share_replicas quota. --replica-gigabytes <replica_gigabytes> or --replica_gigabytes <replica_gigabytes> Adds a new value for the replica_gigabytes quota. Use the information from the usage statement to update the default quotas. The following example updates the default quotas for shares and gigabytes : 6.12. 
Troubleshooting failures In the event that Shared File Systems (manila) operations, such as create share or create share group, fail asynchronously, as an end user, you can run queries from the command line for more information about the errors. 6.12.1. Fixing create share or create share group failures In this example, the goal of the end user is to create a share to host software libraries on several virtual machines. The example deliberately introduces two share creation failures to illustrate how to use the command line to retrieve user support messages. Procedure To create the share, you can use a share type that specifies some capabilities that you want the share to have. Cloud administrators can create share types. View the available share types: In this example, two share types are available. To use a share type that specifies the driver_handles_share_servers=True capability, you must create a share network on which to export the share. Create a share network from a private project network. Create the share: View the status of the share: In this example, an error occurred during the share creation. To view the user support message, run the message-list command. Use the --resource-id to filter to the specific share you want to find out about. In the User Message column, notice that the Shared File Systems service failed to create the share because of a capabilities mismatch. To view more message information, run the message-show command, followed by the ID of the message from the message-list command: As the cloud user, you can check capabilities through the share type so you can review the share types available. The difference between the two share types is the value of driver_handles_share_servers : Create a share with the other available share type: In this example, the second share creation attempt fails. View the user support message: The service does not expect a share network for the share type that you used. Without consulting the administrator, you can discover that the administrator has not made available a storage back end that supports exporting shares directly on to your private neutron network. Create the share without the share-network parameter: Ensure that the share was created successfully: Delete the shares and support messages: 6.12.2. Debugging share mounting failures If you experience an issue when you mount shares, use these verification steps to identify the root cause. Procedure Verify the access control list of the share to ensure that the rule that corresponds to your client is correct and has been successfully applied. In a successful rule, the state attribute equals active . If the share type parameter is configured to driver_handles_share_servers=False , copy the hostname or IP address from the export location and ping it to confirm connectivity to the NAS server: Note The IP address is written in universal address format (uaddr), which adds two extra octets (8.1) to represent the NFS service port, 2049. If these verification steps fail, there might be a network connectivity issue, or your shared file system back-end storage has failed. Collect the logs and contact Red Hat Support. | [
"vi /home/stack/templates/storage_customizations.yaml",
"parameter_defaults: ManilaEnabledShareProtocols: - NFS ManilaNetappLogin: '<login_name>' ManilaNetappPassword: '<password>' ManilaNetappServerHostname: '<netapp-hostname>' ManilaNetappVserver: '<netapp-vserver>' ManilaNetappDriverHandlesShareServers: 'false'",
"[stack@undercloud ~]USD source ~/stackrc openstack overcloud deploy --timeout 100 --stack overcloud --templates /usr/share/openstack-tripleo-heat-templates -n /usr/share/openstack-tripleo-heat-templates/network_data_ganesha.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-mds.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible.yaml -r /home/stack/templates/roles/roles_data.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/manila-cephfsganesha-config.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/manila-netapp-config.yaml -e /home/stack/templates/storage_customizations.yaml",
"source ~/overcloudrc",
"manila service-list +----+--------+--------+------+---------+-------+----------------------------+ | Id | Binary | Host | Zone | Status | State | Updated_at | +----+--------+--------+------+---------+-------+----------------------------+ | 2 | manila-scheduler | hostgroup | nova | enabled | up | 2021-03-24T16:49:09.000000 | | 5 | manila-share | hostgroup@cephfs | nova | enabled | up | 2021-03-24T16:49:12.000000 | | 8 | manila-share | hostgroup@tripleo_netapp | nova | enabled | up | 2021-03-24T16:49:06.000000 |",
"source ~/overcloudrc",
"vi /home/stack/templates/storage_customizations.yaml",
"parameter_defaults: ManilaEnabledShareProtocols: - NFS - CEPHFS",
"openstack overcloud deploy -e /home/stack/templates/storage_customizations.yaml",
"manila pool-list --detail +------------------------------------+----------------------------+ | Property | Value | +------------------------------------+----------------------------+ | name | hostgroup@cephfs#cephfs | | pool_name | cephfs | | total_capacity_gb | 1978 | | free_capacity_gb | 1812 | | driver_handles_share_servers | False | | snapshot_support | True | | create_share_from_snapshot_support | False | | revert_to_snapshot_support | False | | mount_snapshot_support | False | +------------------------------------+----------------------------+ +------------------------------------+-----------------------------------+ | Property | Value | +------------------------------------+-----------------------------------+ | name | hostgroup@tripleo_netapp#aggr1_n1 | | pool_name | aggr1_n1 | | total_capacity_gb | 6342.1 | | free_capacity_gb | 6161.99 | | driver_handles_share_servers | False | | mount_snapshot_support | False | | replication_type | None | | replication_domain | None | | sg_consistent_snapshot_support | host | | ipv4_support | True | | ipv6_support | False | +------------------------------------+-----------------------------------+ +------------------------------------+-----------------------------------+ | Property | Value | +------------------------------------+-----------------------------------+ | name | hostgroup@tripleo_netapp#aggr1_n2 | | pool_name | aggr1_n2 | | total_capacity_gb | 6342.1 | | free_capacity_gb | 6209.26 | | snapshot_support | True | | create_share_from_snapshot_support | True | | revert_to_snapshot_support | True | | driver_handles_share_servers | False | | mount_snapshot_support | False | | replication_type | None | | replication_domain | None | | sg_consistent_snapshot_support | host | | ipv4_support | True | | ipv6_support | False | +------------------------------------+-----------------------------------+",
"manila type-create default <driver_handles_share_servers>",
"manila type-create default false --extra-specs share_backend_name='cephfs' manila type-create netapp true --extra-specs share_backend_name='tripleo_netapp'",
"manila type-list",
"manila create [--share-type <sharetype>] [--name <sharename>] proto GB",
"(user) [stack@undercloud-0 ~]USD manila create --name share-01 nfs 10",
"(user) [stack@undercloud-0 ~]USD manila create --name share-02 --share-type netapp --share-network mynet nfs 20",
"(user) [stack@undercloud-0 ~]USD manila list +--------------------------------------+----------+-----+-----------+ | ID | Name | ... | Status +--------------------------------------+----------+-----+-----------+ | 8c3bedd8-bc82-4100-a65d-53ec51b5fe81 | share-01 | ... | available +--------------------------------------+----------+-----+-----------+",
"(user) [stack@undercloud-0 ~]USD manila share-export-location-list share-01 +------------------------------------------------------------------ | Path | 172.17.5.13:/volumes/_nogroup/e840b4ae-6a04-49ee-9d6e-67d4999fbc01 +------------------------------------------------------------------",
"manila share-export-location-show <id>",
"manila show | grep snapshot_support",
"manila snapshot-create [--name <snapshot_name>] <share>",
"+-------------------+--------------------------------------+ | Property | Value | +-------------------+--------------------------------------+ | id | dbdcb91b-82ba-407e-a23d-44ffca4da04c | | share_id | ee7059aa-5887-4b87-b03e-d4f0c27ed735 | | share_size | 1 | | created_at | 2022-01-07T14:20:55.541084 | | status | creating | | name | snapshot_name | | description | None | | size | 1 | | share_proto | NFS | | provider_location | None | | user_id | 6d414c62237841dcbe63d3707c1cdd90 | | project_id | 041ff9e24eba469491d770ad8666682d | +-------------------+--------------------------------------+",
"manila snapshot-list --share-id <share>",
"manila snapshot-list",
"manila snapshot-show <snapshot-id>",
"manila create <share_protocol> <size> --snapshot-id <snapshot_id> --name <name>",
"manila list",
"manila show <name>",
"manila snapshot-list",
"manila snapshot-list",
"manila snapshot-delete <snapshot>",
"manila snapshot-list",
"(user) [stack@undercloud-0 ~]USD openstack security group create no-ingress -f yaml created_at: '2018-09-19T08:19:58Z' description: no-ingress id: 66f67c24-cd8b-45e2-b60f-9eaedc79e3c5 name: no-ingress project_id: 1e021e8b322a40968484e1af538b8b63 revision_number: 2 rules: 'created_at=''2018-09-19T08:19:58Z'', direction=''egress'', ethertype=''IPv4'', id=''6c7f643f-3715-4df5-9fef-0850fb6eaaf2'', updated_at=''2018-09-19T08:19:58Z'' created_at=''2018-09-19T08:19:58Z'', direction=''egress'', ethertype=''IPv6'', id=''a8ca1ac2-fbe5-40e9-ab67-3e55b7a8632a'', updated_at=''2018-09-19T08:19:58Z''' updated_at: '2018-09-19T08:19:58Z'",
"(user) [stack@undercloud-0 ~]USD openstack port create nfs-port0 --network StorageNFS --security-group no-ingress -f yaml admin_state_up: UP allowed_address_pairs: '' binding_host_id: null binding_profile: null binding_vif_details: null binding_vif_type: null binding_vnic_type: normal created_at: '2018-09-19T08:03:02Z' data_plane_status: null description: '' device_id: '' device_owner: '' dns_assignment: null dns_name: null extra_dhcp_opts: '' fixed_ips: ip_address='172.17.5.160', subnet_id='7bc188ae-aab3-425b-a894-863e4b664192' id: 7a91cbbc-8821-4d20-a24c-99c07178e5f7 ip_address: null mac_address: fa:16:3e:be:41:6f name: nfs-port0 network_id: cb2cbc5f-ea92-4c2d-beb8-d9b10e10efae option_name: null option_value: null port_security_enabled: true project_id: 1e021e8b322a40968484e1af538b8b63 qos_policy_id: null revision_number: 6 security_group_ids: 66f67c24-cd8b-45e2-b60f-9eaedc79e3c5 status: DOWN subnet_id: null tags: '' trunk_details: null updated_at: '2018-09-19T08:03:03Z'",
"(user) [stack@undercloud-0 ~]USD openstack server add port instance0 nfs-port0 (user) [stack@undercloud-0 ~]USD openstack server list -f yaml - Flavor: m1.micro ID: 0b878c11-e791-434b-ab63-274ecfc957e8 Image: manila-test Name: demo-instance0 Networks: demo-network=172.20.0.4, 10.0.0.53; StorageNFS=172.17.5.160 Status: ACTIVE",
"sudo ip address add fd00:fd00:fd00:7000::c/64 dev eth1",
"sudo ip link set dev eth1 up",
"ping -6 fd00:fd00:fd00:7000::21",
"sudo dnf install -y telnet telnet fd00:fd00:fd00:7000::21 2049",
"manila access-allow <share> <accesstype> --access-level <accesslevel> <clientidentifier>",
"(user) [stack@undercloud-0 ~]USD openstack server list -f yaml - Flavor: m1.micro ID: 0b878c11-e791-434b-ab63-274ecfc957e8 Image: manila-test Name: demo-instance0 Networks: demo-network=172.20.0.4, 10.0.0.53; StorageNFS=172.17.5.160 Status: ACTIVE (user) [stack@undercloud-0 ~]USD manila access-allow share-01 ip 172.17.5.160",
"+-----------------+---------------------------------------+ | Property | Value | +-----------------+---------------------------------------+ | access_key | None | share_id | db3bedd8-bc82-4100-a65d-53ec51b5cba3 | created_at | 2018-09-17T21:57:42.000000 | updated_at | None | access_type | ip | access_to | 172.17.5.160 | access_level | rw | state | queued_to_apply | id | 875c6251-c17e-4c45-8516-fe0928004fff +-----------------+---------------------------------------+",
"(user) [stack@undercloud-0 ~]USD manila access-list share-01 +--------------+-------------+--------------+--------------+--------+ | id | access_type | access_to | access_level | state | +--------------+-------------+--------------+--------------+--------+ | 875c6251-... | ip | 172.17.5.160 | rw | active | +--------------+------------+--------------+--------------+---------+",
"manila access-deny <share> <accessid>",
"(user) [stack@undercloud-0 ~]USD manila access-list share-01 +--------------+-------------+--------------+--------------+--------+ | id | access_type | access_to | access_level | state | +--------------+-------------+--------------+--------------+--------+ | 875c6251-... | ip | 172.17.5.160 | rw | active | +--------------+-------------+--------------+--------------+--------+ (user) [stack@undercloud-0 ~]USD manila access-deny share-01 875c6251-c17e-4c45-8516-fe0928004fff (user) [stack@undercloud-0 ~]USD manila access-list share-01 +--------------+------------+--------------+--------------+--------+ | id | access_type| access_to | access_level | state | +--------------+------------+--------------+--------------+--------+ +--------------+------------+--------------+--------------+--------+",
"(user) [stack@undercloud-0 ~]USD manila share-export-location-list share-01",
"(user) [stack@undercloud-0 ~]USD openstack server ssh demo-instance0 --login root hostname demo-instance0",
"mount -t nfs -v 172.17.5.13:/volumes/_nogroup/e840b4ae-6a04-49ee-9d6e-67d4999fbc01 /mnt",
"manila delete <share>",
"manila absolute-limits +------------------------------+-------+ | Name | Value | +------------------------------+-------+ | maxTotalReplicaGigabytes | 1000 | | maxTotalShareGigabytes | 1000 | | maxTotalShareGroupSnapshots | 50 | | maxTotalShareGroups | 49 | | maxTotalShareNetworks | 10 | | maxTotalShareReplicas | 100 | | maxTotalShareSnapshots | 50 | | maxTotalShares | 50 | | maxTotalSnapshotGigabytes | 1000 | | totalReplicaGigabytesUsed | 22 | | totalShareGigabytesUsed | 25 | | totalShareGroupSnapshotsUsed | 0 | | totalShareGroupsUsed | 9 | | totalShareNetworksUsed | 2 | | totalShareReplicasUsed | 9 | | totalShareSnapshotsUsed | 4 | | totalSharesUsed | 12 | | totalSnapshotGigabytesUsed | 4 | +------------------------------+-------+",
"manila quota-show",
"manila quota-show --project af2838436f3f4cf6896399dd97c4c050 +-----------------------+----------------------------------+ | Property | Value | +-----------------------+----------------------------------+ | gigabytes | 1000 | | id | af2838436f3f4cf6896399dd97c4c050 | | replica_gigabytes | 1000 | | share_group_snapshots | 50 | | share_groups | 49 | | share_networks | 10 | | share_replicas | 100 | | shares | 50 | | snapshot_gigabytes | 1000 | | snapshots | 50 | +-----------------------+----------------------------------+",
"manila quota-show --project af2838436f3f4cf6896399dd97c4c050 --user 81ebb491dd0e4c2aae0775dd564e76d1 +-----------------------+----------------------------------+ | Property | Value | +-----------------------+----------------------------------+ | gigabytes | 500 | | id | af2838436f3f4cf6896399dd97c4c050 | | replica_gigabytes | 1000 | | share_group_snapshots | 50 | | share_groups | 49 | | share_networks | 10 | | share_replicas | 100 | | shares | 25 | | snapshot_gigabytes | 1000 | | snapshots | 50 | +-----------------------+----------------------------------+",
"manila quota-show --project af2838436f3f4cf6896399dd97c4c050 --share-type dhss_false +---------------------+----------------------------------+ | Property | Value | +---------------------+----------------------------------+ | gigabytes | 1000 | | id | af2838436f3f4cf6896399dd97c4c050 | | replica_gigabytes | 1000 | | share_replicas | 100 | | shares | 15 | | snapshot_gigabytes | 1000 | | snapshots | 50 | +---------------------+----------------------------------+",
"manila quota-update <id> [--shares <share_quota> --gigabytes <share_gigabytes_quota> ...]",
"manila quota-update <id> --user <user_id> [--shares <new_share_quota> --gigabytes <new_share_gigabytes_quota> ...]",
"manila quota-update <id> --share-type <share_type> [--shares <new_share_quota>30 --gigabytes <new-share_gigabytes_quota> ...]",
"manila quota-delete --project <id>",
"manila quota-delete --project <id> --user <user_id>",
"manila quota-delete --project <id> --share-type <share_type>",
"manila quota-class-show default",
"manila help quota-class-update usage: manila quota-class-update [--shares <shares>] [--snapshots <snapshots>] [--gigabytes <gigabytes>] [--snapshot-gigabytes <snapshot_gigabytes>] [--share-networks <share_networks>] [--share-replicas <share_replicas>] [--replica-gigabytes <replica_gigabytes>] <class_name>",
"manila quota-class-update default --shares 30 --gigabytes 512 manila quota-class-show default +-----------------------+---------+ | Property | Value | +-----------------------+---------+ | gigabytes | 512 | | id | default | | replica_gigabytes | 1000 | | share_group_snapshots | 50 | | share_groups | 50 | | share_networks | 10 | | share_replicas | 100 | | shares | 30 | | snapshot_gigabytes | 1000 | | snapshots | 50 | +-----------------------+---------+",
"clouduser1@client:~USD manila type-list +--------------------------------------+-------------+------------+------------+--------------------------------------+--------------------------------------------+-------------+ | ID | Name | visibility | is_default | required_extra_specs | optional_extra_specs | Description | +--------------------------------------+-------------+------------+------------+--------------------------------------+--------------------------------------------+-------------+ | 1cf5d45a-61b3-44d1-8ec7-89a21f51a4d4 | dhss_false | public | YES | driver_handles_share_servers : False | create_share_from_snapshot_support : True | None | | | | | | | mount_snapshot_support : False | | | | | | | | revert_to_snapshot_support : False | | | | | | | | snapshot_support : True | | | 277c1089-127f-426e-9b12-711845991ea1 | dhss_true | public | - | driver_handles_share_servers : True | create_share_from_snapshot_support : True | None | | | | | | | mount_snapshot_support : False | | | | | | | | revert_to_snapshot_support : False | | | | | | | | snapshot_support : True | | +--------------------------------------+-------------+------------+------------+--------------------------------------+--------------------------------------------+-------------+",
"clouduser1@client:~USD openstack subnet list +--------------------------------------+---------------------+--------------------------------------+---------------------+ | ID | Name | Network | Subnet | +--------------------------------------+---------------------+--------------------------------------+---------------------+ | 78c6ac57-bba7-4922-ab81-16cde31c2d06 | private-subnet | 74d5cfb3-5dd0-43f7-b1b2-5b544cb16212 | 10.0.0.0/26 | | a344682c-718d-4825-a87a-3622b4d3a771 | ipv6-private-subnet | 74d5cfb3-5dd0-43f7-b1b2-5b544cb16212 | fd36:18fc:a8e9::/64 | +--------------------------------------+---------------------+--------------------------------------+---------------------+ clouduser1@client:~USD manila share-network-create --name mynet --neutron-net-id 74d5cfb3-5dd0-43f7-b1b2-5b544cb16212 --neutron-subnet-id 78c6ac57-bba7-4922-ab81-16cde31c2d06 +-------------------+--------------------------------------+ | Property | Value | +-------------------+--------------------------------------+ | network_type | None | | name | mynet | | segmentation_id | None | | created_at | 2018-10-09T21:32:22.485399 | | neutron_subnet_id | 78c6ac57-bba7-4922-ab81-16cde31c2d06 | | updated_at | None | | mtu | None | | gateway | None | | neutron_net_id | 74d5cfb3-5dd0-43f7-b1b2-5b544cb16212 | | ip_version | None | | cidr | None | | project_id | cadd7139bc3148b8973df097c0911016 | | id | 0b0fc320-d4b5-44a1-a1ae-800c56de550c | | description | None | +-------------------+--------------------------------------+ clouduser1@client:~USD manila share-network-list +--------------------------------------+-------+ | id | name | +--------------------------------------+-------+ | 6c7ef9ef-3591-48b6-b18a-71a03059edd5 | mynet | +--------------------------------------+-------+",
"clouduser1@client:~USD manila create nfs 1 --name software_share --share-network mynet --share-type dhss_true +---------------------------------------+--------------------------------------+ | Property | Value | +---------------------------------------+--------------------------------------+ | status | creating | | share_type_name | dhss_true | | description | None | | availability_zone | None | | share_network_id | 6c7ef9ef-3591-48b6-b18a-71a03059edd5 | | share_server_id | None | | share_group_id | None | | host | | | revert_to_snapshot_support | False | | access_rules_status | active | | snapshot_id | None | | create_share_from_snapshot_support | False | | is_public | False | | task_state | None | | snapshot_support | False | | id | 243f3a51-0624-4bdd-950e-7ed190b53b67 | | size | 1 | | source_share_group_snapshot_member_id | None | | user_id | 61aef4895b0b41619e67ae83fba6defe | | name | software_share | | share_type | 277c1089-127f-426e-9b12-711845991ea1 | | has_replicas | False | | replication_type | None | | created_at | 2018-10-09T21:12:21.000000 | | share_proto | NFS | | mount_snapshot_support | False | | project_id | cadd7139bc3148b8973df097c0911016 | | metadata | {} | +---------------------------------------+--------------------------------------+",
"clouduser1@client:~USD manila list +--------------------------------------+----------------+------+-------------+--------+-----------+-----------------+------+-------------------+ | ID | Name | Size | Share Proto | Status | Is Public | Share Type Name | Host | Availability Zone | +--------------------------------------+----------------+------+-------------+--------+-----------+-----------------+------+-------------------+ | 243f3a51-0624-4bdd-950e-7ed190b53b67 | software_share | 1 | NFS | error | False | dhss_true | | None | +--------------------------------------+----------------+------+-------------+--------+-----------+-----------------+------+-------------------+",
"clouduser1@client:~USD manila message-list +--------------------------------------+---------------+--------------------------------------+-----------+----------------------------------------------------------------------------------------------------------+-----------+----------------------------+ | ID | Resource Type | Resource ID | Action ID | User Message | Detail ID | Created At | +--------------------------------------+---------------+--------------------------------------+-----------+----------------------------------------------------------------------------------------------------------+-----------+----------------------------+ | 7d411c3c-46d9-433f-9e21-c04ca30b209c | SHARE | 243f3a51-0624-4bdd-950e-7ed190b53b67 | 001 | allocate host: No storage could be allocated for this share request, Capabilities filter didn't succeed. | 008 | 2018-10-09T21:12:21.000000 | +--------------------------------------+---------------+--------------------------------------+-----------+----------------------------------------------------------------------------------------------------------+-----------+----------------------------+",
"clouduser1@client:~USD manila message-show 7d411c3c-46d9-433f-9e21-c04ca30b209c +---------------+----------------------------------------------------------------------------------------------------------+ | Property | Value | +---------------+----------------------------------------------------------------------------------------------------------+ | request_id | req-0a875292-6c52-458b-87d4-1f945556feac | | detail_id | 008 | | expires_at | 2018-11-08T21:12:21.000000 | | resource_id | 243f3a51-0624-4bdd-950e-7ed190b53b67 | | user_message | allocate host: No storage could be allocated for this share request, Capabilities filter didn't succeed. | | created_at | 2018-10-09T21:12:21.000000 | | message_level | ERROR | | id | 7d411c3c-46d9-433f-9e21-c04ca30b209c | | resource_type | SHARE | | action_id | 001 | +---------------+----------------------------------------------------------------------------------------------------------+",
"clouduser1@client:~USD manila type-list +--------------------------------------+-------------+------------+------------+--------------------------------------+--------------------------------------------+-------------+ | ID | Name | visibility | is_default | required_extra_specs | optional_extra_specs | Description | +--------------------------------------+-------------+------------+------------+--------------------------------------+--------------------------------------------+-------------+ | 1cf5d45a-61b3-44d1-8ec7-89a21f51a4d4 | dhss_false | public | YES | driver_handles_share_servers : False | create_share_from_snapshot_support : True | None | | | | | | | mount_snapshot_support : False | | | | | | | | revert_to_snapshot_support : False | | | | | | | | snapshot_support : True | | | 277c1089-127f-426e-9b12-711845991ea1 | dhss_true | public | - | driver_handles_share_servers : True | create_share_from_snapshot_support : True | None | | | | | | | mount_snapshot_support : False | | | | | | | | revert_to_snapshot_support : False | | | | | | | | snapshot_support : True | | +--------------------------------------+-------------+------------+------------+--------------------------------------+--------------------------------------------+-------------+",
"clouduser1@client:~USD manila create nfs 1 --name software_share --share-network mynet --share-type dhss_false +---------------------------------------+--------------------------------------+ | Property | Value | +---------------------------------------+--------------------------------------+ | status | creating | | share_type_name | dhss_false | | description | None | | availability_zone | None | | share_network_id | 6c7ef9ef-3591-48b6-b18a-71a03059edd5 | | share_group_id | None | | revert_to_snapshot_support | False | | access_rules_status | active | | snapshot_id | None | | create_share_from_snapshot_support | True | | is_public | False | | task_state | None | | snapshot_support | True | | id | 2d03d480-7cba-4122-ac9d-edc59c8df698 | | size | 1 | | source_share_group_snapshot_member_id | None | | user_id | 5c7bdb6eb0504d54a619acf8375c08ce | | name | software_share | | share_type | 1cf5d45a-61b3-44d1-8ec7-89a21f51a4d4 | | has_replicas | False | | replication_type | None | | created_at | 2018-10-09T21:24:40.000000 | | share_proto | NFS | | mount_snapshot_support | False | | project_id | cadd7139bc3148b8973df097c0911016 | | metadata | {} | +---------------------------------------+--------------------------------------+",
"clouduser1@client:~USD manila list +--------------------------------------+----------------+------+-------------+--------+-----------+-----------------+------+-------------------+ | ID | Name | Size | Share Proto | Status | Is Public | Share Type Name | Host | Availability Zone | +--------------------------------------+----------------+------+-------------+--------+-----------+-----------------+------+-------------------+ | 2d03d480-7cba-4122-ac9d-edc59c8df698 | software_share | 1 | NFS | error | False | dhss_false | | nova | | 243f3a51-0624-4bdd-950e-7ed190b53b67 | software_share | 1 | NFS | error | False | dhss_true | | None | +--------------------------------------+----------------+------+-------------+--------+-----------+-----------------+------+-------------------+ clouduser1@client:~USD manila message-list +--------------------------------------+---------------+--------------------------------------+-----------+----------------------------------------------------------------------------------------------------------+-----------+----------------------------+ | ID | Resource Type | Resource ID | Action ID | User Message | Detail ID | Created At | +--------------------------------------+---------------+--------------------------------------+-----------+----------------------------------------------------------------------------------------------------------+-----------+----------------------------+ | ed7e02a2-0cdb-4ff9-b64f-e4d2ec1ef069 | SHARE | 2d03d480-7cba-4122-ac9d-edc59c8df698 | 002 | create: Driver does not expect share-network to be provided with current configuration. | 003 | 2018-10-09T21:24:40.000000 | | 7d411c3c-46d9-433f-9e21-c04ca30b209c | SHARE | 243f3a51-0624-4bdd-950e-7ed190b53b67 | 001 | allocate host: No storage could be allocated for this share request, Capabilities filter didn't succeed. | 008 | 2018-10-09T21:12:21.000000 | +--------------------------------------+---------------+--------------------------------------+-----------+----------------------------------------------------------------------------------------------------------+-----------+----------------------------+",
"clouduser1@client:~USD manila create nfs 1 --name software_share --share-type dhss_false +---------------------------------------+--------------------------------------+ | Property | Value | +---------------------------------------+--------------------------------------+ | status | creating | | share_type_name | dhss_false | | description | None | | availability_zone | None | | share_network_id | None | | share_group_id | None | | revert_to_snapshot_support | False | | access_rules_status | active | | snapshot_id | None | | create_share_from_snapshot_support | True | | is_public | False | | task_state | None | | snapshot_support | True | | id | 4d3d7fcf-5fb7-4209-90eb-9e064659f46d | | size | 1 | | source_share_group_snapshot_member_id | None | | user_id | 5c7bdb6eb0504d54a619acf8375c08ce | | name | software_share | | share_type | 1cf5d45a-61b3-44d1-8ec7-89a21f51a4d4 | | has_replicas | False | | replication_type | None | | created_at | 2018-10-09T21:25:40.000000 | | share_proto | NFS | | mount_snapshot_support | False | | project_id | cadd7139bc3148b8973df097c0911016 | | metadata | {} | +---------------------------------------+--------------------------------------+",
"clouduser1@client:~USD manila list +--------------------------------------+----------------+------+-------------+-----------+-----------+-----------------+------+-------------------+ | ID | Name | Size | Share Proto | Status | Is Public | Share Type Name | Host | Availability Zone | +--------------------------------------+----------------+------+-------------+-----------+-----------+-----------------+------+-------------------+ | 4d3d7fcf-5fb7-4209-90eb-9e064659f46d | software_share | 1 | NFS | available | False | dhss_false | | nova | | 2d03d480-7cba-4122-ac9d-edc59c8df698 | software_share | 1 | NFS | error | False | dhss_false | | nova | | 243f3a51-0624-4bdd-950e-7ed190b53b67 | software_share | 1 | NFS | error | False | dhss_true | | None | +--------------------------------------+----------------+------+-------------+-----------+-----------+-----------------+------+-------------------+",
"clouduser1@client:~USD manila message-list +--------------------------------------+---------------+--------------------------------------+-----------+----------------------------------------------------------------------------------------------------------+-----------+----------------------------+ | ID | Resource Type | Resource ID | Action ID | User Message | Detail ID | Created At | +--------------------------------------+---------------+--------------------------------------+-----------+----------------------------------------------------------------------------------------------------------+-----------+----------------------------+ | ed7e02a2-0cdb-4ff9-b64f-e4d2ec1ef069 | SHARE | 2d03d480-7cba-4122-ac9d-edc59c8df698 | 002 | create: Driver does not expect share-network to be provided with current configuration. | 003 | 2018-10-09T21:24:40.000000 | | 7d411c3c-46d9-433f-9e21-c04ca30b209c | SHARE | 243f3a51-0624-4bdd-950e-7ed190b53b67 | 001 | allocate host: No storage could be allocated for this share request, Capabilities filter didn't succeed. | 008 | 2018-10-09T21:12:21.000000 | +--------------------------------------+---------------+--------------------------------------+-----------+----------------------------------------------------------------------------------------------------------+-----------+----------------------------+ clouduser1@client:~USD manila delete 2d03d480-7cba-4122-ac9d-edc59c8df698 243f3a51-0624-4bdd-950e-7ed190b53b67 clouduser1@client:~USD manila message-delete ed7e02a2-0cdb-4ff9-b64f-e4d2ec1ef069 7d411c3c-46d9-433f-9e21-c04ca30b209c clouduser1@client:~USD manila message-list +----+---------------+-------------+-----------+--------------+-----------+------------+ | ID | Resource Type | Resource ID | Action ID | User Message | Detail ID | Created At | +----+---------------+-------------+-----------+--------------+-----------+------------+ +----+---------------+-------------+-----------+--------------+-----------+------------+",
"manila access-list share-01",
"ping -c 1 172.17.5.13 PING 172.17.5.13 (172.17.5.13) 56(84) bytes of data. 64 bytes from 172.17.5.13: icmp_seq=1 ttl=64 time=0.048 ms--- 172.17.5.13 ping statistics --- 1 packets transmitted, 1 received, 0% packet loss, time 0ms rtt min/avg/max/mdev = 7.851/7.851/7.851/0.000 ms If using the NFS protocol, you may verify that the NFS server is ready to respond to NFS rpcs on the proper port: rpcinfo -T tcp -a 172.17.5.13.8.1 100003 4 program 100003 version 4 ready and waiting"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/storage_guide/assembly_shared-file-systems-service_osp-storage-guide |