Chapter 328. SSH Component
Chapter 328. SSH Component Available as of Camel version 2.10 The SSH component enables access to SSH servers such that you can send an SSH command, and process the response. Maven users will need to add the following dependency to their pom.xml for this component: <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-ssh</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency> 328.1. URI format ssh:[username[:password]@]host[:port][?options] 328.2. Options The SSH component supports 15 options, which are listed below. Name Description Default Type configuration (advanced) To use the shared SSH configuration SshConfiguration host (common) Sets the hostname of the remote SSH server. String port (common) Sets the port number for the remote SSH server. int username (security) Sets the username to use in logging into the remote SSH server. String password (security) Sets the password to use in connecting to remote SSH server. Requires keyPairProvider to be set to null. String pollCommand (common) Sets the command string to send to the remote SSH server during every poll cycle. Only works with camel-ssh component being used as a consumer, i.e. from(ssh://... ). You may need to end your command with a newline, and that must be URL encoded %0A String keyPairProvider (security) Sets the KeyPairProvider reference to use when connecting using Certificates to the remote SSH Server. KeyPairProvider keyType (security) Sets the key type to pass to the KeyPairProvider as part of authentication. KeyPairProvider.loadKey(... ) will be passed this value. Defaults to ssh-rsa. String timeout (common) Sets the timeout in milliseconds to wait in establishing the remote SSH server connection. Defaults to 30000 milliseconds. long certFilename (security) Deprecated Sets the resource path of the certificate to use for Authentication. String certResource (security) Sets the resource path of the certificate to use for Authentication. Will use ResourceHelperKeyPairProvider to resolve file based certificate, and depends on keyType setting. String channelType (advanced) Sets the channel type to pass to the Channel as part of command execution. Defaults to exec. String shellPrompt (advanced) Sets the shellPrompt to be dropped when response is read after command execution String sleepForShellPrompt (advanced) Sets the sleep period in milliseconds to wait reading response from shell prompt. Defaults to 100 milliseconds. long resolveProperty Placeholders (advanced) Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true boolean The SSH endpoint is configured using URI syntax: with the following path and query parameters: 328.2.1. Path Parameters (2 parameters): Name Description Default Type host Required Sets the hostname of the remote SSH server. String port Sets the port number for the remote SSH server. 22 int 328.2.2. Query Parameters (31 parameters): Name Description Default Type failOnUnknownHost (common) Specifies whether a connection to an unknown host should fail or not. This value is only checked when the property knownHosts is set. false boolean knownHostsResource (common) Sets the resource path for a known_hosts file String timeout (common) Sets the timeout in milliseconds to wait in establishing the remote SSH server connection. Defaults to 30000 milliseconds. 
30000 long bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean pollCommand (consumer) Sets the command string to send to the remote SSH server during every poll cycle. Only works with camel-ssh component being used as a consumer, i.e. from(ssh://... ) You may need to end your command with a newline, and that must be URL encoded %0A String sendEmptyMessageWhenIdle (consumer) If the polling consumer did not poll any files, you can enable this option to send an empty message (no body) instead. false boolean exceptionHandler (consumer) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer) Sets the exchange pattern when the consumer creates an exchange. ExchangePattern pollStrategy (consumer) A pluggable org.apache.camel.PollingConsumerPollingStrategy allowing you to provide your custom implementation to control error handling usually occurred during the poll operation before an Exchange have been created and being routed in Camel. PollingConsumerPoll Strategy channelType (advanced) Sets the channel type to pass to the Channel as part of command execution. Defaults to exec. exec String shellPrompt (advanced) Sets the shellPrompt to be dropped when response is read after command execution String sleepForShellPrompt (advanced) Sets the sleep period in milliseconds to wait reading response from shell prompt. Defaults to 100 milliseconds. 100 long synchronous (advanced) Sets whether synchronous processing should be strictly used, or Camel is allowed to use asynchronous processing (if supported). false boolean backoffErrorThreshold (scheduler) The number of subsequent error polls (failed due some error) that should happen before the backoffMultipler should kick-in. int backoffIdleThreshold (scheduler) The number of subsequent idle polls that should happen before the backoffMultipler should kick-in. int backoffMultiplier (scheduler) To let the scheduled polling consumer backoff if there has been a number of subsequent idles/errors in a row. The multiplier is then the number of polls that will be skipped before the actual attempt is happening again. When this option is in use then backoffIdleThreshold and/or backoffErrorThreshold must also be configured. int delay (scheduler) Milliseconds before the poll. You can also specify time values using units, such as 60s (60 seconds), 5m30s (5 minutes and 30 seconds), and 1h (1 hour). 500 long greedy (scheduler) If greedy is enabled, then the ScheduledPollConsumer will run immediately again, if the run polled 1 or more messages. false boolean initialDelay (scheduler) Milliseconds before the first poll starts. You can also specify time values using units, such as 60s (60 seconds), 5m30s (5 minutes and 30 seconds), and 1h (1 hour). 1000 long runLoggingLevel (scheduler) The consumer logs a start/complete log line when it polls. This option allows you to configure the logging level for that. 
TRACE LoggingLevel scheduledExecutorService (scheduler) Allows for configuring a custom/shared thread pool to use for the consumer. By default each consumer has its own single threaded thread pool. ScheduledExecutor Service scheduler (scheduler) To use a cron scheduler from either camel-spring or camel-quartz2 component none ScheduledPollConsumer Scheduler schedulerProperties (scheduler) To configure additional properties when using a custom scheduler or any of the Quartz2, Spring based scheduler. Map startScheduler (scheduler) Whether the scheduler should be auto started. true boolean timeUnit (scheduler) Time unit for initialDelay and delay options. MILLISECONDS TimeUnit useFixedDelay (scheduler) Controls if fixed delay or fixed rate is used. See ScheduledExecutorService in JDK for details. true boolean certResource (security) Sets the resource path of the certificate to use for Authentication. Will use ResourceHelperKeyPairProvider to resolve file based certificate, and depends on keyType setting. String keyPairProvider (security) Sets the KeyPairProvider reference to use when connecting using Certificates to the remote SSH Server. KeyPairProvider keyType (security) Sets the key type to pass to the KeyPairProvider as part of authentication. KeyPairProvider.loadKey(... ) will be passed this value. Defaults to ssh-rsa. ssh-rsa String password (security) Sets the password to use in connecting to remote SSH server. Requires keyPairProvider to be set to null. String username (security) Sets the username to use in logging into the remote SSH server. String 328.3. Spring Boot Auto-Configuration The component supports 30 options, which are listed below. Name Description Default Type camel.component.ssh.cert-resource Sets the resource path of the certificate to use for Authentication. Will use ResourceHelperKeyPairProvider to resolve file based certificate, and depends on keyType setting. String camel.component.ssh.channel-type Sets the channel type to pass to the Channel as part of command execution. Defaults to exec. String camel.component.ssh.configuration.cert-resource Sets the resource path of the certificate to use for Authentication. Will use ResourceHelperKeyPairProvider to resolve file based certificate, and depends on keyType setting. String camel.component.ssh.configuration.channel-type Sets the channel type to pass to the Channel as part of command execution. Defaults to exec. exec String camel.component.ssh.configuration.fail-on-unknown-host Specifies whether a connection to an unknown host should fail or not. This value is only checked when the property knownHosts is set. false Boolean camel.component.ssh.configuration.host Sets the hostname of the remote SSH server. String camel.component.ssh.configuration.key-pair-provider Sets the KeyPairProvider reference to use when connecting using Certificates to the remote SSH Server. KeyPairProvider camel.component.ssh.configuration.key-type Sets the key type to pass to the KeyPairProvider as part of authentication. KeyPairProvider.loadKey(... ) will be passed this value. Defaults to ssh-rsa. ssh-rsa String camel.component.ssh.configuration.known-hosts-resource Sets the resource path for a known_hosts file String camel.component.ssh.configuration.password Sets the password to use in connecting to remote SSH server. Requires keyPairProvider to be set to null. String camel.component.ssh.configuration.poll-command Sets the command string to send to the remote SSH server during every poll cycle. 
Only works with camel-ssh component being used as a consumer, i.e. from(ssh://... ) You may need to end your command with a newline, and that must be URL encoded %0A String camel.component.ssh.configuration.port Sets the port number for the remote SSH server. 22 Integer camel.component.ssh.configuration.shell-prompt Sets the shellPrompt to be dropped when response is read after command execution String camel.component.ssh.configuration.sleep-for-shell-prompt Sets the sleep period in milliseconds to wait reading response from shell prompt. Defaults to 100 milliseconds. 100 Long camel.component.ssh.configuration.timeout Sets the timeout in milliseconds to wait in establishing the remote SSH server connection. Defaults to 30000 milliseconds. 30000 Long camel.component.ssh.configuration.username Sets the username to use in logging into the remote SSH server. String camel.component.ssh.enabled Enable ssh component true Boolean camel.component.ssh.host Sets the hostname of the remote SSH server. String camel.component.ssh.key-pair-provider Sets the KeyPairProvider reference to use when connecting using Certificates to the remote SSH Server. The option is a org.apache.sshd.common.keyprovider.KeyPairProvider type. String camel.component.ssh.key-type Sets the key type to pass to the KeyPairProvider as part of authentication. KeyPairProvider.loadKey(... ) will be passed this value. Defaults to ssh-rsa. String camel.component.ssh.password Sets the password to use in connecting to remote SSH server. Requires keyPairProvider to be set to null. String camel.component.ssh.poll-command Sets the command string to send to the remote SSH server during every poll cycle. Only works with camel-ssh component being used as a consumer, i.e. from(ssh://... ). You may need to end your command with a newline, and that must be URL encoded %0A String camel.component.ssh.port Sets the port number for the remote SSH server. Integer camel.component.ssh.resolve-property-placeholders Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true Boolean camel.component.ssh.shell-prompt Sets the shellPrompt to be dropped when response is read after command execution String camel.component.ssh.sleep-for-shell-prompt Sets the sleep period in milliseconds to wait reading response from shell prompt. Defaults to 100 milliseconds. Long camel.component.ssh.timeout Sets the timeout in milliseconds to wait in establishing the remote SSH server connection. Defaults to 30000 milliseconds. Long camel.component.ssh.username Sets the username to use in logging into the remote SSH server. String camel.component.ssh.cert-filename Sets the resource path of the certificate to use for Authentication. String camel.component.ssh.configuration.cert-filename @deprecated As of version 2.11, replaced by {@link #setCertResource(String)} String 328.4. Usage as a Producer endpoint When the SSH Component is used as a Producer ( .to("ssh://... ") ), it will send the message body as the command to execute on the remote SSH server. Here is an example of this within the XML DSL. Note that the command has an XML encoded newline ( &#10; ). <route id="camel-example-ssh-producer"> <from uri="direct:exampleSshProducer"/> <setBody> <constant>features:list&#10;</constant> </setBody> <to uri="ssh://karaf:karaf@localhost:8101"/> <log message="USD{body}"/> </route> 328.5. 
Authentication The SSH Component can authenticate against the remote SSH server using one of two mechanisms: Public Key certificate or username/password. Configuring how the SSH Component does authentication is based on how and which options are set. First, it will look to see if the certResource option has been set, and if so, use it to locate the referenced Public Key certificate and use that for authentication. If certResource is not set, it will look to see if a keyPairProvider has been set, and if so, it will use that for certificate-based authentication. If neither certResource nor keyPairProvider is set, it will use the username and password options for authentication. If the username and password are provided both in the endpoint configuration and in headers set with SshConstants.USERNAME_HEADER ( CamelSshUsername ) and SshConstants.PASSWORD_HEADER ( CamelSshPassword ), the endpoint configuration is superseded and the credentials set in the headers are used. The following route fragment shows an SSH polling consumer using a certificate from the classpath. In the XML DSL, <route> <from uri="ssh://scott@localhost:8101?certResource=classpath:test_rsa&amp;useFixedDelay=true&amp;delay=5000&amp;pollCommand=features:list%0A"/> <log message="${body}"/> </route> In the Java DSL, from("ssh://scott@localhost:8101?certResource=classpath:test_rsa&useFixedDelay=true&delay=5000&pollCommand=features:list%0A") .log("${body}"); An example of using Public Key authentication is provided in examples/camel-example-ssh-security . Certificate Dependencies You will need to add some additional runtime dependencies if you use certificate-based authentication. The dependency versions shown are as of Camel 2.11; you may need to use later versions depending on what version of Camel you are using. <dependency> <groupId>org.apache.sshd</groupId> <artifactId>sshd-core</artifactId> <version>0.8.0</version> </dependency> <dependency> <groupId>org.bouncycastle</groupId> <artifactId>bcpg-jdk18on</artifactId> <version>1.72</version> </dependency> <dependency> <groupId>org.bouncycastle</groupId> <artifactId>bcpkix-jdk18on</artifactId> <version>1.72</version> </dependency> 328.6. Example See the examples/camel-example-ssh and examples/camel-example-ssh-security in the Camel distribution. 328.7. See Also Configuring Camel Component Endpoint Getting Started
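For comparison with the XML DSL producer route in section 328.4, the following is a minimal Java DSL sketch of the same producer, extended to pass credentials through the message headers described above. The endpoint address, command, and credentials are placeholder values assumed for the example, not settings taken from the original documentation.

import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.component.ssh.SshConstants;

public class SshProducerRoute extends RouteBuilder {
    @Override
    public void configure() {
        from("direct:exampleSshProducer")
            // The message body becomes the command executed on the remote SSH server;
            // end it with a newline so the remote shell runs it.
            .setBody(constant("features:list\n"))
            // Credentials set in these headers take precedence over the endpoint configuration.
            .setHeader(SshConstants.USERNAME_HEADER, constant("karaf"))
            .setHeader(SshConstants.PASSWORD_HEADER, constant("karaf"))
            .to("ssh://localhost:8101")
            .log("${body}");
    }
}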
[ "<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-ssh</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency>", "ssh:[username[:password]@]host[:port][?options]", "ssh:host:port", "<route id=\"camel-example-ssh-producer\"> <from uri=\"direct:exampleSshProducer\"/> <setBody> <constant>features:list&#10;</constant> </setBody> <to uri=\"ssh://karaf:karaf@localhost:8101\"/> <log message=\"USD{body}\"/> </route>", "<route> <from uri=\"ssh://scott@localhost:8101?certResource=classpath:test_rsa&amp;useFixedDelay=true&amp;delay=5000&amp;pollCommand=features:list%0A\"/> <log message=\"USD{body}\"/> </route>", "from(\"ssh://scott@localhost:8101?certResource=classpath:test_rsa&useFixedDelay=true&delay=5000&pollCommand=features:list%0A\") .log(\"USD{body}\");", "<dependency> <groupId>org.apache.sshd</groupId> <artifactId>sshd-core</artifactId> <version>0.8.0</version> </dependency> <dependency> <groupId>org.bouncycastle</groupId> <artifactId>bcpg-jdk18on</artifactId> <version>1.72</version> </dependency> <dependency> <groupId>org.bouncycastle</groupId> <artifactId>bcpkix-jdk18on</artifactId> <version>1.72</version> </dependency>" ]
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/ssh-component
Appendix C. Boot options reference
Appendix C. Boot options reference You can use the boot options to modify the default behavior of the installation program. C.1. Installation source boot options This section describes various installation source boot options. inst.repo= The inst.repo= boot option specifies the installation source, that is, the location providing the package repositories and a valid .treeinfo file that describes them. For example: inst.repo=cdrom . The target of the inst.repo= option must be one of the following installation media: an installable tree, which is a directory structure containing the installation program images, packages, and repository data as well as a valid .treeinfo file a DVD (a physical disk present in the system DVD drive) an ISO image of the full Red Hat Enterprise Linux installation DVD, placed on a disk or a network location accessible to the system. Use the inst.repo= boot option to configure different installation methods using different formats. The following table contains details of the inst.repo= boot option syntax: Table C.1. Types and format for the inst.repo= boot option and installation source Source type Boot option format Source format CD/DVD drive inst.repo=cdrom: <device> Installation DVD as a physical disk. [a] Mountable device (HDD and USB stick) inst.repo=hd: <device> :/ <path> Image file of the installation DVD. NFS Server inst.repo=nfs:[ options :] <server> :/ <path> Image file of the installation DVD, or an installation tree, which is a complete copy of the directories and files on the installation DVD. [b] HTTP Server inst.repo=http:// <host> / <path> Installation tree that is a complete copy of the directories and files on the installation DVD. HTTPS Server inst.repo=https:// <host> / <path> FTP Server inst.repo=ftp:// <username> : <password> @ <host> / <path> HMC inst.repo=hmc [a] If device is left out, installation program automatically searches for a drive containing the installation DVD. [b] The NFS Server option uses NFS protocol version 3 by default. To use a different version, add nfsvers= X to options , replacing X with the version number that you want to use. Set disk device names with the following formats: Kernel device name, for example /dev/sda1 or sdb2 File system label, for example LABEL=Flash or LABEL=RHEL8 File system UUID, for example UUID=8176c7bf-04ff-403a-a832-9557f94e61db Non-alphanumeric characters must be represented as \xNN , where NN is the hexadecimal representation of the character. For example, \x20 is a white space (" ") . inst.addrepo= Use the inst.addrepo= boot option to add an additional repository that you can use as another installation source along with the main repository ( inst.repo= ). You can use the inst.addrepo= boot option multiple times during one boot. The following table contains details of the inst.addrepo= boot option syntax. Note The REPO_NAME is the name of the repository and is required in the installation process. These repositories are only used during the installation process; they are not installed on the installed system. For more information about unified ISO, see Unified ISO. Table C.2. Installation sources and boot option format Installation source Boot option format Additional information Installable tree at a URL inst.addrepo=REPO_NAME,[http,https,ftp]:// <host> / <path> Looks for the installable tree at a given URL. Installable tree at an NFS path inst.addrepo=REPO_NAME,nfs:// <server> :/ <path> Looks for the installable tree at a given NFS path. A colon is required after the host. 
The installation program passes everything after nfs:// directly to the mount command instead of parsing URLs according to RFC 2224. Installable tree in the installation environment inst.addrepo=REPO_NAME,file:// <path> Looks for the installable tree at the given location in the installation environment. To use this option, the repository must be mounted before the installation program attempts to load the available software groups. The benefit of this option is that you can have multiple repositories on one bootable ISO, and you can install both the main repository and additional repositories from the ISO. The path to the additional repositories is /run/install/source/REPO_ISO_PATH . Additionally, you can mount the repository directory in the %pre section in the Kickstart file. The path must be absolute and start with / , for example inst.addrepo=REPO_NAME,file:/// <path> Disk inst.addrepo=REPO_NAME,hd: <device> : <path> Mounts the given <device> partition and installs from the ISO that is specified by the <path> . If the <path> is not specified, the installation program looks for a valid installation ISO on the <device> . This installation method requires an ISO with a valid installable tree. inst.stage2= The inst.stage2= boot option specifies the location of the installation program's runtime image. This option expects the path to a directory that contains a valid .treeinfo file and reads the runtime image location from the .treeinfo file. If the .treeinfo file is not available, the installation program attempts to load the image from images/install.img . When you do not specify the inst.stage2 option, the installation program attempts to use the location specified with the inst.repo option. Use this option when you want to manually specify the installation source in the installation program at a later time. For example, when you want to select the Content Delivery Network (CDN) as an installation source. The installation DVD and Boot ISO already contain a suitable inst.stage2 option to boot the installation program from the respective ISO. If you want to specify an installation source, use the inst.repo= option instead. Note By default, the inst.stage2= boot option is used on the installation media and is set to a specific label; for example, inst.stage2=hd:LABEL=RHEL-x-0-0-BaseOS-x86_64 . If you modify the default label of the file system that contains the runtime image, or if you use a customized procedure to boot the installation system, verify that the inst.stage2= boot option is set to the correct value. inst.noverifyssl Use the inst.noverifyssl boot option to prevent the installer from verifying SSL certificates for all HTTPS connections with the exception of additional Kickstart repositories, where --noverifyssl can be set per repository. For example, if your remote installation source is using self-signed SSL certificates, the inst.noverifyssl boot option enables the installer to complete the installation without verifying the SSL certificates. Example when specifying the source using inst.stage2= Example when specifying the source using inst.repo= inst.stage2.all Use the inst.stage2.all boot option to specify several HTTP, HTTPS, or FTP sources. You can use the inst.stage2= boot option multiple times with the inst.stage2.all option to fetch the image from the sources sequentially until one succeeds. For example: inst.dd= The inst.dd= boot option is used to perform a driver update during the installation. 
For more information about how to update drivers during installation, see the Updating drivers during installation . inst.repo=hmc This option eliminates the requirement of an external network setup and expands the installation options. When booting from a Binary DVD, the installation program prompts you to enter additional kernel parameters. To set the DVD as an installation source, append the inst.repo=hmc option to the kernel parameters. The installation program then enables support element (SE) and hardware management console (HMC) file access, fetches the images for stage2 from the DVD, and provides access to the packages on the DVD for software selection. Important To use the inst.repo boot option, ensure the user is configured with a minimum of Privilege Class B . For more information about the user configuration, see IBM documentation . inst.proxy= This boot option is used when performing an installation from a HTTP, HTTPS, and FTP protocol. For example: inst.nosave= Use the inst.nosave= boot option to control the installation logs and related files that are not saved to the installed system, for example input_ks , output_ks , all_ks , logs and all . You can combine multiple values separated by a comma. For example, Note The inst.nosave boot option is used for excluding files from the installed system that cannot be removed by a Kickstart %post script, such as logs and input/output Kickstart results. input_ks Disables the ability to save the input Kickstart results. output_ks Disables the ability to save the output Kickstart results generated by the installation program. all_ks Disables the ability to save the input and output Kickstart results. logs Disables the ability to save all installation logs. all Disables the ability to save all Kickstart results, and all logs. inst.multilib Use the inst.multilib boot option to set DNF's multilib_policy to all , instead of best . inst.memcheck The inst.memcheck boot option performs a check to verify that the system has enough RAM to complete the installation. If there is not enough RAM, the installation process is stopped. The system check is approximate and memory usage during installation depends on the package selection, user interface, for example graphical or text, and other parameters. inst.nomemcheck The inst.nomemcheck boot option does not perform a check to verify if the system has enough RAM to complete the installation. Any attempt to perform the installation with less than the minimum amount of memory is unsupported, and might result in the installation process failing. C.2. Network boot options If your scenario requires booting from an image over the network instead of booting from a local image, you can use the following options to customize network booting. Note Initialize the network with the dracut tool. For complete list of dracut options, see the dracut.cmdline(7) man page on your system. ip= Use the ip= boot option to configure one or more network interfaces. To configure multiple interfaces, use one of the following methods; use the ip option multiple times, once for each interface; to do so, use the rd.neednet=1 option, and specify a primary boot interface using the bootdev option. use the ip option once, and then use Kickstart to set up further interfaces. This option accepts several different formats. The following tables contain information about the most common options. 
In the following tables: The ip parameter specifies the client IP address and IPv6 requires square brackets, for example 192.0.2.1 or [2001:db8::99]. The gateway parameter is the default gateway. IPv6 requires square brackets. The netmask parameter is the netmask to be used. This can be either a full netmask (for example, 255.255.255.0) or a prefix (for example, 64). The hostname parameter is the host name of the client system. This parameter is optional. Table C.3. Boot option formats to configure the network interface Boot option format Configuration method ip= method Automatic configuration of any interface ip= interface:method Automatic configuration of a specific interface ip= ip::gateway:netmask:hostname:interface :none Static configuration, for example, IPv4: ip=192.0.2.1::192.0.2.254:255.255.255.0:server.example.com:enp1s0:none IPv6: ip=[2001:db8::1]::[2001:db8::fffe]:64:server.example.com:enp1s0:none ip= ip::gateway:netmask:hostname:interface:method:mtu Automatic configuration of a specific interface with an override Configuration methods for the automatic interface The method automatic configuration of a specific interface with an override opens the interface using the specified method of automatic configuration, such as dhcp , but overrides the automatically obtained IP address, gateway, netmask, host name or other specified parameters. All parameters are optional, so specify only the parameters that you want to override. The method parameter can be any of the following: DHCP dhcp IPv6 DHCP dhcp6 IPv6 automatic configuration auto6 iSCSI Boot Firmware Table (iBFT) ibft Note If you use a boot option that requires network access, such as inst.ks=http://host/path , without specifying the ip option, the default value of the ip option is ip=dhcp .. To connect to an iSCSI target automatically, activate a network device for accessing the target by using the ip=ibft boot option. nameserver= The nameserver= option specifies the address of the name server. You can use this option multiple times. Note The ip= parameter requires square brackets. However, an IPv6 address does not work with square brackets. An example of the correct syntax to use for an IPv6 address is nameserver=2001:db8::1 . bootdev= The bootdev= option specifies the boot interface. This option is mandatory if you use more than one ip option. ifname= The ifname= options assigns an interface name to a network device with a given MAC address. You can use this option multiple times. The syntax is ifname=interface:MAC . For example: Note The ifname= option is the only supported way to set custom network interface names during installation. inst.dhcpclass= The inst.dhcpclass= option specifies the DHCP vendor class identifier. The dhcpd service recognizes this value as vendor-class-identifier . The default value is anaconda-USD(uname -srm) . To ensure the inst.dhcpclass= option is applied correctly, request network activation during the early stage of installation by also adding the ip option. inst.waitfornet= Using the inst.waitfornet=SECONDS boot option causes the installation system to wait for network connectivity before installation. The value given in the SECONDS argument specifies the maximum amount of time to wait for network connectivity before timing out and continuing the installation process even if network connectivity is not present. vlan= Use the vlan= option to configure a Virtual LAN (VLAN) device on a specified interface with a given name. The syntax is vlan=name:interface . 
For example: This configures a VLAN device named vlan5 on the enp0s1 interface. The name can take the following forms: VLAN_PLUS_VID: vlan0005 VLAN_PLUS_VID_NO_PAD: vlan5 DEV_PLUS_VID: enp0s1.0005 DEV_PLUS_VID_NO_PAD: enp0s1.5 bond= Use the bond= option to configure a bonding device with the following syntax: bond=name[:interfaces][:options] . Replace name with the bonding device name, interfaces with a comma-separated list of physical (Ethernet) interfaces, and options with a comma-separated list of bonding options. For example: For a list of available options, execute the modinfo bonding command. team= Use the team= option to configure a team device with the following syntax: team=name:interfaces . Replace name with the desired name of the team device and interfaces with a comma-separated list of physical (Ethernet) devices to be used as underlying interfaces in the team device. For example: Important NIC teaming is deprecated in Red Hat Enterprise Linux 9. Consider using the network bonding driver as an alternative. For details, see Configuring a network bond . bridge= Use the bridge= option to configure a bridge device with the following syntax: bridge=name:interfaces . Replace name with the desired name of the bridge device and interfaces with a comma-separated list of physical (Ethernet) devices to be used as underlying interfaces in the bridge device. For example: Additional resources Configuring and managing networking C.3. Console boot options This section describes how to configure boot options for your console, monitor display, and keyboard. console= Use the console= option to specify a device that you want to use as the primary console. For example, to use a console on the first serial port, use console=ttyS0 . When using the console= argument, the installation starts with a text UI. If you must use the console= option multiple times, the boot message is displayed on all specified console. However, the installation program uses only the last specified console. For example, if you specify console=ttyS0 console=ttyS1 , the installation program uses ttyS1 . inst.lang= Use the inst.lang= option to set the language that you want to use during the installation. To view the list of locales, enter the command locale -a | grep _ or the localectl list-locales | grep _ command. inst.geoloc= Use the inst.geoloc= option to configure geolocation usage in the installation program. Geolocation is used to preset the language and time zone, and uses the following syntax: inst.geoloc=value . The value can be any of the following parameters: Disable geolocation: inst.geoloc=0 Use the Fedora GeoIP API: inst.geoloc=provider_fedora_geoip . This option is deprecated. Use the Hostip.info GeoIP API: inst.geoloc=provider_hostip . This option is deprecated. inst.keymap= Use the inst.keymap= option to specify the keyboard layout to use for the installation. inst.cmdline Use the inst.cmdline option to force the installation program to run in command-line mode. This mode does not allow any interaction, and you must specify all options in a Kickstart file or on the command line. inst.graphical Use the inst.graphical option to force the installation program to run in graphical mode. The graphical mode is the default. inst.text Use the inst.text option to force the installation program to run in text mode instead of graphical mode. inst.noninteractive Use the inst.noninteractive boot option to run the installation program in a non-interactive mode. 
User interaction is not permitted in the non-interactive mode, and you can use the inst.noninteractive option with a graphical or text installation. When you use the inst.noninteractive option in text mode, it behaves the same as the inst.cmdline option. inst.resolution= Use the inst.resolution= option to specify the screen resolution in graphical mode. The format is NxM , where N is the screen width and M is the screen height (in pixels). The recommended resolution is 1024x768. inst.vnc Use the inst.vnc option to run the graphical installation using Virtual Network Computing (VNC). You must use a VNC client application to interact with the installation program. When VNC sharing is enabled, multiple clients can connect. A system installed using VNC starts in text mode. inst.vncpassword= Use the inst.vncpassword= option to set a password on the VNC server that is used by the installation program. inst.vncconnect= Use the inst.vncconnect= option to connect to a listening VNC client at the given host location, for example, inst.vncconnect=<host>[:<port>] The default port is 5900. You can use this option by entering the command vncviewer -listen . inst.xdriver= Use the inst.xdriver= option to specify the name of the X driver to use both during installation and on the installed system. inst.usefbx Use the inst.usefbx option to prompt the installation program to use the frame buffer X driver instead of a hardware-specific driver. This option is equivalent to the inst.xdriver=fbdev option. modprobe.blacklist= Use the modprobe.blacklist= option to blocklist or completely disable one or more drivers. Drivers (mods) that you disable using this option cannot load when the installation starts. After the installation finishes, the installed system retains these settings. You can find a list of the blocklisted drivers in the /etc/modprobe.d/ directory. Use a comma-separated list to disable multiple drivers. For example: Note You can use modprobe.blacklist in combination with the different command line options. For example, use it with the inst.dd option to ensure that an updated version of an existing driver is loaded from a driver update disc: inst.xtimeout= Use the inst.xtimeout= option to specify the timeout in seconds for starting the X server. inst.sshd Use the inst.sshd option to start the sshd service during installation, so that you can connect to the system during the installation using SSH, and monitor the installation progress. For more information about SSH, see the ssh(1) man page on your system. By default, the sshd service is automatically started only on the 64-bit IBM Z architecture. On other architectures, sshd is not started unless you use the inst.sshd option. Note During installation, the root account has no password by default. You can set a root password during installation with the sshpw Kickstart command. inst.kdump_addon= Use the inst.kdump_addon= option to enable or disable the Kdump configuration screen (add-on) in the installation program. This screen is enabled by default; use inst.kdump_addon=off to disable it. Disabling the add-on disables the Kdump screens in both the graphical and text-based interface as well as the %addon com_redhat_kdump Kickstart command. C.4. Debug boot options This section describes the options you can use when debugging issues. inst.rescue Use the inst.rescue option to run the rescue environment for diagnosing and fixing systems. For more information, see the Red Hat Knowledgebase solution repair a filesystem in rescue mode .
inst.updates= Use the inst.updates= option to specify the location of the updates.img file that you want to apply during installation. The updates.img file can be derived from one of several sources. Table C.4. updates.img file sources Source Description Example Updates from a network Specify the network location of updates.img . This does not require any modification to the installation tree. To use this method, edit the kernel command line to include inst.updates . inst.updates=http://website.com/path/to/updates.img . Updates from a disk image Save an updates.img on a floppy drive or a USB key. This can be done only with an ext2 filesystem type of updates.img . To save the contents of the image on your floppy drive, insert the floppy disc and run the command. dd if=updates.img of=/dev/fd0 bs=72k count=20 . To use a USB key or flash media, replace /dev/fd0 with the device name of your USB flash drive. Updates from an installation tree If you are using a CD, disk, HTTP, or FTP install, save the updates.img in the installation tree so that all installations can detect the .img file. The file name must be updates.img . For NFS installs, save the file in the images/ directory, or in the RHupdates/ directory. inst.syslog= Sends log messages to the syslog process on the specified host when the installation starts. You can use inst.syslog= only if the remote syslog process is configured to accept incoming connections. inst.virtiolog= Use the inst.virtiolog= option to specify which virtio port (a character device at /dev/virtio-ports/name ) to use for forwarding logs. The default value is org.fedoraproject.anaconda.log.0 . rd.live.ram Copies the stage 2 image in images/install.img into RAM. Note that this increases the memory required for installation by the size of the image which is usually between 400 and 800MB. inst.nokill Prevent the installation program from rebooting when a fatal error occurs, or at the end of the installation process. Use it capture installation logs which would be lost upon reboot. inst.noshell Prevent a shell on terminal session 2 (tty2) during installation. inst.notmux Prevent the use of tmux during installation. The output is generated without terminal control characters and is meant for non-interactive uses. inst.remotelog= Sends all the logs to a remote host:port using a TCP connection. The connection is retired if there is no listener and the installation proceeds as normal. C.5. Storage boot options This section describes the options you can specify to customize booting from a storage device. inst.nodmraid Disables dmraid support. Warning Use this option with caution. If you have a disk that is incorrectly identified as part of a firmware RAID array, it might have some stale RAID metadata on it that must be removed using the appropriate tool such as, dmraid or wipefs . inst.nompath Disables support for multipath devices. Use this option only if your system has a false-positive that incorrectly identifies a normal block device as a multipath device. Warning Use this option with caution. Do not use this option with multipath hardware. Using this option to install to a single path of a multipath device is not supported. inst.gpt Forces the installation program to install partition information to a GUID Partition Table (GPT) instead of a Master Boot Record (MBR). This option is not valid on UEFI-based systems, unless they are in BIOS compatibility mode. 
Normally, BIOS-based systems and UEFI-based systems in BIOS compatibility mode attempt to use the MBR schema for storing partitioning information, unless the disk is 2^32 sectors in size or larger. Disk sectors are typically 512 bytes in size, meaning that this is usually equivalent to 2 TiB. The inst.gpt boot option allows a GPT to be written to smaller disks. inst.wait_for_disks= Use the inst.wait_for_disks= option to specify the number of seconds installation program to wait for disk devices to appear at the beginning of the installation. Use this option when you use the OEMDRV-labeled device to automatically load the Kickstart file or the kernel drivers but the device takes longer time to appear during the boot process. By default, installation program waits for 5 seconds. Use 0 seconds to minimize the delay. C.6. Deprecated boot options This section contains information about deprecated boot options. These options are still accepted by the installation program but they are deprecated and are scheduled to be removed in a future release of Red Hat Enterprise Linux. method The method option is an alias for inst.repo . dns Use nameserver instead of dns . Note that nameserver does not accept comma-separated lists; use multiple nameserver options instead. ksdevice Table C.5. Values for the ksdevice boot option Value Information Not present N/A ksdevice=link Ignored as this option is the same as the default behavior ksdevice=bootif Ignored as this option is the default if BOOTIF= is present ksdevice=ibft Replaced with ip=ibft . See ip for details ksdevice=<MAC> Replaced with BOOTIF=USD{MAC/:/-} ksdevice=<DEV> Replaced with bootdev C.7. Removed boot options This section contains the boot options that have been removed from Red Hat Enterprise Linux. Note dracut provides advanced boot options. For more information about dracut , see the dracut.cmdline(7) man page on your system. askmethod, asknetwork initramfs is completely non-interactive, so the askmethod and asknetwork options have been removed. Use inst.repo or specify the appropriate network options. blacklist, nofirewire The modprobe option now handles blocklisting kernel modules. Use modprobe.blacklist=<mod1>,<mod2> . You can blocklist the firewire module by using modprobe.blacklist=firewire_ohci . inst.headless= The headless= option specified that the system that is being installed to does not have any display hardware, and that the installation program is not required to look for any display hardware. inst.decorated The inst.decorated option was used to specify the graphical installation in a decorated window. By default, the window is not decorated, so it does not have a title bar, resize controls, and so on. This option was no longer required. repo=nfsiso Use the inst.repo=nfs: option. serial Use the console=ttyS0 option. updates Use the inst.updates option. essid, wepkey, wpakey Dracut does not support wireless networking. ethtool This option was no longer required. gdb This option was removed because many options are available for debugging dracut-based initramfs . inst.mediacheck Use the dracut option rd.live.check option. ks=floppy Use the inst.ks=hd:<device> option. display For a remote display of the UI, use the inst.vnc option. utf8 This option was no longer required because the default TERM setting behaves as expected. noipv6 ipv6 is built into the kernel and cannot be removed by the installation program. You can disable ipv6 by using ipv6.disable=1 . This setting is used by the installed system. 
upgradeany This option was no longer required because the installation program no longer handles upgrades. netmask, gateway, hostname The netmask , gateway , and hostname options are provided as part of the ip option. ip=bootif A PXE-supplied BOOTIF option is used automatically, so there is no requirement to use ip=bootif . inst.zram The zram.service cannot be run anymore. See zram-generator for more information. inst.singlelang The single language mode is not supported anymore. inst.repo=hd:<device>:<path> for installable tree This option cannot be used with installable tree but only with an ISO file. inst.loglevel The log level is always set to debug.
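Bringing several of the preceding options together, the following kernel command line is an illustrative sketch of a network installation over HTTP with a static address and a serial text console; the repository URL, addresses, host name, and interface name are placeholder values, not defaults.

inst.repo=http://repo.example.com/rhel9/BaseOS/x86_64/os/ ip=192.0.2.10::192.0.2.1:255.255.255.0:server.example.com:enp1s0:none nameserver=192.0.2.53 inst.text console=ttyS0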
[ "inst.stage2=https://hostname/path_to_install_image/ inst.noverifyssl", "inst.repo=https://hostname/path_to_install_repository/ inst.noverifyssl", "inst.stage2.all inst.stage2=http://hostname1/path_to_install_tree/ inst.stage2=http://hostname2/path_to_install_tree/ inst.stage2=http://hostname3/path_to_install_tree/", "[PROTOCOL://][USERNAME[:PASSWORD]@]HOST[:PORT]", "inst.nosave=Input_ks,logs", "ifname=eth0:01:23:45:67:89:ab", "vlan=vlan5:enp0s1", "bond=bond0:enp0s1,enp0s2:mode=active-backup,tx_queues=32,downdelay=5000", "team=team0:enp0s1,enp0s2", "bridge=bridge0:enp0s1,enp0s2", "modprobe.blacklist=ahci,firewire_ohci", "modprobe.blacklist=virtio_blk" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/interactively_installing_rhel_from_installation_media/custom-boot-options_rhel-installer
Chapter 5. Snapshot management
Chapter 5. Snapshot management As a storage administrator, being familiar with Ceph's snapshotting feature can help you manage the snapshots and clones of images stored in the Red Hat Ceph Storage cluster. 5.1. Prerequisites A running Red Hat Ceph Storage cluster. 5.2. Ceph block device snapshots A snapshot is a read-only copy of the state of an image at a particular point in time. One of the advanced features of Ceph block devices is that you can create snapshots of the images to retain a history of an image's state. Ceph also supports snapshot layering, which allows you to clone images quickly and easily, for example a virtual machine image. Ceph supports block device snapshots using the rbd command and many higher level interfaces, including QEMU , libvirt , OpenStack and CloudStack. Note If a snapshot is taken while I/O is occurring, then the snapshot might not get the exact or latest data of the image and the snapshot might have to be cloned to a new image to be mountable. Red Hat recommends stopping I/O before taking a snapshot of an image. If the image contains a filesystem, the filesystem must be in a consistent state before taking a snapshot. To stop I/O you can use fsfreeze command. For virtual machines, the qemu-guest-agent can be used to automatically freeze filesystems when creating a snapshot. Figure 5.1. Ceph Block device snapshots Additional Resources See the fsfreeze(8) man page for more details. 5.3. The Ceph user and keyring When cephx is enabled, you must specify a user name or ID and a path to the keyring containing the corresponding key for the user. Note cephx is enabled by default. You might also add the CEPH_ARGS environment variable to avoid re-entry of the following parameters: Syntax Example Tip Add the user and secret to the CEPH_ARGS environment variable so that you do not need to enter them each time. 5.4. Creating a block device snapshot Create a snapshot of a Ceph block device. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the node. Procedure Specify the snap create option, the pool name and the image name: Method 1: Syntax Example Method 2: Syntax Example 5.5. Listing the block device snapshots List the block device snapshots. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the node. Procedure Specify the pool name and the image name: Syntax Example 5.6. Rolling back a block device snapshot Rollback a block device snapshot. Note Rolling back an image to a snapshot means overwriting the current version of the image with data from a snapshot. The time it takes to execute a rollback increases with the size of the image. It is faster to clone from a snapshot than to rollback an image to a snapshot, and it is the preferred method of returning to a pre-existing state. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the node. Procedure Specify the snap rollback option, the pool name, the image name and the snap name: Syntax Example 5.7. Deleting a block device snapshot Delete a snapshot for Ceph block devices. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the node. Procedure To delete a block device snapshot, specify the snap rm option, the pool name, the image name and the snapshot name: Syntax Example Important If an image has any clones, the cloned images retain reference to the parent image snapshot. To delete the parent image snapshot, you must flatten the child images first. 
Note Ceph OSD daemons delete data asynchronously, so deleting a snapshot does not free up the disk space immediately. Additional Resources See the Flattening cloned images in the Red Hat Ceph Storage Block Device Guide for details. 5.8. Purging the block device snapshots Purge block device snapshots. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the node. Procedure Specify the snap purge option and the image name on a specific pool: Syntax Example 5.9. Renaming a block device snapshot Rename a block device snapshot. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the node. Procedure To rename a snapshot: Syntax Example This renames snap1 snapshot of the dataset image on the data pool to snap2 . Execute the rbd help snap rename command to display additional details on renaming snapshots. 5.10. Ceph block device layering Ceph supports the ability to create many copy-on-write (COW) or copy-on-read (COR) clones of a block device snapshot. Snapshot layering enables Ceph block device clients to create images very quickly. For example, you might create a block device image with a Linux VM written to it. Then, snapshot the image, protect the snapshot, and create as many clones as you like. A snapshot is read-only, so cloning a snapshot simplifies semantics- making it possible to create clones rapidly. Figure 5.2. Ceph Block device layering Note The terms parent and child mean a Ceph block device snapshot, parent, and the corresponding image cloned from the snapshot, child. These terms are important for the command line usage below. Each cloned image, the child, stores a reference to its parent image, which enables the cloned image to open the parent snapshot and read it. This reference is removed when the clone is flattened that is, when information from the snapshot is completely copied to the clone. A clone of a snapshot behaves exactly like any other Ceph block device image. You can read to, write from, clone, and resize the cloned images. There are no special restrictions with cloned images. However, the clone of a snapshot refers to the snapshot, so you MUST protect the snapshot before you clone it. A clone of a snapshot can be a copy-on-write (COW) or copy-on-read (COR) clone. Copy-on-write (COW) is always enabled for clones while copy-on-read (COR) has to be enabled explicitly. Copy-on-write (COW) copies data from the parent to the clone when it writes to an unallocated object within the clone. Copy-on-read (COR) copies data from the parent to the clone when it reads from an unallocated object within the clone. Reading data from a clone will only read data from the parent if the object does not yet exist in the clone. Rados block device breaks up large images into multiple objects. The default is set to 4 MB and all copy-on-write (COW) and copy-on-read (COR) operations occur on a full object, that is writing 1 byte to a clone will result in a 4 MB object being read from the parent and written to the clone if the destination object does not already exist in the clone from a COW/COR operation. Whether or not copy-on-read (COR) is enabled, any reads that cannot be satisfied by reading an underlying object from the clone will be rerouted to the parent. Since there is practically no limit to the number of parents, meaning that you can clone a clone, this reroute continues until an object is found or you hit the base parent image. 
If copy-on-read (COR) is enabled, any reads that fail to be satisfied directly from the clone result in a full object being read from the parent and that data being written to the clone, so that future reads of the same extent can be satisfied from the clone itself without the need to read from the parent. This is essentially an on-demand, object-by-object flatten operation. This is especially useful when the clone is connected to its parent over a high-latency link, that is, when the parent is in a different pool or in another geographical location. Copy-on-read (COR) reduces the amortized latency of reads. The first few reads will have high latency because they result in extra data being read from the parent, for example, you read 1 byte from the clone but now 4 MB has to be read from the parent and written to the clone, but all future reads will be served from the clone itself. To create copy-on-read (COR) clones from a snapshot, you have to explicitly enable this feature by adding rbd_clone_copy_on_read = true under the [global] or [client] section in the ceph.conf file. Additional Resources For more information on flattening, see the Flattening cloned images section in the Red Hat Ceph Storage Block Device Guide . 5.11. Protecting a block device snapshot Clones access the parent snapshots. All clones would break if a user inadvertently deleted the parent snapshot. You can set the set-require-min-compat-client parameter to mimic or a later version of Ceph. Example This creates clone v2, by default. However, clients older than mimic cannot access those block device images. Note Clone v2 does not require protection of snapshots. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the node. Procedure Specify POOL_NAME , IMAGE_NAME , and SNAPSHOT_NAME in the following command: Syntax Example Note You cannot delete a protected snapshot. 5.12. Cloning a block device snapshot Clone a block device snapshot to create a read or write child image of the snapshot within the same pool or in another pool. One use case would be to maintain read-only images and snapshots as templates in one pool, and writable clones in another pool. Note Clone v2 does not require protection of snapshots. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the node. Procedure To clone a snapshot, you need to specify the parent pool, snapshot, child pool and image name: Syntax Example 5.13. Unprotecting a block device snapshot Before you can delete a snapshot, you must unprotect it first. Additionally, you may NOT delete snapshots that have references from clones. You must flatten each clone of a snapshot before you can delete the snapshot. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the node. Procedure Run the following commands: Syntax Example 5.14. Listing the children of a snapshot List the children of a snapshot. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the node. Procedure To list the children of a snapshot, execute the following: Syntax Example 5.15. Flattening cloned images Cloned images retain a reference to the parent snapshot. When you remove the reference from the child clone to the parent snapshot, you effectively "flatten" the image by copying the information from the snapshot to the clone. The time it takes to flatten a clone increases with the size of the snapshot.
Because a flattened image contains all the information from the snapshot, it uses more storage space than a layered clone. Note If the deep flatten feature is enabled on an image, the image clone is dissociated from its parent by default. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the node. Procedure To delete a parent image snapshot associated with child images, you must flatten the child images first: Syntax Example
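Putting sections 5.10 through 5.15 together, the following is a minimal sketch of the full clone lifecycle using the pool, image, and snapshot names from the examples in this chapter (pool1, image1, snap1, childimage1). The copy-on-read setting and the set-require-min-compat-client command are optional and only needed for the cases described above:

# Optional: enable copy-on-read clones by adding this line to the [global]
# or [client] section of ceph.conf:
#   rbd_clone_copy_on_read = true

# Optional: allow clone v2, which does not require snapshot protection
# (clients older than mimic cannot access such images)
ceph osd set-require-min-compat-client mimic

# Create and protect the parent snapshot, then clone it
rbd snap create pool1/image1@snap1
rbd snap protect pool1/image1@snap1
rbd clone pool1/image1@snap1 pool1/childimage1

# List the children of the snapshot
rbd children pool1/image1@snap1

# Flatten the clone, then unprotect and delete the parent snapshot
rbd flatten pool1/childimage1
rbd snap unprotect pool1/image1@snap1
rbd snap rm pool1/image1@snap1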
[ "rbd --id USER_ID --keyring=/path/to/secret [commands] rbd --name USERNAME --keyring=/path/to/secret [commands]", "rbd --id admin --keyring=/etc/ceph/ceph.keyring [commands] rbd --name client.admin --keyring=/etc/ceph/ceph.keyring [commands]", "rbd --pool POOL_NAME snap create --snap SNAP_NAME IMAGE_NAME", "rbd --pool pool1 snap create --snap snap1 image1", "rbd snap create POOL_NAME / IMAGE_NAME @ SNAP_NAME", "rbd snap create pool1/image1@snap1", "rbd --pool POOL_NAME --image IMAGE_NAME snap ls rbd snap ls POOL_NAME / IMAGE_NAME", "rbd --pool pool1 --image image1 snap ls rbd snap ls pool1/image1", "rbd --pool POOL_NAME snap rollback --snap SNAP_NAME IMAGE_NAME rbd snap rollback POOL_NAME / IMAGE_NAME @ SNAP_NAME", "rbd --pool pool1 snap rollback --snap snap1 image1 rbd snap rollback pool1/image1@snap1", "rbd --pool POOL_NAME snap rm --snap SNAP_NAME IMAGE_NAME rbd snap rm POOL_NAME -/ IMAGE_NAME @ SNAP_NAME", "rbd --pool pool1 snap rm --snap snap2 image1 rbd snap rm pool1/image1@snap1", "rbd --pool POOL_NAME snap purge IMAGE_NAME rbd snap purge POOL_NAME / IMAGE_NAME", "rbd --pool pool1 snap purge image1 rbd snap purge pool1/image1", "rbd snap rename POOL_NAME / IMAGE_NAME @ ORIGINAL_SNAPSHOT_NAME POOL_NAME / IMAGE_NAME @ NEW_SNAPSHOT_NAME", "rbd snap rename data/dataset@snap1 data/dataset@snap2", "ceph osd set-require-min-compat-client mimic", "rbd --pool POOL_NAME snap protect --image IMAGE_NAME --snap SNAPSHOT_NAME rbd snap protect POOL_NAME / IMAGE_NAME @ SNAPSHOT_NAME", "rbd --pool pool1 snap protect --image image1 --snap snap1 rbd snap protect pool1/image1@snap1", "rbd snap --pool POOL_NAME --image PARENT_IMAGE --snap SNAP_NAME --dest-pool POOL_NAME --dest CHILD_IMAGE_NAME rbd clone POOL_NAME / PARENT_IMAGE @ SNAP_NAME POOL_NAME / CHILD_IMAGE_NAME", "rbd clone --pool pool1 --image image1 --snap snap2 --dest-pool pool2 --dest childimage1 rbd clone pool1/image1@snap1 pool1/childimage1", "rbd --pool POOL_NAME snap unprotect --image IMAGE_NAME --snap SNAPSHOT_NAME rbd snap unprotect POOL_NAME / IMAGE_NAME @ SNAPSHOT_NAME", "rbd --pool pool1 snap unprotect --image image1 --snap snap1 rbd snap unprotect pool1/image1@snap1", "rbd --pool POOL_NAME children --image IMAGE_NAME --snap SNAP_NAME rbd children POOL_NAME / IMAGE_NAME @ SNAPSHOT_NAME", "rbd --pool pool1 children --image image1 --snap snap1 rbd children pool1/image1@snap1", "rbd --pool POOL_NAME flatten --image IMAGE_NAME rbd flatten POOL_NAME / IMAGE_NAME", "rbd --pool pool1 flatten --image childimage1 rbd flatten pool1/childimage1" ]
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/5/html/block_device_guide/snapshot-management
Chapter 27. Storage Driver Updates
Chapter 27. Storage Driver Updates The hpsa driver has been updated to version 3.4.4-1-RH4. The qla2xxx driver has been updated to version 8.07.00.18.07.2-k. The lpfc driver has been updated to version 10.7.0.1. The megaraid_sas driver has been updated to version 06.807.10.00. The fnic driver has been updated to version 1.6.0.17. The mpt2sas driver has been updated to version 20.100.00.00. The mpt3sas driver has been updated to version 9.100.00.00. The Emulex be2iscsi driver has been updated to version 10.6.0.0r. The aacraid driver has been updated to version 1.2. The bnx2i driver has been updated to version 2.7.10.1. The bnx2fc driver has been updated to version 2.4.2.
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.2_release_notes/storage_drivers
Chapter 8. Automation content navigator configuration settings
Chapter 8. Automation content navigator configuration settings As a content creator, you can configure Automation content navigator to suit your development environment. 8.1. Creating an Automation content navigator settings file You can alter the default Automation content navigator settings through: The command line Within a settings file As an environment variable Automation content navigator checks for a settings file in the following order and uses the first match: ANSIBLE_NAVIGATOR_CONFIG - The settings file path environment variable if set. ./ansible-navigator.<ext> - The settings file within the current project directory, with no dot in the file name. \~/.ansible-navigator.<ext> - Your home directory, with a dot in the file name. Consider the following when you create an Automation content navigator settings file: The settings file can be in JSON or YAML format. For settings in JSON format, the extension must be .json . For settings in YAML format, the extension must be .yml or .yaml . The project and home directories can only contain one settings file each. If Automation content navigator finds more than one settings file in either directory, it results in an error. You can copy the example settings file below into one of those paths to start your ansible-navigator settings file. --- ansible-navigator: # ansible: # config: /tmp/ansible.cfg # cmdline: "--forks 15" # inventories: # - /tmp/test_inventory.yml # playbook: /tmp/test_playbook.yml # ansible-runner: # artifact-dir: /tmp/test1 # rotate-artifacts-count: 10 # timeout: 300 # app: run # collection-doc-cache-path: /tmp/cache.db # color: # enable: False # osc4: False # editor: # command: vim_from_setting # console: False # documentation: # plugin: # name: shell # type: become # execution-environment: # container-engine: podman # enabled: False # environment-variables: # pass: # - ONE # - TWO # - THREE # set: # KEY1: VALUE1 # KEY2: VALUE2 # KEY3: VALUE3 # image: test_image:latest # pull-policy: never # volume-mounts: # - src: "/test1" # dest: "/test1" # label: "Z" # help-config: True # help-doc: True # help-inventory: True # help-playbook: False # inventory-columns: # - ansible_network_os # - ansible_network_cli_ssh_type # - ansible_connection logging: # append: False level: critical # file: /tmp/log.txt # mode: stdout # playbook-artifact: # enable: True # replay: /tmp/test_artifact.json # save-as: /tmp/test_artifact.json 8.2. Automation content navigator general settings The following table describes each general parameter and setting options for Automation content navigator. Table 8.1. Automation content navigator general parameters settings Parameter Description Setting options ansible-runner-artifact-dir The directory path to store artifacts generated by ansible-runner. Default: No default value set CLI: --rad or --ansible-runner-artifact-dir ENV: ANSIBLE_NAVIGATOR_ANSIBLE_RUNNER_ARTIFACT_DIR Settings file: ansible-navigator: ansible-runner: artifact-dir: ansible-runner-rotate-artifacts-count Keep ansible-runner artifact directories, for last n runs. If set to 0, artifact directories are not deleted. Default: No default value set CLI: --rac or --ansible-runner-rotate-artifacts-count ENV: ANSIBLE_NAVIGATOR_ANSIBLE_RUNNER_ROTATE_ARTIFACTS_COUNT Settings file: ansible-navigator: ansible-runner: rotate-artifacts-count: ansible-runner-timeout The timeout value after which ansible-runner force stops the execution. 
Default: No default value set CLI: --rt or --ansible-runner-timeout ENV: ANSIBLE_NAVIGATOR_ANSIBLE_RUNNER_TIMEOUT Settings file: ansible-navigator: ansible-runner: timeout: app Entry point for Automation content navigator. Choices : collections , config , doc , images , inventory , replay , run or welcome Default : welcome CLI example : ansible-navigator collections ENV : ANSIBLE_NAVIGATOR_APP Settings file: ansible-navigator: app: cmdline Extra parameters passed to the corresponding command. Default : No default value CLI : positional ENV : ANSIBLE_NAVIGATOR_CMDLINE Settings file: ansible-navigator: ansible: cmdline: collection-doc-cache-path The path to the collection doc cache. Default : USDHOME/.cache/ansible-navigator/collection_doc_cache.db CLI : --cdcp or --collection-doc-cache-path ENV : ANSIBLE_NAVIGATOR_COLLECTION_DOC_CACHE_PATH Settings file: ansible-navigator: collection-doc-cache-path: container-engine Specify the container engine ( auto = podman then docker ). Choices: auto , podman or docker Default: auto CLI: --ce or --container-engine ENV: ANSIBLE_NAVIGATOR_CONTAINER_ENGINE Settings file: ansible-navigator: execution-environment: container-engine: display-color Enable the use of color in the display. Choices: True or False Default: True CLI: --dc or --display-color ENV: NO_COLOR Settings file: ansible-navigator: color: enable: editor-command Specify the editor used by Automation content navigator Default:* vi +{line_number} {filename} CLI: --ecmd or --editor-command ENV: ANSIBLE_NAVIGATOR_EDITOR_COMMAND Settings file: ansible-navigator: editor: command: editor-console Specify if the editor is console based. Choices: True or False Default: True CLI: --econ or --editor-console ENV: ANSIBLE_NAVIGATOR_EDITOR_CONSOLE Settings file: ansible-navigator: editor: console: execution-environment Enable or disable the use of an automation execution environment. Choices: True or False Default: True CLI: --ee or --execution-environment ENV: * ANSIBLE_NAVIGATOR_EXECUTION_ENVIRONMENT Settings file: ansible-navigator: execution-environment: enabled: execution-environment-image Specify the name of the automation execution environment image. Default: quay.io/ansible/ansible-runner:devel CLI: --eei or --execution-environment-image ENV: ANSIBLE_NAVIGATOR_EXECUTION_ENVIRONMENT_IMAGE Settings file: ansible-navigator: execution-environment: image: execution-environment-volume-mounts Specify volume to be bind mounted within an automation execution environment ( --eev /home/user/test:/home/user/test:Z ) Default: No default value set CLI: --eev or --execution-environment-volume-mounts ENV: ANSIBLE_NAVIGATOR_EXECUTION_ENVIRONMENT_VOLUME_MOUNTS Settings file: ansible-navigator: execution-environment: volume-mounts: log-append Specify if log messages should be appended to an existing log file, otherwise a new log file is created per session. Choices: True or False Default: True CLI: --la or --log-append ENV: ANSIBLE_NAVIGATOR_LOG_APPEND Settings file: ansible-navigator: logging: append: log-file Specify the full path for the Automation content navigator log file. Default: USDPWD/ansible-navigator.log CLI: --lf or --log-file ENV: ANSIBLE_NAVIGATOR_LOG_FILE Settings file: ansible-navigator: logging: file: log-level Specify the Automation content navigator log level. Choices: debug , info , warning , error or critical Default: warning CLI: --ll or --log-level ENV: ANSIBLE_NAVIGATOR_LOG_LEVEL Settings file: ansible-navigator: logging: level: mode Specify the user-interface mode. 
Choices: stdout or interactive Default: interactive CLI: -m or --mode ENV: ANSIBLE_NAVIGATOR_MODE Settings file: ansible-navigator: mode: osc4 Enable or disable terminal color changing support with OSC 4. Choices: True or False Default: True CLI: --osc4 ENV: ANSIBLE_NAVIGATOR_OSC4 Settings file: ansible-navigator: color: osc4: pass-environment-variable Specify an existing environment variable to be passed through to and set within the automation execution environment ( --penv MY_VAR ) Default: No default value set CLI: --penv or --pass-environment-variable ENV: ANSIBLE_NAVIGATOR_PASS_ENVIRONMENT_VARIABLES Settings file: ansible-navigator: execution-environment: environment-variables: pass: pull-policy Specify the image pull policy. always - Always pull the image missing - Pull if not locally available never - Never pull the image tag - If the image tag is latest always pull the image, otherwise pull if not locally available Choices: always , missing , never , or tag Default: tag CLI: --pp or --pull-policy ENV: ANSIBLE_NAVIGATOR_PULL_POLICY Settings file: ansible-navigator: execution-environment: pull-policy: set-environment-variable Specify an environment variable and a value to be set within the automation execution environment (--senv MY_VAR=42 ) Default: No default value set CLI: --senv or --set-environment-variable ENV: ANSIBLE_NAVIGATOR_SET_ENVIRONMENT_VARIABLES Settings file: ansible-navigator: execution-environment: environment-variables: set: 8.3. Automation content navigator config subcommand settings The following table describes each parameter and setting options for the Automation content navigator config subcommand. Table 8.2. Automation content navigator config subcommand parameters settings Parameter Description Setting options config Specify the path to the Ansible configuration file. Default: No default value set CLI: -c or --config ENV: ANSIBLE_CONFIG Settings file: ansible-navigator: ansible: config: path: help-config Help options for the ansible-config command in stdout mode. Choices: True or False Default: False CLI: --hc or --help-config ENV: ANSIBLE_NAVIGATOR_HELP_CONFIG Settings file: ansible-navigator: help-config: 8.4. Automation content navigator doc subcommand settings The following table describes each parameter and setting options for the Automation content navigator doc subcommand. Table 8.3. Automation content navigator doc subcommand parameters settings Parameter Description Setting options help-doc Help options for the ansible-doc command in stdout mode. Choices: True or False Default: False CLI: --hd or --help-doc ENV: ANSIBLE_NAVIGATOR_HELP_DOC Settings file: ansible-navigator: help-doc: plugin-name Specify the plugin name. Default: No default value set CLI: positional ENV: ANSIBLE_NAVIGATOR_PLUGIN_NAME Settings file: ansible-navigator: documentation: plugin: name: plugin-type Specify the plugin type. Choices: become , cache , callback , cliconf , connection , httpapi , inventory , lookup , module , netconf , shell , strategy , or vars Default: module CLI: -t or --type ENV: ANSIBLE_NAVIGATOR_PLUGIN_TYPE Settings file: ansible-navigator: documentation: plugin: type: 8.5. Automation content navigator inventory subcommand settings The following table describes each parameter and setting options for the Automation content navigator inventory subcommand. Table 8.4.
Automation content navigator inventory subcommand parameters settings Parameter Description Setting options help-inventory Help options for the ansible-inventory command in stdout mode. Choices: True or False Default: False CLI: --hi or --help-inventory ENV: ANSIBLE_NAVIGATOR_INVENTORY_DOC Settings file: ansible-navigator: help-inventory: inventory Specify an inventory file path or comma separated host list. Default: no default value set CLI: --i or --inventory ENV: ANSIBLE_NAVIGATOR_INVENTORIES Settings file: ansible-navigator: inventories: inventory-column Specify a host attribute to show in the inventory view. Default: No default value set CLI: --ic or --inventory-column ENV: * ANSIBLE_NAVIGATOR_INVENTORY_COLUMNS Settings file: ansible-navigator: inventory-columns: 8.6. Automation content navigator replay subcommand settings The following table describes each parameter and setting options for the Automation content navigator replay subcommand. Table 8.5. Automation content navigator replay subcommand parameters settings Parameter Description Setting options playbook-artifact-replay Specify the path for the playbook artifact to replay. Default: No default value set CLI: positional ENV: ANSIBLE_NAVIGATOR_PLAYBOOK_ARTIFACT_REPLAY Settings file: ansible-navigator: playbook-artifact: replay: 8.7. Automation content navigator run subcommand settings The following table describes each parameter and setting options for the Automation content navigator run subcommand. Table 8.6. Automation content navigator run subcommand parameters settings Parameter Description Setting options playbook-artifact-replay Specify the path for the playbook artifact to replay. Default: No default value set CLI: positional ENV: ANSIBLE_NAVIGATOR_PLAYBOOK_ARTIFACT_REPLAY Settings file: ansible-navigator: playbook-artifact: replay: help-playbook Help options for the ansible-playbook command in stdout mode. Choices: True or False Default: False CLI: --hp or --help-playbook ENV: ANSIBLE_NAVIGATOR_HELP_PLAYBOOK Settings file: ansible-navigator: help-playbook: inventory Specify an inventory file path or comma separated host list. Default: no default value set CLI: --i or --inventory ENV: ANSIBLE_NAVIGATOR_INVENTORIES Settings file: ansible-navigator: inventories: inventory-column Specify a host attribute to show in the inventory view. Default: No default value set CLI: --ic or --inventory-column ENV: * ANSIBLE_NAVIGATOR_INVENTORY_COLUMNS Settings file: ansible-navigator: inventory-columns: playbook Specify the playbook name. Default: No default value set CLI: positional ENV: ANSIBLE_NAVIGATOR_PLAYBOOK Settings file: * ansible-navigator: ansible: playbook: playbook-artifact-enable Enable or disable the creation of artifacts for completed playbooks. Note: not compatible with --mode stdout when playbooks require user input. Choices: True or False Default: True CLI: --pae or --playbook-artifact-enable ENV: ANSIBLE_NAVIGATOR_PLAYBOOK_ARTIFACT_ENABLE Settings file: ansible-navigator: playbook-artifact: enable: playbook-artifact-save-as Specify the name for artifacts created from completed playbooks. Default: {playbook_dir}/{playbook_name}-artifact-{ts_utc}.json CLI: --pas or --playbook-artifact-save-as ENV: ANSIBLE_NAVIGATOR_PLAYBOOK_ARTIFACT_SAVE_AS Settings file: ansible-navigator: playbook-artifact: save-as:
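To illustrate how the three configuration mechanisms described in this chapter interact, the log-level parameter from Table 8.1 can be set in any of the following ways; site.yml is a placeholder playbook name used only for this sketch:

# 1. Command line flag
ansible-navigator run site.yml --ll debug

# 2. Environment variable
export ANSIBLE_NAVIGATOR_LOG_LEVEL=debug
ansible-navigator run site.yml

# 3. Settings file entry (ansible-navigator.yml), shown here as a comment:
#    ansible-navigator:
#      logging:
#        level: debug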
[ "--- ansible-navigator: # ansible: # config: /tmp/ansible.cfg # cmdline: \"--forks 15\" # inventories: # - /tmp/test_inventory.yml # playbook: /tmp/test_playbook.yml # ansible-runner: # artifact-dir: /tmp/test1 # rotate-artifacts-count: 10 # timeout: 300 # app: run # collection-doc-cache-path: /tmp/cache.db # color: # enable: False # osc4: False # editor: # command: vim_from_setting # console: False # documentation: # plugin: # name: shell # type: become # execution-environment: # container-engine: podman # enabled: False # environment-variables: # pass: # - ONE # - TWO # - THREE # set: # KEY1: VALUE1 # KEY2: VALUE2 # KEY3: VALUE3 # image: test_image:latest # pull-policy: never # volume-mounts: # - src: \"/test1\" # dest: \"/test1\" # label: \"Z\" # help-config: True # help-doc: True # help-inventory: True # help-playbook: False # inventory-columns: # - ansible_network_os # - ansible_network_cli_ssh_type # - ansible_connection logging: # append: False level: critical # file: /tmp/log.txt # mode: stdout # playbook-artifact: # enable: True # replay: /tmp/test_artifact.json # save-as: /tmp/test_artifact.json", "ansible-navigator: ansible-runner: artifact-dir:", "ansible-navigator: ansible-runner: rotate-artifacts-count:", "ansible-navigator: ansible-runner: timeout:", "ansible-navigator: app:", "ansible-navigator: ansible: cmdline:", "ansible-navigator: collection-doc-cache-path:", "ansible-navigator: execution-environment: container-engine:", "ansible-navigator: color: enable:", "ansible-navigator: editor: command:", "ansible-navigator: editor: console:", "ansible-navigator: execution-environment: enabled:", "ansible-navigator: execution-environment: image:", "ansible-navigator: execution-environment: volume-mounts:", "ansible-navigator: logging: append:", "ansible-navigator: logging: file:", "ansible-navigator: logging: level:", "ansible-navigator: mode:", "ansible-navigator: color: osc4:", "ansible-navigator: execution-environment: environment-variables: pass:", "ansible-navigator: execution-environment: pull-policy:", "ansible-navigator: execution-environment: environment-variables: set:", "ansible-navigator: ansible: config: path:", "ansible-navigator: help-config:", "ansible-navigator: help-doc:", "ansible-navigator: documentation: plugin: name:", "ansible-navigator: documentation: plugin: type:", "ansible-navigator: help-inventory:", "ansible-navigator: inventories:", "ansible-navigator: inventory-columns:", "ansible-navigator: playbook-artifact: replay:", "ansible-navigator: playbook-artifact: replay:", "ansible-navigator: help-playbook:", "ansible-navigator: inventories:", "ansible-navigator: inventory-columns:", "ansible-navigator: ansible: playbook:", "ansible-navigator: playbook-artifact: enable:", "ansible-navigator: playbook-artifact: save-as:" ]
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.3/html/automation_content_navigator_creator_guide/assembly-settings-navigator_ansible-navigator
Chapter 31. IntegrationHealthService
Chapter 31. IntegrationHealthService 31.1. GetDeclarativeConfigs GET /v1/integrationhealth/declarativeconfigs 31.1.1. Description 31.1.2. Parameters 31.1.3. Return Type V1GetIntegrationHealthResponse 31.1.4. Content Type application/json 31.1.5. Responses Table 31.1. HTTP Response Codes Code Message Datatype 200 A successful response. V1GetIntegrationHealthResponse 0 An unexpected error response. RuntimeError 31.1.6. Samples 31.1.7. Common object reference 31.1.7.1. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 31.1.7.1.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format typeUrl String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. value byte[] Must be a valid serialized protocol buffer of the above specified type. byte 31.1.7.2. RuntimeError Field Name Required Nullable Type Description Format error String code Integer int32 message String details List of ProtobufAny 31.1.7.3. 
StorageIntegrationHealth Field Name Required Nullable Type Description Format id String name String type StorageIntegrationHealthType UNKNOWN, IMAGE_INTEGRATION, NOTIFIER, BACKUP, DECLARATIVE_CONFIG, status StorageIntegrationHealthStatus UNINITIALIZED, UNHEALTHY, HEALTHY, errorMessage String lastTimestamp Date date-time 31.1.7.4. StorageIntegrationHealthStatus Enum Values UNINITIALIZED UNHEALTHY HEALTHY 31.1.7.5. StorageIntegrationHealthType Enum Values UNKNOWN IMAGE_INTEGRATION NOTIFIER BACKUP DECLARATIVE_CONFIG 31.1.7.6. V1GetIntegrationHealthResponse Field Name Required Nullable Type Description Format integrationHealth List of StorageIntegrationHealth 31.2. GetBackupPlugins GET /v1/integrationhealth/externalbackups 31.2.1. Description 31.2.2. Parameters 31.2.3. Return Type V1GetIntegrationHealthResponse 31.2.4. Content Type application/json 31.2.5. Responses Table 31.2. HTTP Response Codes Code Message Datatype 200 A successful response. V1GetIntegrationHealthResponse 0 An unexpected error response. RuntimeError 31.2.6. Samples 31.2.7. Common object reference 31.2.7.1. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 31.2.7.1.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format typeUrl String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. 
As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. value byte[] Must be a valid serialized protocol buffer of the above specified type. byte 31.2.7.2. RuntimeError Field Name Required Nullable Type Description Format error String code Integer int32 message String details List of ProtobufAny 31.2.7.3. StorageIntegrationHealth Field Name Required Nullable Type Description Format id String name String type StorageIntegrationHealthType UNKNOWN, IMAGE_INTEGRATION, NOTIFIER, BACKUP, DECLARATIVE_CONFIG, status StorageIntegrationHealthStatus UNINITIALIZED, UNHEALTHY, HEALTHY, errorMessage String lastTimestamp Date date-time 31.2.7.4. StorageIntegrationHealthStatus Enum Values UNINITIALIZED UNHEALTHY HEALTHY 31.2.7.5. StorageIntegrationHealthType Enum Values UNKNOWN IMAGE_INTEGRATION NOTIFIER BACKUP DECLARATIVE_CONFIG 31.2.7.6. V1GetIntegrationHealthResponse Field Name Required Nullable Type Description Format integrationHealth List of StorageIntegrationHealth 31.3. GetImageIntegrations GET /v1/integrationhealth/imageintegrations 31.3.1. Description 31.3.2. Parameters 31.3.3. Return Type V1GetIntegrationHealthResponse 31.3.4. Content Type application/json 31.3.5. Responses Table 31.3. HTTP Response Codes Code Message Datatype 200 A successful response. V1GetIntegrationHealthResponse 0 An unexpected error response. RuntimeError 31.3.6. Samples 31.3.7. Common object reference 31.3.7.1. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 31.3.7.1.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format typeUrl String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. 
* Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. value byte[] Must be a valid serialized protocol buffer of the above specified type. byte 31.3.7.2. RuntimeError Field Name Required Nullable Type Description Format error String code Integer int32 message String details List of ProtobufAny 31.3.7.3. StorageIntegrationHealth Field Name Required Nullable Type Description Format id String name String type StorageIntegrationHealthType UNKNOWN, IMAGE_INTEGRATION, NOTIFIER, BACKUP, DECLARATIVE_CONFIG, status StorageIntegrationHealthStatus UNINITIALIZED, UNHEALTHY, HEALTHY, errorMessage String lastTimestamp Date date-time 31.3.7.4. StorageIntegrationHealthStatus Enum Values UNINITIALIZED UNHEALTHY HEALTHY 31.3.7.5. StorageIntegrationHealthType Enum Values UNKNOWN IMAGE_INTEGRATION NOTIFIER BACKUP DECLARATIVE_CONFIG 31.3.7.6. V1GetIntegrationHealthResponse Field Name Required Nullable Type Description Format integrationHealth List of StorageIntegrationHealth 31.4. GetNotifiers GET /v1/integrationhealth/notifiers 31.4.1. Description 31.4.2. Parameters 31.4.3. Return Type V1GetIntegrationHealthResponse 31.4.4. Content Type application/json 31.4.5. Responses Table 31.4. HTTP Response Codes Code Message Datatype 200 A successful response. V1GetIntegrationHealthResponse 0 An unexpected error response. RuntimeError 31.4.6. Samples 31.4.7. Common object reference 31.4.7.1. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 31.4.7.1.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format typeUrl String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). 
In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. value byte[] Must be a valid serialized protocol buffer of the above specified type. byte 31.4.7.2. RuntimeError Field Name Required Nullable Type Description Format error String code Integer int32 message String details List of ProtobufAny 31.4.7.3. StorageIntegrationHealth Field Name Required Nullable Type Description Format id String name String type StorageIntegrationHealthType UNKNOWN, IMAGE_INTEGRATION, NOTIFIER, BACKUP, DECLARATIVE_CONFIG, status StorageIntegrationHealthStatus UNINITIALIZED, UNHEALTHY, HEALTHY, errorMessage String lastTimestamp Date date-time 31.4.7.4. StorageIntegrationHealthStatus Enum Values UNINITIALIZED UNHEALTHY HEALTHY 31.4.7.5. StorageIntegrationHealthType Enum Values UNKNOWN IMAGE_INTEGRATION NOTIFIER BACKUP DECLARATIVE_CONFIG 31.4.7.6. V1GetIntegrationHealthResponse Field Name Required Nullable Type Description Format integrationHealth List of StorageIntegrationHealth 31.5. GetVulnDefinitionsInfo GET /v1/integrationhealth/vulndefinitions 31.5.1. Description 31.5.2. Parameters 31.5.2.1. Query Parameters Name Description Required Default Pattern component - SCANNER 31.5.3. Return Type V1VulnDefinitionsInfo 31.5.4. Content Type application/json 31.5.5. Responses Table 31.5. HTTP Response Codes Code Message Datatype 200 A successful response. V1VulnDefinitionsInfo 0 An unexpected error response. RuntimeError 31.5.6. Samples 31.5.7. Common object reference 31.5.7.1. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 31.5.7.1.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. 
Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format typeUrl String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. value byte[] Must be a valid serialized protocol buffer of the above specified type. byte 31.5.7.2. RuntimeError Field Name Required Nullable Type Description Format error String code Integer int32 message String details List of ProtobufAny 31.5.7.3. V1VulnDefinitionsInfo Field Name Required Nullable Type Description Format lastUpdatedTimestamp Date date-time
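The endpoints described in this chapter can be exercised with any HTTP client. The following is a minimal sketch using curl; the ROX_ENDPOINT address, the ROX_API_TOKEN variable, and the use of bearer-token authentication are assumptions that are not defined in this chapter:

# Fetch notifier integration health; the response body is a
# V1GetIntegrationHealthResponse JSON document
curl -k -H "Authorization: Bearer $ROX_API_TOKEN" \
  "https://$ROX_ENDPOINT/v1/integrationhealth/notifiers"

# Fetch vulnerability definitions information for the scanner, using the
# documented component query parameter
curl -k -H "Authorization: Bearer $ROX_API_TOKEN" \
  "https://$ROX_ENDPOINT/v1/integrationhealth/vulndefinitions?component=SCANNER"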
[ "Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }", "Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }", "Example 3: Pack and unpack a message in Python.", "foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)", "Example 4: Pack and unpack a message in Go", "foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }", "package google.profile; message Person { string first_name = 1; string last_name = 2; }", "{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }", "{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }", "Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }", "Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }", "Example 3: Pack and unpack a message in Python.", "foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)", "Example 4: Pack and unpack a message in Go", "foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }", "package google.profile; message Person { string first_name = 1; string last_name = 2; }", "{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }", "{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }", "Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }", "Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }", "Example 3: Pack and unpack a message in Python.", "foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)", "Example 4: Pack and unpack a message in Go", "foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }", "package google.profile; message Person { string first_name = 1; string last_name = 2; }", "{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }", "{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }", "Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }", "Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }", "Example 3: Pack and unpack a message in Python.", "foo = Foo(...) 
any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)", "Example 4: Pack and unpack a message in Go", "foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }", "package google.profile; message Person { string first_name = 1; string last_name = 2; }", "{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }", "{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }", "Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }", "Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }", "Example 3: Pack and unpack a message in Python.", "foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)", "Example 4: Pack and unpack a message in Go", "foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }", "package google.profile; message Person { string first_name = 1; string last_name = 2; }", "{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }", "{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }" ]
https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.5/html/api_reference/integrationhealthservice
Chapter 27. Storage Driver Updates
Chapter 27. Storage Driver Updates The hpsa driver has been upgraded to version 3.4.4-1-RH1. The qla2xxx driver has been upgraded to version 8.07.00.08.07.1-k1. The qla4xxx driver has been upgraded to version 5.04.00.04.07.01-k0. The qlcnic driver has been upgraded to version 5.3.61. The netxen_nic driver has been upgraded to version 4.0.82. The qlge driver has been upgraded to version 1.00.00.34. The bnx2fc driver has been upgraded to version 2.4.2. The bnx2i driver has been upgraded to version 2.7.10.1. The cnic driver has been upgraded to version 2.5.20. The bnx2x driver has been upgraded to version 1.710.51-0. The bnx2 driver has been upgraded to version 2.2.5. The megaraid_sas driver has been upgraded to version 06.805.06.01-rc1. The mpt2sas driver has been upgraded to version 18.100.00.00. The ipr driver has been upgraded to version 2.6.0. The kmod-lpfc packages have been added to Red Hat Enterprise Linux 7, which ensures greater stability when using the lpfc driver with Fibre Channel (FC) and Fibre Channel over Ethernet (FCoE) adapters. The lpfc driver has been upgraded to version 0:10.2.8021.1. The be2iscsi driver has been upgraded to version 10.4.74.0r. The nvme driver has been upgraded to version 0.9.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.1_release_notes/ch27
Chapter 10. Kernel
Chapter 10. Kernel Reservation of memory for crashkernel no longer fails Previously, the reservation of memory for crashkernel in some cases failed with the following error message: This update fixes the step down mechanism so that the upper limit set in the KEXEC_RESERVE_UPPER_LIMIT parameter is not exceeded, which makes the reservation succeed. As a result, the memory reservation for the crash kernel now proceeds as expected. (BZ#1349069) The mbind call now allocates memory on the specified NUMA node Previously, using the mbind call for allocation of memory on a Non-Uniform Memory Access (NUMA) node with a particular number worked only for the very first invocation. On subsequent calls, the memory was always allocated on NUMA node 0. This update fixes the interaction of the mbind_range() function and the vma_adjust() function. As a result, mbind now allocates memory on the NUMA node with the specified number in all cases. (BZ#1277241) The system no longer hangs due to tasklist_lock variable starvation In a situation with a lot of concurrent processes taking the tasklist_lock variable for reading, the operating system sometimes became unresponsive when it was trying to take tasklist_lock for writing. This update fixes the underlying source code so that a writer excludes new readers, which prevents the system hang. (BZ#1304864) Intel Xeon v5 no longer causes GPU to hang Previously, on GT3 and GT4 architectures, Intel Xeon v5 integrated graphics could experience problems with GPU lock-up, leading to a GPU hang. This bug has been fixed. (BZ#1323945) Kernel no longer panics when loading Intel Xeon v5 integrated graphics cards When loading Intel Xeon v5 integrated graphics cards, a kernel panic sometimes occurred due to a race condition in the kernel firmware loader. This update adds a separate lock that is held throughout the lifetime of the firmware device, thus protecting the area where the device is registered. As a result, the kernel no longer panics in the described situation. (BZ#1309875) NFS no longer uses FS-Cache when -o fsc is not set Previously, when an NFS share was mounted, FS-Cache was always erroneously enabled even when the -o fsc option was not used in the mount command. Consequently, the cachefilesd service stored files on the NFS share, and other severe problems, such as the kernel becoming unresponsive or terminating unexpectedly, sometimes occurred. With this update, NFS no longer uses FS-Cache if -o fsc is not set. As a result, NFS now uses FS-Cache only when explicitly requested. Note that FS-Cache is provided as a Technology Preview in Red Hat Enterprise Linux 6. (BZ#1353844)
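To illustrate the FS-Cache change described in the last note above, FS-Cache is now used only when the fsc option is passed explicitly at mount time. The server path and mount point below are placeholders, and the cachefilesd service must be running for caching to take effect:

# Mount without local caching (the default behavior after this update)
mount -t nfs server.example.com:/export /mnt/nfs

# Mount with FS-Cache explicitly requested
mount -t nfs -o fsc server.example.com:/export /mnt/nfs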
[ "Crashkernel reservation failed. Found area can not be reserved: start=0x4000000, size=0x34000000." ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.9_technical_notes/bug_fixes_kernel
Chapter 18. BeanIO
Chapter 18. BeanIO Since Camel 2.10 The BeanIO Data Format uses BeanIO to handle flat payloads (such as XML, CSV, delimited, or fixed length formats). BeanIO is configured using a mapping XML file where you define the mapping from the flat format to Objects (POJOs). Use of this mapping file is mandatory. 18.1. Dependencies When using beanio with Spring Boot, make sure to use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-beanio-starter</artifactId> </dependency> 18.2. Options The BeanIO dataformat supports 8 options, which are listed below. Name Default Java Type Description mapping (common) String Required The BeanIO mapping file. Is by default loaded from the classpath. You can prefix with file:, http:, or classpath: to denote from where to load the mapping file. streamName (common) String Required The name of the stream to use. ignoreUnidentifiedRecords (common) false Boolean Whether to ignore unidentified records. ignoreUnexpectedRecords (common) false Boolean Whether to ignore unexpected records. ignoreInvalidRecords (common) false Boolean Whether to ignore invalid records. encoding (advanced) String The charset to use. Is by default the JVM platform default charset. beanReaderErrorHandlerType (advanced) String To use a custom org.apache.camel.dataformat.beanio.BeanIOErrorHandler as error handler while parsing. Configure the fully qualified class name of the error handler. Notice the options ignoreUnidentifiedRecords, ignoreUnexpectedRecords, and ignoreInvalidRecords may not be in use when you use a custom error handler. unmarshalSingleObject (advanced) false Boolean This option controls whether to unmarshal as a list of objects or as a single object only. The former is the default mode, and the latter is only intended in special use-cases where beanio maps the Camel message to a single POJO bean. 18.3. Usage An example of a mapping file is here. To use the BeanIODataFormat you need to configure the data format with the mapping file, as well as the name of the stream. This can be done as shown below. The streamName is employeeFile . Java XML DataFormat format = new BeanIODataFormat( "org/apache/camel/dataformat/beanio/mappings.xml", "employeeFile"); // a route which uses the bean io data format to format the CSV data // to java objects from("direct:unmarshal") .unmarshal(format) // and then split the message body, so we get a message for each row .split(body()) .to("mock:beanio-unmarshal"); // convert a list of java objects back to flat format from("direct:marshal") .marshal(format) .to("mock:beanio-marshal"); <route> <from uri="direct:unmarshal"/> <unmarshal> <beanio mapping="org/apache/camel/dataformat/beanio/mappings.xml" streamName="employeeFile"/> </unmarshal> <split> <simple>USD{body}</simple> <to uri="mock:beanio-unmarshal"/> </split> </route> <route> <from uri="direct:marshal"/> <marshal> <beanio mapping="org/apache/camel/dataformat/beanio/mappings.xml" streamName="employeeFile"/> </marshal> <to uri="mock:beanio-marshal"/> </route> To use the BeanIO data format in XML, you need to configure it using the <beanio> XML tag as shown below. The routes are similar to the example above. The first route transforms CSV data into a List<Employee> of Java objects, which we then split so that the mock endpoint receives a message for each row. The second route is for the reverse operation, to transform a List<Employee> into a stream of CSV data.
The CSV data could, for example, be as below: Joe,Smith,Developer,75000,10012009 Jane,Doe,Architect,80000,01152008 Jon,Anderson,Manager,85000,03182007 18.4. Spring Boot Auto-Configuration The component supports 9 options, which are listed below. Name Description Default Type camel.dataformat.beanio.bean-reader-error-handler-type To use a custom org.apache.camel.dataformat.beanio.BeanIOErrorHandler as error handler while parsing. Configure the fully qualified class name of the error handler. Notice the options ignoreUnidentifiedRecords, ignoreUnexpectedRecords, and ignoreInvalidRecords may not be in use when you use a custom error handler. String camel.dataformat.beanio.enabled Whether to enable auto configuration of the beanio data format. This is enabled by default. Boolean camel.dataformat.beanio.encoding The charset to use. Is by default the JVM platform default charset. String camel.dataformat.beanio.ignore-invalid-records Whether to ignore invalid records. false Boolean camel.dataformat.beanio.ignore-unexpected-records Whether to ignore unexpected records. false Boolean camel.dataformat.beanio.ignore-unidentified-records Whether to ignore unidentified records. false Boolean camel.dataformat.beanio.mapping The BeanIO mapping file. Is by default loaded from the classpath. You can prefix with file:, http:, or classpath: to denote from where to load the mapping file. String camel.dataformat.beanio.stream-name The name of the stream to use. String camel.dataformat.beanio.unmarshal-single-object This option controls whether to unmarshal as a list of objects or as a single object only. The former is the default mode, and the latter is only intended in special use-cases where beanio maps the Camel message to a single POJO bean. false Boolean
[ "<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-beanio-starter</artifactId> </dependency>", "DataFormat format = new BeanIODataFormat( \"org/apache/camel/dataformat/beanio/mappings.xml\", \"employeeFile\"); // a route which uses the bean io data format to format the CSV data // to java objects from(\"direct:unmarshal\") .unmarshal(format) // and then split the message body, so we get a message for each row .split(body()) .to(\"mock:beanio-unmarshal\"); // convert a list of java objects back to flat format from(\"direct:marshal\") .marshal(format) .to(\"mock:beanio-marshal\");", "<route> <from uri=\"direct:unmarshal\"/> <unmarshal> <beanio mapping=\"org/apache/camel/dataformat/beanio/mappings.xml\" streamName=\"employeeFile\"/> </unmarshal> <split> <simple>USD{body}</simple> <to uri=\"mock:beanio-unmarshal\"/> </split> </route> <route> <from uri=\"direct:marshal\"/> <marshal> <beanio mapping=\"org/apache/camel/dataformat/beanio/mappings.xml\" streamName=\"employeeFile\"/> </marshal> <to uri=\"mock:beanio-marshal\"/> </route>", "Joe,Smith,Developer,75000,10012009 Jane,Doe,Architect,80000,01152008 Jon,Anderson,Manager,85000,03182007" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.8/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-beanio-dataformat-starter
4.4. Administrative Controls
4.4. Administrative Controls When administering a home machine, the user must perform some tasks as the root user or by acquiring effective root privileges via a setuid program, such as sudo or su . A setuid program is one that operates with the user ID ( UID ) of the program's owner rather than the user operating the program. Such programs are denoted by a lower case s in the owner section of a long format listing, as in the following example: For the system administrators of an organization, however, choices must be made as to how much administrative access users within the organization should have to their machine. Through a PAM module called pam_console.so , some activities normally reserved only for the root user, such as rebooting and mounting removable media, are allowed for the first user that logs in at the physical console (see the chapter titled Pluggable Authentication Modules (PAM) in the Reference Guide for more about the pam_console.so module). However, other important system administration tasks, such as altering network settings, configuring a new mouse, or mounting network devices, are not possible without administrative privileges. As a result, system administrators must decide how much access the users on their network should receive. 4.4.1. Allowing Root Access If the users within an organization are a trusted, computer-savvy group, then allowing them root access may not be an issue. Allowing root access by users means that minor activities, like adding devices or configuring network interfaces, can be handled by the individual users, leaving system administrators free to deal with network security and other important issues. On the other hand, giving root access to individual users can lead to the following issues: Machine Misconfiguration - Users with root access can misconfigure their machines and require assistance or, worse, open up security holes without knowing it. Running Insecure Services - Users with root access may run insecure servers on their machine, such as FTP or Telnet, potentially putting usernames and passwords at risk as they pass over the network in the clear. Running Email Attachments As Root - Although rare, email viruses that affect Linux do exist. The only time they are a threat, however, is when they are run by the root user.
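Because every setuid program is a potential path to effective root privileges, administrators often want an inventory of them. The following is a generic sketch, not specific to this guide, for locating files with the setuid bit set (the lower case s shown in the listing that follows):

# List all regular files that have the setuid permission bit set
find / -type f -perm -4000 -exec ls -l {} \; 2>/dev/null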
[ "-rwsr-xr-x 1 root root 47324 May 1 08:09 /bin/su" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/security_guide/s1-wstation-privileges
Serverless
Serverless OpenShift Container Platform 4.15 Create and deploy serverless, event-driven applications using OpenShift Serverless Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/serverless/index
9.8. Querying a Pacemaker Cluster with SNMP (Red Hat Enterprise Linux 7.5 and later)
9.8. Querying a Pacemaker Cluster with SNMP (Red Hat Enterprise Linux 7.5 and later) As of Red Hat Enterprise Linux 7.5, you can use the pcs_snmp_agent daemon to query a Pacemaker cluster for data by means of SNMP. The pcs_snmp_agent daemon is an SNMP agent that connects to the master agent (snmpd) by means of the agentx protocol. The pcs_snmp_agent agent does not work as a standalone agent, as it only provides data to the master agent. The following procedure sets up a basic configuration for a system to use SNMP with a Pacemaker cluster. You run this procedure on each node of the cluster from which you will be using SNMP to fetch data for the cluster. Install the pcs-snmp package on each node of the cluster. This will also install the net-snmp package, which provides the snmpd daemon. Add the following line to the /etc/snmp/snmpd.conf configuration file to set up the snmpd daemon as master agentx. Add the following line to the /etc/snmp/snmpd.conf configuration file to enable pcs_snmp_agent in the same SNMP configuration. Start the pcs_snmp_agent service. To check the configuration, display the status of the cluster with the pcs status command and then try to fetch the data from SNMP to check whether it corresponds to the output. Note that when you use SNMP to fetch data, only primitive resources are provided. The following example shows the output of a pcs status command on a running cluster with one failed action.
[ "yum install pcs-snmp", "master agentx", "view systemview included .1.3.6.1.4.1.32723.100", "systemctl start pcs_snmp_agent.service systemctl enable pcs_snmp_agent.service", "pcs status Cluster name: rhel75-cluster Stack: corosync Current DC: rhel75-node2 (version 1.1.18-5.el7-1a4ef7d180) - partition with quorum Last updated: Wed Nov 15 16:07:44 2017 Last change: Wed Nov 15 16:06:40 2017 by hacluster via cibadmin on rhel75-node1 2 nodes configured 14 resources configured (1 DISABLED) Online: [ rhel75-node1 rhel75-node2 ] Full list of resources: fencing (stonith:fence_xvm): Started rhel75-node1 dummy5 (ocf::pacemaker:Dummy): Stopped (disabled) dummy6 (ocf::pacemaker:Dummy): Stopped dummy7 (ocf::pacemaker:Dummy): Started rhel75-node2 dummy8 (ocf::pacemaker:Dummy): Started rhel75-node1 dummy9 (ocf::pacemaker:Dummy): Started rhel75-node2 Resource Group: group1 dummy1 (ocf::pacemaker:Dummy): Started rhel75-node1 dummy10 (ocf::pacemaker:Dummy): Started rhel75-node1 Clone Set: group2-clone [group2] Started: [ rhel75-node1 rhel75-node2 ] Clone Set: dummy4-clone [dummy4] Started: [ rhel75-node1 rhel75-node2 ] Failed Actions: * dummy6_start_0 on rhel75-node1 'unknown error' (1): call=87, status=complete, exitreason='', last-rc-change='Wed Nov 15 16:05:55 2017', queued=0ms, exec=20ms", "snmpwalk -v 2c -c public localhost PACEMAKER-PCS-V1-MIB::pcmkPcsV1Cluster PACEMAKER-PCS-V1-MIB::pcmkPcsV1ClusterName.0 = STRING: \"rhel75-cluster\" PACEMAKER-PCS-V1-MIB::pcmkPcsV1ClusterQuorate.0 = INTEGER: 1 PACEMAKER-PCS-V1-MIB::pcmkPcsV1ClusterNodesNum.0 = INTEGER: 2 PACEMAKER-PCS-V1-MIB::pcmkPcsV1ClusterNodesNames.0 = STRING: \"rhel75-node1\" PACEMAKER-PCS-V1-MIB::pcmkPcsV1ClusterNodesNames.1 = STRING: \"rhel75-node2\" PACEMAKER-PCS-V1-MIB::pcmkPcsV1ClusterCorosyncNodesOnlineNum.0 = INTEGER: 2 PACEMAKER-PCS-V1-MIB::pcmkPcsV1ClusterCorosyncNodesOnlineNames.0 = STRING: \"rhel75-node1\" PACEMAKER-PCS-V1-MIB::pcmkPcsV1ClusterCorosyncNodesOnlineNames.1 = STRING: \"rhel75-node2\" PACEMAKER-PCS-V1-MIB::pcmkPcsV1ClusterCorosyncNodesOfflineNum.0 = INTEGER: 0 PACEMAKER-PCS-V1-MIB::pcmkPcsV1ClusterPcmkNodesOnlineNum.0 = INTEGER: 2 PACEMAKER-PCS-V1-MIB::pcmkPcsV1ClusterPcmkNodesOnlineNames.0 = STRING: \"rhel75-node1\" PACEMAKER-PCS-V1-MIB::pcmkPcsV1ClusterPcmkNodesOnlineNames.1 = STRING: \"rhel75-node2\" PACEMAKER-PCS-V1-MIB::pcmkPcsV1ClusterPcmkNodesStandbyNum.0 = INTEGER: 0 PACEMAKER-PCS-V1-MIB::pcmkPcsV1ClusterPcmkNodesOfflineNum.0 = INTEGER: 0 PACEMAKER-PCS-V1-MIB::pcmkPcsV1ClusterAllResourcesNum.0 = INTEGER: 11 PACEMAKER-PCS-V1-MIB::pcmkPcsV1ClusterAllResourcesIds.0 = STRING: \"fencing\" PACEMAKER-PCS-V1-MIB::pcmkPcsV1ClusterAllResourcesIds.1 = STRING: \"dummy5\" PACEMAKER-PCS-V1-MIB::pcmkPcsV1ClusterAllResourcesIds.2 = STRING: \"dummy6\" PACEMAKER-PCS-V1-MIB::pcmkPcsV1ClusterAllResourcesIds.3 = STRING: \"dummy7\" PACEMAKER-PCS-V1-MIB::pcmkPcsV1ClusterAllResourcesIds.4 = STRING: \"dummy8\" PACEMAKER-PCS-V1-MIB::pcmkPcsV1ClusterAllResourcesIds.5 = STRING: \"dummy9\" PACEMAKER-PCS-V1-MIB::pcmkPcsV1ClusterAllResourcesIds.6 = STRING: \"dummy1\" PACEMAKER-PCS-V1-MIB::pcmkPcsV1ClusterAllResourcesIds.7 = STRING: \"dummy10\" PACEMAKER-PCS-V1-MIB::pcmkPcsV1ClusterAllResourcesIds.8 = STRING: \"dummy2\" PACEMAKER-PCS-V1-MIB::pcmkPcsV1ClusterAllResourcesIds.9 = STRING: \"dummy3\" PACEMAKER-PCS-V1-MIB::pcmkPcsV1ClusterAllResourcesIds.10 = STRING: \"dummy4\" PACEMAKER-PCS-V1-MIB::pcmkPcsV1ClusterRunningResourcesNum.0 = INTEGER: 9 PACEMAKER-PCS-V1-MIB::pcmkPcsV1ClusterRunningResourcesIds.0 = STRING: \"fencing\" 
PACEMAKER-PCS-V1-MIB::pcmkPcsV1ClusterRunningResourcesIds.1 = STRING: \"dummy7\" PACEMAKER-PCS-V1-MIB::pcmkPcsV1ClusterRunningResourcesIds.2 = STRING: \"dummy8\" PACEMAKER-PCS-V1-MIB::pcmkPcsV1ClusterRunningResourcesIds.3 = STRING: \"dummy9\" PACEMAKER-PCS-V1-MIB::pcmkPcsV1ClusterRunningResourcesIds.4 = STRING: \"dummy1\" PACEMAKER-PCS-V1-MIB::pcmkPcsV1ClusterRunningResourcesIds.5 = STRING: \"dummy10\" PACEMAKER-PCS-V1-MIB::pcmkPcsV1ClusterRunningResourcesIds.6 = STRING: \"dummy2\" PACEMAKER-PCS-V1-MIB::pcmkPcsV1ClusterRunningResourcesIds.7 = STRING: \"dummy3\" PACEMAKER-PCS-V1-MIB::pcmkPcsV1ClusterRunningResourcesIds.8 = STRING: \"dummy4\" PACEMAKER-PCS-V1-MIB::pcmkPcsV1ClusterStoppedResroucesNum.0 = INTEGER: 1 PACEMAKER-PCS-V1-MIB::pcmkPcsV1ClusterStoppedResroucesIds.0 = STRING: \"dummy5\" PACEMAKER-PCS-V1-MIB::pcmkPcsV1ClusterFailedResourcesNum.0 = INTEGER: 1 PACEMAKER-PCS-V1-MIB::pcmkPcsV1ClusterFailedResourcesIds.0 = STRING: \"dummy6\" PACEMAKER-PCS-V1-MIB::pcmkPcsV1ClusterFailedResourcesIds.0 = No more variables left in this MIB View (It is past the end of the MIB tree)" ]
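Once snmpwalk returns data, an individual value can also be fetched directly. The following is a minimal sketch rather than part of the documented procedure; the community string public and the localhost target are assumptions carried over from the snmpwalk example above:

# Fetch only the cluster name exposed by the Pacemaker SNMP agent.
snmpget -v 2c -c public localhost PACEMAKER-PCS-V1-MIB::pcmkPcsV1ClusterName.0

If the query returns the expected cluster name, the agentx connection between snmpd and pcs_snmp_agent is working.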
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/high_availability_add-on_reference/s1-snmpandpacemaker-haar
External Load Balancing for the Overcloud
External Load Balancing for the Overcloud Red Hat OpenStack Platform 17.0 Configuring a Red Hat OpenStack Platform environment to use an external load balancer OpenStack Documentation Team
null
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/external_load_balancing_for_the_overcloud/index
Chapter 3. Usage
Chapter 3. Usage This chapter describes the necessary steps for rebuilding and using Red Hat Software Collections 3.1, and deploying applications that use Red Hat Software Collections. 3.1. Using Red Hat Software Collections 3.1.1. Running an Executable from a Software Collection To run an executable from a particular Software Collection, type the following command at a shell prompt: scl enable software_collection ... 'command ...' Or, alternatively, use the following command: scl enable software_collection ... -- command ... Replace software_collection with a space-separated list of Software Collections you want to use and command with the command you want to run. For example, to execute a Perl program stored in a file named hello.pl with the Perl interpreter from the rh-perl524 Software Collection, type: You can execute any command using the scl utility, causing it to be run with the executables from a selected Software Collection in preference to their possible Red Hat Enterprise Linux system equivalents. For a complete list of Software Collections that are distributed with Red Hat Software Collections, see Table 1.1, "Red Hat Software Collections 3.1 Components". 3.1.2. Running a Shell Session with a Software Collection as Default To start a new shell session with executables from a selected Software Collection in preference to their Red Hat Enterprise Linux equivalents, type the following at a shell prompt: scl enable software_collection ... bash Replace software_collection with a space-separated list of Software Collections you want to use. For example, to start a new shell session with the python27 and rh-postgresql95 Software Collections as default, type: The list of Software Collections that are enabled in the current session is stored in the $X_SCLS environment variable, for instance: For a complete list of Software Collections that are distributed with Red Hat Software Collections, see Table 1.1, "Red Hat Software Collections 3.1 Components". 3.1.3. Running a System Service from a Software Collection Software Collections that include system services install corresponding init scripts in the /etc/rc.d/init.d/ directory. To start such a service in the current session, type the following at a shell prompt as root: service software_collection-service_name start Replace software_collection with the name of the Software Collection and service_name with the name of the service you want to start. To configure this service to start automatically at boot time, type the following command as root: chkconfig software_collection-service_name on For example, to start the postgresql service from the rh-postgresql95 Software Collection and enable it in runlevels 2, 3, 4, and 5, type as root: For more information on how to manage system services in Red Hat Enterprise Linux 6, refer to the Red Hat Enterprise Linux 6 Deployment Guide. For a complete list of Software Collections that are distributed with Red Hat Software Collections, see Table 1.1, "Red Hat Software Collections 3.1 Components". 3.2. Accessing a Manual Page from a Software Collection Every Software Collection contains a general manual page that describes the content of this component. Each manual page has the same name as the component, and it is located in the /opt/rh directory. To read a manual page for a Software Collection, type the following command: scl enable software_collection 'man software_collection' Replace software_collection with the particular Red Hat Software Collections component.
For example, to display the manual page for rh-mariadb101 , type: 3.3. Deploying Applications That Use Red Hat Software Collections In general, you can use one of the following two approaches to deploy an application that depends on a component from Red Hat Software Collections in production: Install all required Software Collections and packages manually and then deploy your application, or Create a new Software Collection for your application and specify all required Software Collections and other packages as dependencies. For more information on how to manually install individual Red Hat Software Collections components, see Section 2.2, "Installing Red Hat Software Collections" . For further details on how to use Red Hat Software Collections, see Section 3.1, "Using Red Hat Software Collections" . For a detailed explanation of how to create a custom Software Collection or extend an existing one, read the Red Hat Software Collections Packaging Guide . 3.4. Red Hat Software Collections Container Images Container images based on Red Hat Software Collections include applications, daemons, and databases. The images can be run on Red Hat Enterprise Linux 7 Server and Red Hat Enterprise Linux Atomic Host. For information about their usage, see Using Red Hat Software Collections 3 Container Images . For details regarding container images based on Red Hat Software Collections versions 2.4 and earlier, see Using Red Hat Software Collections 2 Container Images . The following container images are available with Red Hat Software Collections 3.1: rhscl/devtoolset-7-toolchain-rhel7 rhscl/devtoolset-7-perftools-rhel7 rhscl/httpd-24-rhel7 rhscl/mongodb-36-rhel7 rhscl/perl-526-rhel7 rhscl/php-70-rhel7 rhscl/postgresql-10-rhel7 rhscl/ruby-25-rhel7 rhscl/varnish-5-rhel7 The following container images are based on Red Hat Software Collections 3.0: rhscl/mariadb-102-rhel7 rhscl/mongodb-34-rhel7 rhscl/nginx-112-rhel7 rhscl/nodejs-8-rhel7 rhscl/php-71-rhel7 rhscl/postgresql-96-rhel7 rhscl/python-36-rhel7 The following container images are based on Red Hat Software Collections 2.4: rhscl/devtoolset-6-toolchain-rhel7 rhscl/devtoolset-6-perftools-rhel7 rhscl/nginx-110-rhel7 rhscl/nodejs-6-rhel7 rhscl/python-27-rhel7 rhscl/ruby-24-rhel7 rhscl/ror-50-rhel7 rhscl/thermostat-16-agent-rhel7 (EOL) rhscl/thermostat-16-storage-rhel7 (EOL) The following container images are based on Red Hat Software Collections 2.3: rhscl/mysql-57-rhel7 rhscl/perl-524-rhel7 rhscl/redis-32-rhel7 rhscl/mongodb-32-rhel7 rhscl/php-56-rhel7 rhscl/python-35-rhel7 rhscl/ruby-23-rhel7 The following container images are based on Red Hat Software Collections 2.2: rhscl/devtoolset-4-toolchain-rhel7 rhscl/devtoolset-4-perftools-rhel7 rhscl/mariadb-101-rhel7 rhscl/nginx-18-rhel7 rhscl/nodejs-4-rhel7 rhscl/postgresql-95-rhel7 rhscl/ror-42-rhel7 rhscl/thermostat-1-agent-rhel7 (EOL) rhscl/varnish-4-rhel7 The following container images are based on Red Hat Software Collections 2.0: rhscl/mariadb-100-rhel7 rhscl/mongodb-26-rhel7 rhscl/mysql-56-rhel7 rhscl/nginx-16-rhel7 (EOL) rhscl/passenger-40-rhel7 rhscl/perl-520-rhel7 rhscl/postgresql-94-rhel7 rhscl/python-34-rhel7 rhscl/ror-41-rhel7 rhscl/ruby-22-rhel7 rhscl/s2i-base-rhel7 Images marked as End of Life (EOL) are no longer supported.
[ "~]USD scl enable rh-perl524 'perl hello.pl' Hello, World!", "~]USD scl enable python27 rh-postgresql95 bash", "~]USD echo USDX_SCLS python27 rh-postgresql95", "~]# service rh-postgresql95-postgresql start Starting rh-postgresql95-postgresql service: [ OK ] ~]# chkconfig rh-postgresql95-postgresql on", "~]USD scl enable rh-mariadb101 \"man rh-mariadb101\"" ]
https://docs.redhat.com/en/documentation/red_hat_software_collections/3/html/3.1_release_notes/chap-Usage
17.4.7. Apache HTTP Server or Sendmail Stops Responding During Startup
17.4.7. Apache HTTP Server or Sendmail Stops Responding During Startup If Apache HTTP Server ( httpd ) or Sendmail stops responding during startup, make sure the following line is in the /etc/hosts file:
[ "127.0.0.1 localhost.localdomain localhost" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/ch17s04s07
Providing feedback on Red Hat documentation
Providing feedback on Red Hat documentation We appreciate your feedback on our documentation. Let us know how we can improve it. Use the Create Issue form in Red Hat Jira to provide your feedback. The Jira issue is created in the Red Hat Satellite Jira project, where you can track its progress. Prerequisites Ensure you have registered a Red Hat account . Procedure Click the following link: Create Issue . If Jira displays a login error, log in and proceed after you are redirected to the form. Complete the Summary and Description fields. In the Description field, include the documentation URL, chapter or section number, and a detailed description of the issue. Do not modify any other fields in the form. Click Create .
null
https://docs.redhat.com/en/documentation/red_hat_satellite/6.16/html/managing_content/providing-feedback-on-red-hat-documentation_content-management
Chapter 57. overcloud
Chapter 57. overcloud This chapter describes the commands under the overcloud command. 57.1. overcloud admin authorize Deploy the ssh keys needed by Mistral. Usage: Table 57.1. Command arguments Value Summary -h, --help Show this help message and exit --stack STACK Name or id of heat stack (default=env: OVERCLOUD_STACK_NAME) --overcloud-ssh-user OVERCLOUD_SSH_USER User for ssh access to overcloud nodes --overcloud-ssh-key OVERCLOUD_SSH_KEY Key path for ssh access to overcloud nodes. Whenundefined the key will be autodetected. --overcloud-ssh-network OVERCLOUD_SSH_NETWORK Network name to use for ssh access to overcloud nodes. --overcloud-ssh-enable-timeout OVERCLOUD_SSH_ENABLE_TIMEOUT This option no longer has any effect. --overcloud-ssh-port-timeout OVERCLOUD_SSH_PORT_TIMEOUT Timeout for the ssh port to become active. --static-inventory STATIC_INVENTORY Path to an existing ansible inventory to use. if not specified, one will be generated in ~/tripleo-ansible- inventory.yaml --limit LIMIT_HOSTS Define which hosts or group of hosts to run the admin Authorize tasks against. 57.2. overcloud backup snapshot Takes and LVM snapshot ignoring all the rest of the parameters passed. To be able to take a snapshot, the following conditions must be met: - The disk must be configured to use LVM - There must be an lv called lv_snapshot - lv_snapshot must be 8GB or more This operation will destroy the lv_snapshot volume and replace it with snapshots of the disks. Usage: Table 57.2. Command arguments Value Summary --inventory INVENTORY Tripleo inventory file generated with tripleo-ansible- inventory command. Defaults to: /root/config- download/overcloud/tripleo-ansible-inventory.yaml --remove Removes all the snapshot volumes that were created. --revert Reverts all the disks to the moment when the snapshot was created. --extra-vars EXTRA_VARS Set additional variables as dict or as an absolute path of a JSON or YAML file type. i.e. --extra-vars {"key": "val", "key2": "val2"} i.e. --extra-vars /path/to/my_vars.yaml i.e. --extra-vars /path/to/my_vars.json. For more information about the variables that can be passed, visit: https://opendev.org/openstack/tripleo-ansible/src/bran ch/master/tripleo_ansible/roles/backup_and_restore/def aults/main.yml. 57.3. overcloud backup Backup the Overcloud Usage: Table 57.3. Command arguments Value Summary --init [INIT] Initialize environment for backup, using rear , nfs or ironic as args which will check for package install and configured ReaR or NFS server. Defaults to: rear. i.e. --init rear. WARNING: This flag will be deprecated and replaced by --setup-rear , --setup- nfs and --setup-ironic . --setup-nfs Setup the nfs server on the backup node which will install required packages and configuration on the host BackupNode in the ansible inventory. --setup-rear Setup rear on the overcloud controller hosts which will install and configure ReaR. --setup-ironic Setup rear on the overcloud controller hosts which will install and configure ReaR with ironic --cron Sets up a new cron job that by default will execute a weekly backup at Sundays midnight, but that can be customized by using the tripleo_backup_and_restore_cron extra-var. --inventory INVENTORY Tripleo inventory file generated with tripleo-ansible- inventory command. Defaults to: /root/config- download/overcloud/tripleo-ansible-inventory.yaml --storage-ip STORAGE_IP Storage ip is an optional parameter which allows for an ip of a storage server to be specified, overriding the default undercloud. 
WARNING: This flag will be deprecated in favor of --extra-vars which will allow to pass this and other variables. --extra-vars EXTRA_VARS Set additional variables as dict or as an absolute path of a JSON or YAML file type. i.e. --extra-vars {"key": "val", "key2": "val2"} i.e. --extra-vars /path/to/my_vars.yaml i.e. --extra-vars /path/to/my_vars.json. For more information about the variables that can be passed, visit: https://opendev.org/openstack/tripleo-ansible/src/bran ch/master/tripleo_ansible/roles/backup_and_restore/def aults/main.yml. 57.4. overcloud cell export Export cell information used as import of another cell Usage: Table 57.4. Command arguments Value Summary -h, --help Show this help message and exit --control-plane-stack <control plane stack> Name of the environment main heat stack to export information from. (default=Env: OVERCLOUD_STACK_NAME) --cell-stack <cell stack>, -e <cell stack> Name of the controller cell heat stack to export information from. Used in case of: control plane stack cell controller stack multiple compute stacks --output-file <output file>, -o <output file> Name of the output file for the cell data export. it will default to "<name>.yaml" --working-dir WORKING_DIR The working directory for the deployment where all input, output, and generated files are stored. Defaults to "USDHOME/overcloud-deploy/<stack>" --config-download-dir CONFIG_DOWNLOAD_DIR Directory to search for config-download export data. Defaults to USDHOME/overcloud-deploy/<stack>/config- download --force-overwrite, -f Overwrite output file if it exists. 57.5. overcloud ceph deploy Usage: Table 57.5. Positional arguments Value Summary <deployed_baremetal.yaml> Path to the environment file output from "openstack overcloud node provision". This argument may be excluded only if --ceph-spec is used. Table 57.6. Command arguments Value Summary -h, --help Show this help message and exit -o <deployed_ceph.yaml>, --output <deployed_ceph.yaml> The path to the output environment file describing the Ceph deployment to pass to the overcloud deployment. -y, --yes Skip yes/no prompt before overwriting an existing <deployed_ceph.yaml> output file (assume yes). --skip-user-create Do not create the cephadm ssh user. this user is necessary to deploy but may be created in a separate step via openstack overcloud ceph user enable . --skip-hosts-config Do not update /etc/hosts on deployed servers. by default this is configured so overcloud nodes can reach each other and the undercloud by name. --skip-container-registry-config Do not update /etc/containers/registries.conf on deployed servers. By default this is configured so overcloud nodes can pull containers from the undercloud registry. --skip-ntp Do not install/enable ntp chronyd service. by default time synchronization service chronyd is installed and enabled later by tripleo. --cephadm-ssh-user CEPHADM_SSH_USER Name of the ssh user used by cephadm. warning: if this option is used, it must be used consistently for every openstack overcloud ceph call. Defaults to ceph- admin . (default=Env: CEPHADM_SSH_USER) --stack STACK Name or id of heat stack (default=env: OVERCLOUD_STACK_NAME) --working-dir WORKING_DIR The working directory for the deployment where all input, output, and generated files will be stored. Defaults to "USDHOME/overcloud-deploy/<stack>" --roles-data ROLES_DATA Path to an alternative roles_data.yaml. used to decide which node gets which Ceph mon, mgr, or osd service based on the node's role in <deployed_baremetal.yaml>. 
--network-data NETWORK_DATA Path to an alternative network_data.yaml. used to define Ceph public_network and cluster_network. This file is searched for networks with name_lower values of storage and storage_mgmt. If none found, then search repeats but with service_net_map_replace in place of name_lower. Use --public-network-name or --cluster-network-name options to override name of the searched for network from storage or storage_mgmt to a customized name. If network_data has no storage networks, both default to ctlplane. If found network has >1 subnet, they are all combined (for routed traffic). If a network has ipv6 true, then the ipv6_subnet is retrieved instead of the ip_subnet, and the Ceph global ms_bind_ipv4 is set false and the ms_bind_ipv6 is set true. Use --config to override these defaults if desired. --public-network-name PUBLIC_NETWORK_NAME Name of the network defined in network_data.yaml which should be used for the Ceph public_network. Defaults to storage . --cluster-network-name CLUSTER_NETWORK_NAME Name of the network defined in network_data.yaml which should be used for the Ceph cluster_network. Defaults to storage_mgmt . --cluster CLUSTER Name of the ceph cluster. if set to foo , then the files /etc/ceph/<FSID>/foo.conf and /etc/ceph/<FSID>/foo.client.admin.keyring will be created. Otherwise these files will use the name ceph . Changing this means changing command line calls too, e.g. ceph health will become ceph --cluster foo health unless export CEPH_ARGS= -- cluster foo is used. --mon-ip MON_IP Ip address of the first ceph monitor. if not set, an IP from the Ceph public_network of a server with the mon label from the Ceph spec is used. IP must already be active on server. --config CONFIG Path to an existing ceph.conf with settings to be assimilated by the new cluster via cephadm bootstrap --config --cephadm-extra-args CEPHADM_EXTRA_ARGS String of extra parameters to pass cephadm. e.g. if --cephadm-extra-args --log-to-file --skip-prepare- host , then cephadm boostrap will use those options. Warning: requires --force as not all possible options ensure a functional deployment. --force Run command regardless of consequences. --ansible-extra-vars ANSIBLE_EXTRA_VARS Path to an existing ansible vars file which can override any variable in tripleo-ansible. If -- ansible-extra-vars vars.yaml is passed, then ansible-playbook -e @vars.yaml ... is used to call tripleo-ansible Ceph roles. Warning: requires --force as not all options ensure a functional deployment. --ceph-client-username CEPH_CLIENT_USERNAME Name of the cephx user. e.g. if openstack is used, then ceph auth get client.openstack will return a working user with key and capabilities on the deployed Ceph cluster. Ignored unless tripleo_cephadm_pools is set via --ansible-extra-vars. If this parameter is not set and tripleo_cephadm_keys is set via --ansible- extra-vars, then openstack will be used. Used to set CephClientUserName in --output. --ceph-client-key CEPH_CLIENT_KEY Value of the cephx key. e.g. AQC+vYNXgDAgAhAAc8UoYt+OTz5uhV7ItLdwUw== . Ignored unless tripleo_cephadm_pools is set via --ansible- extra-vars. If this parameter is not set and tripleo_cephadm_keys is set via --ansible-extra-vars, then a random key will be generated. Used to set CephClientKey in --output. --skip-cephx-keys Do not create cephx keys even if tripleo_cephadm_pools is set via --ansible-extra-vars. 
If this option is used, then even the defaults of --ceph-client-key and --ceph-client-username are ignored, but the pools defined via --ansible-extra-vars are still created. --single-host-defaults Adjust configuration defaults to suit a single-host Ceph cluster. --ntp-server NTP_SERVER Ntp servers to be used while configuring chronyd service.e.g. --ntp-server 0.pool.ntp.org,1.pool.ntp.org,2.pool.ntp.org --ntp-heat-env-file NTP_HEAT_ENV_FILE Path to existing heat environment file with ntp servers to be used while configuring chronyd service.NTP servers are extracted from NtpServer key --ceph-spec CEPH_SPEC Path to an existing ceph spec file. if not provided a spec will be generated automatically based on --roles- data and <deployed_baremetal.yaml>. The <deployed_baremetal.yaml> parameter is optional only if --ceph-spec is used. --osd-spec OSD_SPEC Path to an existing osd spec file. mutually exclusive with --ceph-spec. If the Ceph spec file is generated automatically, then the OSD spec in the Ceph spec file defaults to {data_devices: {all: true}} for all service_type osd. Use --osd-spec to override the data_devices value inside the Ceph spec file. --crush-hierarchy CRUSH_HIERARCHY Path to an existing crush hierarchy spec file. --tld TLD Postfix added to the hostname to represent canonical hostname --standalone Use single host ansible inventory. used only for development or testing environments. --container-image-prepare CONTAINER_IMAGE_PREPARE Path to an alternative container_image_prepare_defaults.yaml. Used to control which Ceph container is pulled by cephadm via the ceph_namespace, ceph_image, and ceph_tag variables in addition to registry authentication via ContainerImageRegistryCredentials. --cephadm-default-container Use the default container defined in cephadm instead of container_image_prepare_defaults.yaml. If this is used, cephadm bootstrap is not passed the --image parameter. Table 57.7. container-image-prepare overrides Value Summary The following options may be used to override individual values set via- container-image-prepare. If the example variables below were set theimage would be concatenated into quay.io/ceph/ceph:latest and a customregistry login would be used.--container-namespace CONTAINER_NAMESPACE E.g. quay.io/ceph --container-image CONTAINER_IMAGE E.g. ceph --container-tag CONTAINER_TAG E.g. latest --registry-url REGISTRY_URL- registry-username REGISTRY_USERNAME- registry-password REGISTRY_PASSWORD None 57.6. overcloud ceph spec Usage: Table 57.8. Positional arguments Value Summary <deployed_baremetal.yaml> Path to the environment file output from "openstack overcloud node provision". This argument may be excluded only if --standalone is used. Table 57.9. Command arguments Value Summary -h, --help Show this help message and exit -o <ceph_spec.yaml>, --output <ceph_spec.yaml> The path to the output cephadm spec file to pass to the "openstack overcloud ceph deploy --ceph-spec <ceph_spec.yaml>" command. -y, --yes Skip yes/no prompt before overwriting an existing <ceph_spec.yaml> output file (assume yes). --stack STACK Name or id of heat stack (default=env: OVERCLOUD_STACK_NAME) --working-dir WORKING_DIR The working directory for the deployment where all input, output, and generated files will be stored. Defaults to "USDHOME/overcloud-deploy/<stack>" --roles-data ROLES_DATA Path to an alternative roles_data.yaml. used to decide which node gets which Ceph mon, mgr, or osd service based on the node's role in <deployed_baremetal.yaml>. 
--mon-ip MON_IP Ip address of the first ceph monitor. only available with --standalone. --standalone Create a spec file for a standalone deployment. used for single server development or testing environments. --tld TLD Postfix added to the hostname to represent canonical hostname --osd-spec OSD_SPEC Path to an existing osd spec file. when the ceph spec file is generated its OSD spec defaults to {data_devices: {all: true}} for all service_type osd. Use --osd-spec to override the data_devices value inside the Ceph spec file. --crush-hierarchy CRUSH_HIERARCHY Path to an existing crush hierarchy spec file. 57.7. overcloud ceph user disable Usage: Table 57.10. Positional arguments Value Summary <ceph_spec.yaml> Path to an existing ceph spec file which describes the Ceph cluster where the cephadm SSH user will have their public and private keys removed and cephadm will be disabled. Spec file is necessary to determine which nodes to modify. WARNING: Ceph cluster administration or modification will no longer function. Table 57.11. Command arguments Value Summary -h, --help Show this help message and exit -y, --yes Skip yes/no prompt before disabling cephadm and its SSH user. (assume yes). --cephadm-ssh-user CEPHADM_SSH_USER Name of the ssh user used by cephadm. warning: if this option is used, it must be used consistently for every openstack overcloud ceph call. Defaults to ceph- admin . (default=Env: CEPHADM_SSH_USER) --stack STACK Name or id of heat stack (default=env: OVERCLOUD_STACK_NAME) --working-dir WORKING_DIR The working directory for the deployment where all input, output, and generated files will be stored. Defaults to "USDHOME/overcloud-deploy/<stack>" --standalone Use single host ansible inventory. used only for development or testing environments. Table 57.12. required named arguments Value Summary --fsid <FSID> The fsid of the ceph cluster to be disabled. required for disable option. 57.8. overcloud ceph user enable Usage: Table 57.13. Positional arguments Value Summary <ceph_spec.yaml> Path to an existing ceph spec file which describes the Ceph cluster where the cephadm SSH user will be created (if necessary) and have their public and private keys installed. Spec file is necessary to determine which nodes to modify and if a public or private key is required. Table 57.14. Command arguments Value Summary -h, --help Show this help message and exit --fsid <FSID> The fsid of the ceph cluster to be (re-)enabled. if the user disable option has been used, the FSID may be passed to the user enable option so that cephadm will be re-enabled for the Ceph cluster idenified by the FSID. --standalone Use single host ansible inventory. used only for development or testing environments. --cephadm-ssh-user CEPHADM_SSH_USER Name of the ssh user used by cephadm. warning: if this option is used, it must be used consistently for every openstack overcloud ceph call. Defaults to ceph- admin . (default=Env: CEPHADM_SSH_USER) --stack STACK Name or id of heat stack (default=env: OVERCLOUD_STACK_NAME) --working-dir WORKING_DIR The working directory for the deployment where all input, output, and generated files will be stored. Defaults to "USDHOME/overcloud-deploy/<stack>" 57.9. overcloud container image build Build overcloud container images with kolla-build. Usage: Table 57.15. Command arguments Value Summary -h, --help Show this help message and exit --config-file <yaml config file> Yaml config file specifying the images to build. may be specified multiple times. 
Order is preserved, and later files will override some options in files. Other options will append. If not specified, the default set of containers will be built. --kolla-config-file <config file> Path to a kolla config file to use. multiple config files can be specified, with values in later files taking precedence. By default, tripleo kolla conf file /usr/share/tripleo-common/container- images/tripleo_kolla_config_overrides.conf is added. --list-images Show the images which would be built instead of building them. --list-dependencies Show the image build dependencies instead of building them. --exclude <container-name> Name of a container to match against the list of containers to be built to skip. Can be specified multiple times. --use-buildah Use buildah instead of docker to build the images with Kolla. --work-dir <container builds directory> Tripleo container builds directory, storing configs and logs for each image and its dependencies. --build-timeout <build timeout in seconds> Build timeout in seconds. 57.10. overcloud container image prepare Generate files defining the images, tags and registry. Usage: Table 57.16. Command arguments Value Summary -h, --help Show this help message and exit --template-file <yaml template file> Yaml template file which the images config file will be built from. Default: /usr/share/tripleo-common/container- images/tripleo_containers.yaml.j2 --push-destination <location> Location of image registry to push images to. if specified, a push_destination will be set for every image entry. --tag <tag> Override the default tag substitution. if --tag-from- label is specified, start discovery with this tag. Default: 17.1 --tag-from-label <image label> Use the value of the specified label(s) to discover the tag. Labels can be combined in a template format, for example: {version}-{release} --namespace <namespace> Override the default namespace substitution. Default: registry.redhat.io/rhosp-rhel9 --prefix <prefix> Override the default name prefix substitution. Default: openstack- --suffix <suffix> Override the default name suffix substitution. Default: --set <variable=value> Set the value of a variable in the template, even if it has no dedicated argument such as "--suffix". --exclude <regex> Pattern to match against resulting imagename entries to exclude from the final output. Can be specified multiple times. --include <regex> Pattern to match against resulting imagename entries to include in final output. Can be specified multiple times, entries not matching any --include will be excluded. --exclude is ignored if --include is used. --output-images-file <file path> File to write resulting image entries to, as well as stdout. Any existing file will be overwritten. --environment-file <file path>, -e <file path> Environment files specifying which services are containerized. Entries will be filtered to only contain images used by containerized services. (Can be specified more than once.) --environment-directory <HEAT ENVIRONMENT DIRECTORY> Environment file directories that are automatically added to the update command. Entries will be filtered to only contain images used by containerized services. Can be specified more than once. Files in directories are loaded in ascending sort order. --output-env-file <file path> File to write heat environment file which specifies all image parameters. Any existing file will be overwritten. --roles-file ROLES_FILE, -r ROLES_FILE Roles file, overrides the default roles_data.yaml in the t-h-t templates directory used for deployment. 
May be an absolute path or the path relative to the templates dir. --modify-role MODIFY_ROLE Name of ansible role to run between every image upload pull and push. --modify-vars MODIFY_VARS Ansible variable file containing variables to use when invoking the role --modify-role. 57.11. overcloud container image tag discover Discover the versioned tag for an image. Usage: Table 57.17. Command arguments Value Summary -h, --help Show this help message and exit --image <container image> Fully qualified name of the image to discover the tag for (Including registry and stable tag). --tag-from-label <image label> Use the value of the specified label(s) to discover the tag. Labels can be combined in a template format, for example: {version}-{release} 57.12. overcloud container image upload Push overcloud container images to registries. Usage: Table 57.18. Command arguments Value Summary -h, --help Show this help message and exit --config-file <yaml config file> Yaml config file specifying the image build. may be specified multiple times. Order is preserved, and later files will override some options in files. Other options will append. --cleanup <full, partial, none> Cleanup behavior for local images left after upload. The default full will attempt to delete all local images. partial will leave images required for deployment on this host. none will do no cleanup. 57.13. overcloud credentials Create the overcloudrc files Usage: Table 57.19. Positional arguments Value Summary stack The name of the stack you want to create rc files for. Table 57.20. Command arguments Value Summary -h, --help Show this help message and exit --directory [DIRECTORY] The directory to create the rc files. defaults to the current directory. --working-dir WORKING_DIR The working directory that contains the input, output, and generated files for the deployment. Defaults to "USDHOME/overcloud-deploy/<stack>" 57.14. overcloud delete Delete overcloud stack and plan Usage: Table 57.21. Positional arguments Value Summary stack Name or id of heat stack to delete(default=env: OVERCLOUD_STACK_NAME) Table 57.22. Command arguments Value Summary -h, --help Show this help message and exit -y, --yes Skip yes/no prompt (assume yes). -s, --skip-ipa-cleanup Skip removing overcloud hosts, services, and dns records from FreeIPA. This is particularly relevant for deployments using certificates from FreeIPA for TLS. By default, overcloud hosts, services, and DNS records will be removed from FreeIPA before deleting the overcloud. Using this option might require you to manually cleanup FreeIPA later. -b <baremetal_deployment.yaml>, --baremetal-deployment <baremetal_deployment.yaml> Configuration file describing the baremetal deployment --networks-file <network_data.yaml> Configuration file describing the network deployment to enable unprovisioning of networks. --network-ports Enable unprovisioning of network ports --heat-type {installed,pod,container,native} The type of heat process that was used to executethe deployment. pod (Default): Use an ephemeral Heat pod. installed: Use the system installed Heat. container: Use an ephemeral Heat container. native: Use an ephemeral Heat process. 57.15. overcloud deploy Deploy Overcloud Usage: Table 57.23. Command arguments Value Summary --templates [TEMPLATES] The directory containing the heat templates to deploy --stack STACK Stack name to create or update --timeout <TIMEOUT>, -t <TIMEOUT> Deployment timeout in minutes. --libvirt-type {kvm,qemu} Libvirt domain type. 
--ntp-server NTP_SERVER The ntp for overcloud nodes. --no-proxy NO_PROXY A comma separated list of hosts that should not be proxied. --overcloud-ssh-user OVERCLOUD_SSH_USER User for ssh access to overcloud nodes --overcloud-ssh-key OVERCLOUD_SSH_KEY Key path for ssh access to overcloud nodes. Whenundefined the key will be autodetected. --overcloud-ssh-network OVERCLOUD_SSH_NETWORK Network name to use for ssh access to overcloud nodes. --overcloud-ssh-enable-timeout OVERCLOUD_SSH_ENABLE_TIMEOUT This option no longer has any effect. --overcloud-ssh-port-timeout OVERCLOUD_SSH_PORT_TIMEOUT Timeout for the ssh port to become active. --environment-file <HEAT ENVIRONMENT FILE>, -e <HEAT ENVIRONMENT FILE> Environment files to be passed to the heat stack- create or heat stack-update command. (Can be specified more than once.) --environment-directory <HEAT ENVIRONMENT DIRECTORY> Environment file directories that are automatically added to the heat stack-create or heat stack-update commands. Can be specified more than once. Files in directories are loaded in ascending sort order. --roles-file ROLES_FILE, -r ROLES_FILE Roles file, overrides the default roles_data.yaml in the --templates directory. May be an absolute path or the path relative to --templates --networks-file NETWORKS_FILE, -n NETWORKS_FILE Networks file, overrides the default network_data_default.yaml in the --templates directory --vip-file VIP_FILE Configuration file describing the network virtual ips. --no-cleanup Don't cleanup temporary files, just log their location --update-plan-only Deprecated: only update the plan. do not perform the actual deployment. NOTE: Will move to a discrete command in a future release. Not supported anymore. --validation-errors-nonfatal Allow the deployment to continue in spite of validation errors. Note that attempting deployment while errors exist is likely to fail. --validation-warnings-fatal Exit if there are warnings from the configuration pre- checks. --disable-validations Deprecated. disable the pre-deployment validations entirely. These validations are the built-in pre- deployment validations. To enable external validations from tripleo-validations, use the --run-validations flag. These validations are now run via the external validations in tripleo-validations. --inflight-validations Activate in-flight validations during the deploy. in- flight validations provide a robust way to ensure deployed services are running right after their activation. Defaults to False. --dry-run Only run validations, but do not apply any changes. --run-validations Run external validations from the tripleo-validations project. --skip-postconfig Skip the overcloud post-deployment configuration. --force-postconfig Force the overcloud post-deployment configuration. --skip-deploy-identifier Skip generation of a unique identifier for the DeployIdentifier parameter. The software configuration deployment steps will only be triggered if there is an actual change to the configuration. This option should be used with Caution, and only if there is confidence that the software configuration does not need to be run, such as when scaling out certain roles. --answers-file ANSWERS_FILE Path to a yaml file with arguments and parameters. --disable-password-generation Disable password generation. --deployed-server Deprecated: use pre-provisioned overcloud nodes.now the default and this CLI option has no effect. --provision-nodes Provision overcloud nodes with heat. --config-download Deprecated: run deployment via config-download mechanism. 
This is now the default, and this CLI options has no effect. --no-config-download, --stack-only Disable the config-download workflow and only create the stack and download the config. No software configuration, setup, or any changes will be applied to overcloud nodes. --config-download-only Disable the stack create and setup, and only run the config-download workflow to apply the software configuration. Requires that config-download setup was previously completed, either with --stack-only and --setup-only or a full deployment --setup-only Disable the stack and config-download workflow to apply the software configuration and only run the setup to enable ssh connectivity. --config-dir CONFIG_DIR The directory where the configuration files will be pushed --config-type CONFIG_TYPE Only used when "--setup-only" is invoked. type of object config to be extract from the deployment, defaults to all keys available --no-preserve-config Only used when "--setup-only" is invoked. if specified, will delete and recreate the --config-dir if it already exists. Default is to use the existing dir location and overwrite files. Files in --config- dir not from the stack will be preserved by default. --output-dir OUTPUT_DIR Directory to use for saved output when using --config- download. When not specified, <working-dir>/config- download will be used. --override-ansible-cfg OVERRIDE_ANSIBLE_CFG Path to ansible configuration file. the configuration in the file will override any configuration used by config-download by default. --config-download-timeout CONFIG_DOWNLOAD_TIMEOUT Timeout (in minutes) to use for config-download steps. If unset, will default to however much time is leftover from the --timeout parameter after the stack operation. --deployment-python-interpreter DEPLOYMENT_PYTHON_INTERPRETER The path to python interpreter to use for the deployment actions. This may need to be used if deploying on a python2 host from a python3 system or vice versa. -b [<baremetal_deployment.yaml>], --baremetal-deployment [<baremetal_deployment.yaml>] Deploy baremetal nodes, network and virtual ip addresses as defined in baremetal_deployment.yaml along with overcloud. If no baremetal_deployment YAML file is given, the tripleo-<stack_name>-baremetal- deployment.yaml file in the working-dir will be used. --network-config Apply network config to provisioned nodes. (implies " --network-ports") --limit LIMIT A string that identifies a single node or comma- separatedlist of nodes the config-download Ansible playbook execution will be limited to. For example: --limit "compute-0,compute-1,compute-5". --tags TAGS A list of tags to use when running the config- download ansible-playbook command. --skip-tags SKIP_TAGS A list of tags to skip when running the config- download ansible-playbook command. --ansible-forks ANSIBLE_FORKS The number of ansible forks to use for the config- download ansible-playbook command. --disable-container-prepare Disable the container preparation actions to prevent container tags from being updated and new containers from being fetched. If you skip this but do not have the container parameters configured, the deployment action may fail. --working-dir WORKING_DIR The working directory for the deployment where all input, output, and generated files will be stored. Defaults to "USDHOME/overcloud-deploy/<stack>" --heat-type {pod,container,native} The type of heat process to use to execute the deployment. pod (Default): Use an ephemeral Heat pod. container (Experimental): Use an ephemeral Heat container. 
native (Experimental): Use an ephemeral Heat process. --heat-container-api-image <HEAT_CONTAINER_API_IMAGE> The container image to use when launching the heat-api process. Only used when --heat-type=pod. Defaults to: localhost/tripleo/openstack-heat-api:ephemeral --heat-container-engine-image <HEAT_CONTAINER_ENGINE_IMAGE> The container image to use when launching the heat- engine process. Only used when --heat-type=pod. Defaults to: localhost/tripleo/openstack-heat- engine:ephemeral --rm-heat If specified and --heat-type is container or pod any existing container or pod of a ephemeral Heat process will be deleted first. Ignored if --heat-type is native. --skip-heat-pull When --heat-type is pod or container, assume the container image has already been pulled --disable-protected-resource-types Disable protected resource type overrides. resources types that are used internally are protected, and cannot be overridden in the user environment. Setting this argument disables the protection, allowing the protected resource types to be override in the user environment. -y, --yes Use -y or --yes to skip any confirmation required before the deploy operation. Use this with caution! --allow-deprecated-network-data Set this to allow using deprecated network data yaml definition schema. 57.16. overcloud export ceph Export Ceph information used as import of another stack Export Ceph information from one or more stacks to be used as input of another stack. Creates a valid YAML file with the CephExternalMultiConfig parameter populated. Usage: Table 57.24. Command arguments Value Summary -h, --help Show this help message and exit --stack <stack> Name of the overcloud stack(s) to export ceph information from. If a comma delimited list of stacks is passed, Ceph information for all stacks will be exported into a single file. (default=Env: OVERCLOUD_STACK_NAME) --cephx-key-client-name <cephx>, -k <cephx> Name of the cephx client key to export. (default=openstack) --output-file <output file>, -o <output file> Name of the output file for the ceph data export. Defaults to "ceph-export-<STACK>.yaml" if one stack is provided. Defaults to "ceph-export-<N>-stacks.yaml" if N stacks are provided. --force-overwrite, -f Overwrite output file if it exists. --config-download-dir CONFIG_DOWNLOAD_DIR Directory to search for config-download export data. Defaults to USDHOME/overcloud-deploy/<stack>/config- download 57.17. overcloud export Export stack information used as import of another stack Usage: Table 57.25. Command arguments Value Summary -h, --help Show this help message and exit --stack <stack> Name of the environment main heat stack to export information from. (default=overcloud) --output-file <output file>, -o <output file> Name of the output file for the stack data export. it will default to "<name>.yaml" --force-overwrite, -f Overwrite output file if it exists. --working-dir WORKING_DIR The working directory for the deployment where all input, output, and generated files are stored. Defaults to "USDHOME/overcloud-deploy/<stack>" --config-download-dir CONFIG_DOWNLOAD_DIR Directory to search for config-download export data. Defaults to USDHOME/overcloud-deploy/<stack>/config- download --no-password-excludes Dont exclude certain passwords from the password export. Defaults to False in that some passwords will be excluded that are not typically necessary. 57.18. 
overcloud external-update run Run external minor update Ansible playbook This will run the external minor update Ansible playbook, executing tasks from the undercloud. The update playbooks are made available after completion of the overcloud update prepare command. Usage: Table 57.26. Command arguments Value Summary -h, --help Show this help message and exit --static-inventory STATIC_INVENTORY Deprecated: tripleo-ansible-inventory.yaml in working dir will be used. --ssh-user SSH_USER Deprecated: only tripleo-admin should be used as ssh user. --tags TAGS A string specifying the tag or comma separated list of tags to be passed as --tags to ansible-playbook. --skip-tags SKIP_TAGS A string specifying the tag or comma separated list of tags to be passed as --skip-tags to ansible-playbook. --stack STACK Name or id of heat stack (default=env: OVERCLOUD_STACK_NAME) -e EXTRA_VARS, --extra-vars EXTRA_VARS Set additional variables as key=value or yaml/json -y, --yes Use -y or --yes to skip the confirmation required before any upgrade operation. Use this with caution! --limit LIMIT A string that identifies a single node or comma- separatedlist of nodes the config-download Ansible playbook execution will be limited to. For example: --limit "compute-0,compute-1,compute-5". --ansible-forks ANSIBLE_FORKS The number of ansible forks to use for the config- download ansible-playbook command. --refresh Deprecated: refresh the config-download playbooks.use overcloud update prepare instead to refresh playbooks. 57.19. overcloud external-upgrade run Run external major upgrade Ansible playbook This will run the external major upgrade Ansible playbook, executing tasks from the undercloud. The upgrade playbooks are made available after completion of the overcloud upgrade prepare command. Usage: Table 57.27. Command arguments Value Summary -h, --help Show this help message and exit --static-inventory STATIC_INVENTORY Deprecated: tripleo-ansible-inventory.yaml in working dir will be used. --ssh-user SSH_USER Deprecated: only tripleo-admin should be used as ssh user. --tags TAGS A string specifying the tag or comma separated list of tags to be passed as --tags to ansible-playbook. --skip-tags SKIP_TAGS A string specifying the tag or comma separated list of tags to be passed as --skip-tags to ansible-playbook. --stack STACK Name or id of heat stack (default=env: OVERCLOUD_STACK_NAME) -e EXTRA_VARS, --extra-vars EXTRA_VARS Set additional variables as key=value or yaml/json -y, --yes Use -y or --yes to skip the confirmation required before any upgrade operation. Use this with caution! --limit LIMIT A string that identifies a single node or comma- separatedlist of nodes the config-download Ansible playbook execution will be limited to. For example: --limit "compute-0,compute-1,compute-5". --ansible-forks ANSIBLE_FORKS The number of ansible forks to use for the config- download ansible-playbook command. 57.20. overcloud generate fencing Generate fencing parameters Usage: Table 57.28. Positional arguments Value Summary instackenv None Table 57.29. Command arguments Value Summary -h, --help Show this help message and exit -a FENCE_ACTION, --action FENCE_ACTION Deprecated: this option is ignored. --delay DELAY Wait delay seconds before fencing is started --ipmi-lanplus Deprecated: this is the default. --ipmi-no-lanplus Do not use lanplus. defaults to: false --ipmi-cipher IPMI_CIPHER Ciphersuite to use (same as ipmitool -c parameter. --ipmi-level IPMI_LEVEL Privilegel level on ipmi device. 
valid levels: callback, user, operator, administrator. --output OUTPUT Write parameters to a file 57.21. overcloud image build Build images for the overcloud Usage: Table 57.30. Command arguments Value Summary -h, --help Show this help message and exit --config-file <yaml config file> Yaml config file specifying the image build. may be specified multiple times. Order is preserved, and later files will override some options in files. Other options will append. --image-name <image name> Name of image to build. may be specified multiple times. If unspecified, will build all images in given YAML files. --no-skip Skip build if cached image exists. --output-directory OUTPUT_DIRECTORY Output directory for images. defaults to USDTRIPLEO_ROOT,or current directory if unset. --temp-dir TEMP_DIR Temporary directory to use when building the images. Defaults to USDTMPDIR or current directory if unset. 57.22. overcloud image upload Make existing image files available for overcloud deployment. Usage: Table 57.31. Command arguments Value Summary -h, --help Show this help message and exit --image-path IMAGE_PATH Path to directory containing image files --os-image-name OS_IMAGE_NAME Openstack disk image filename --ironic-python-agent-name IPA_NAME Openstack ironic-python-agent (agent) image filename --http-boot HTTP_BOOT Root directory for the ironic-python-agent image. if uploading images for multiple architectures/platforms, vary this argument such that a distinct folder is created for each architecture/platform. --update-existing Update images if already exist --whole-disk When set, the overcloud-full image to be uploaded will be considered as a whole disk one --architecture ARCHITECTURE Architecture type for these images, x86_64 , i386 and ppc64le are common options. This option should match at least one arch value in instackenv.json --platform PLATFORM Platform type for these images. platform is a sub- category of architecture. For example you may have generic images for x86_64 but offer images specific to SandyBridge (SNB). --image-type {os,ironic-python-agent} If specified, allows to restrict the image type to upload (os for the overcloud image or ironic-python- agent for the ironic-python-agent one) --progress Show progress bar for upload files action --local Deprecated: copy files locally, even if there is an image service endpoint. The default has been changed to copy files locally. --no-local Upload files to image service. --local-path LOCAL_PATH Root directory for image file copy destination when there is no image endpoint, or when --local is specified 57.23. overcloud netenv validate Validate the network environment file. Usage: Table 57.32. Command arguments Value Summary -h, --help Show this help message and exit -f NETENV, --file NETENV Path to the network environment file 57.24. overcloud network extract Usage: Table 57.33. Command arguments Value Summary -h, --help Show this help message and exit --stack STACK Name or id of heat stack (default=env: OVERCLOUD_STACK_NAME) -o <network_deployment.yaml>, --output <network_deployment.yaml> The output file path describing the network deployment -y, --yes Skip yes/no prompt for existing files (assume yes). 57.25. overcloud network provision Usage: Table 57.34. Positional arguments Value Summary <network_data.yaml> Configuration file describing the network deployment. Table 57.35. 
Command arguments Value Summary -h, --help Show this help message and exit -o <network_environment.yaml>, --output <network_environment.yaml> The output network environment file path. -y, --yes Skip yes/no prompt for existing files (assume yes). --templates TEMPLATES The directory containing the heat templates to deploy --stack STACK Name or id of heat stack, when set the networks file will be copied to the working dir. --working-dir WORKING_DIR The working directory for the deployment where all input, output, and generated files will be stored. Defaults to "USDHOME/overcloud-deploy-<stack>" 57.26. overcloud network unprovision Usage: Table 57.36. Positional arguments Value Summary <network_data.yaml> Configuration file describing the network deployment. Table 57.37. Command arguments Value Summary -h, --help Show this help message and exit -y, --yes Skip yes/no prompt (assume yes). 57.27. overcloud network vip extract Usage: Table 57.38. Command arguments Value Summary -h, --help Show this help message and exit --stack STACK Name of heat stack (default=env: overcloud_stack_name) -o <vip_data.yaml>, --output <vip_data.yaml> The output file path describing the virtual ip deployment -y, --yes Skip yes/no prompt for existing files (assume yes). 57.28. overcloud network vip provision Usage: Table 57.39. Positional arguments Value Summary <vip_data.yaml> Configuration file describing the network virtual ips. Table 57.40. Command arguments Value Summary -h, --help Show this help message and exit --stack STACK Name of heat stack (default=env: overcloud_stack_name) -o <vip_environment.yaml>, --output <vip_environment.yaml> The output virtual ip environment file path. -y, --yes Skip yes/no prompt for existing files (assume yes). --templates TEMPLATES The directory containing the heat templates to deploy --working-dir WORKING_DIR The working directory for the deployment where all input, output, and generated files will be stored. Defaults to "USDHOME/overcloud-deploy-<stack>" 57.29. overcloud node bios configure Apply BIOS configuration on given nodes Usage: Table 57.41. Positional arguments Value Summary <node_uuid> Baremetal node uuids for the node(s) to configure bios Table 57.42. Command arguments Value Summary -h, --help Show this help message and exit --all-manageable Configure bios for all nodes currently in manageable state --configuration <configuration> Bios configuration (yaml/json string or file name). 57.30. overcloud node bios reset Reset BIOS configuration to factory default Usage: Table 57.43. Positional arguments Value Summary <node_uuid> Baremetal node uuids for the node(s) to reset bios Table 57.44. Command arguments Value Summary -h, --help Show this help message and exit --all-manageable Reset bios on all nodes currently in manageable state 57.31. overcloud node clean Run node(s) through cleaning. Usage: Table 57.45. Positional arguments Value Summary <node_uuid> Baremetal node uuids for the node(s) to be cleaned Table 57.46. Command arguments Value Summary -h, --help Show this help message and exit --all-manageable Clean all nodes currently in manageable state --provide Provide (make available) the nodes once cleaned 57.32. overcloud node configure Configure Node boot options. Usage: Table 57.47. Positional arguments Value Summary <node_uuid> Baremetal node uuids for the node(s) to be configured Table 57.48. 
Command arguments Value Summary -h, --help Show this help message and exit --all-manageable Configure all nodes currently in manageable state --deploy-kernel DEPLOY_KERNEL Image with deploy kernel. --deploy-ramdisk DEPLOY_RAMDISK Image with deploy ramdisk. --instance-boot-option {local,netboot} Whether to set instances for booting from local hard drive (local) or network (netboot). --boot-mode {uefi,bios} Whether to set the boot mode to uefi (uefi) or legacy BIOS (bios) --root-device ROOT_DEVICE Define the root device for nodes. can be either a list of device names (without /dev) to choose from or one of two strategies: largest or smallest. For it to work this command should be run after the introspection. --root-device-minimum-size ROOT_DEVICE_MINIMUM_SIZE Minimum size (in gib) of the detected root device. Used with --root-device. --overwrite-root-device-hints Whether to overwrite existing root device hints when --root-device is used. 57.33. overcloud node delete Delete overcloud nodes. Usage: Table 57.49. Positional arguments Value Summary <node> Node id(s) to delete (otherwise specified in the --baremetal-deployment file) Table 57.50. Command arguments Value Summary -h, --help Show this help message and exit -b <BAREMETAL DEPLOYMENT FILE>, --baremetal-deployment <BAREMETAL DEPLOYMENT FILE> Configuration file describing the baremetal deployment --stack STACK Name or id of heat stack to scale (default=env: OVERCLOUD_STACK_NAME) --timeout <TIMEOUT> Timeout in minutes to wait for the nodes to be deleted. Keep in mind that due to keystone session duration that timeout has an upper bound of 4 hours --overcloud-ssh-port-timeout OVERCLOUD_SSH_PORT_TIMEOUT Timeout for the ssh port to become active. -y, --yes Skip yes/no prompt (assume yes). 57.34. overcloud node discover Discover overcloud nodes by polling their BMCs. Usage: Table 57.51. Command arguments Value Summary -h, --help Show this help message and exit --ip <ips> Ip address(es) to probe --range <range> Ip range to probe --credentials <key:value> Key/value pairs of possible credentials --port <ports> Bmc port(s) to probe --introspect Introspect the imported nodes --run-validations Run the pre-deployment validations. these external validations are from the TripleO Validations project. --provide Provide (make available) the nodes --no-deploy-image Skip setting the deploy kernel and ramdisk. --instance-boot-option {local,netboot} Whether to set instances for booting from local hard drive (local) or network (netboot). --concurrency CONCURRENCY Maximum number of nodes to introspect at once. --node-timeout NODE_TIMEOUT Maximum timeout for node introspection. --max-retries MAX_RETRIES Maximum introspection retries. --retry-timeout RETRY_TIMEOUT Maximum timeout between introspectionretries 57.35. overcloud node extract provisioned Usage: Table 57.52. Command arguments Value Summary -h, --help Show this help message and exit --stack STACK Name or id of heat stack (default=env: OVERCLOUD_STACK_NAME) -o <baremetal_deployment.yaml>, --output <baremetal_deployment.yaml> The output file path describing the baremetal deployment -y, --yes Skip yes/no prompt for existing files (assume yes). --roles-file ROLES_FILE, -r ROLES_FILE Role data definition file --networks-file NETWORKS_FILE, -n NETWORKS_FILE Network data definition file --working-dir WORKING_DIR The working directory for the deployment where all input, output, and generated files will be stored. Defaults to "USDHOME/overcloud-deploy/<stack>" 57.36. 
overcloud node import Import baremetal nodes from a JSON, YAML or CSV file. The node status will be set to manageable by default. Usage: Table 57.53. Positional arguments Value Summary env_file None Table 57.54. Command arguments Value Summary -h, --help Show this help message and exit --introspect Introspect the imported nodes --run-validations Run the pre-deployment validations. these external validations are from the TripleO Validations project. --validate-only Validate the env_file and then exit without actually importing the nodes. --provide Provide (make available) the nodes --no-deploy-image Skip setting the deploy kernel and ramdisk. --instance-boot-option {local,netboot} Whether to set instances for booting from local hard drive (local) or network (netboot) --boot-mode {uefi,bios} Whether to set the boot mode to uefi (uefi) or legacy BIOS (bios) --http-boot HTTP_BOOT Root directory for the ironic-python-agent image --concurrency CONCURRENCY Maximum number of nodes to introspect at once. 57.37. overcloud node introspect Introspect specified nodes or all nodes in manageable state. Usage: Table 57.55. Positional arguments Value Summary <node_uuid> Baremetal node uuids for the node(s) to be introspected Table 57.56. Command arguments Value Summary -h, --help Show this help message and exit --all-manageable Introspect all nodes currently in manageable state --provide Provide (make available) the nodes once introspected --run-validations Run the pre-deployment validations. these external validations are from the TripleO Validations project. --concurrency CONCURRENCY Maximum number of nodes to introspect at once. --node-timeout NODE_TIMEOUT Maximum timeout for node introspection. --max-retries MAX_RETRIES Maximum introspection retries. --retry-timeout RETRY_TIMEOUT Maximum timeout between introspection retries 57.38. overcloud node provide Mark nodes as available based on UUIDs or current manageable state. Usage: Table 57.57. Positional arguments Value Summary <node_uuid> Baremetal node uuids for the node(s) to be provided Table 57.58. Command arguments Value Summary -h, --help Show this help message and exit --all-manageable Provide all nodes currently in manageable state 57.39. overcloud node provision Provision new nodes using Ironic. Usage: Table 57.59. Positional arguments Value Summary <baremetal_deployment.yaml> Configuration file describing the baremetal deployment Table 57.60. Command arguments Value Summary -h, --help Show this help message and exit -o OUTPUT, --output OUTPUT The output environment file path -y, --yes Skip yes/no prompt for existing files (assume yes). --stack STACK Name or id of heat stack (default=env: OVERCLOUD_STACK_NAME) --overcloud-ssh-user OVERCLOUD_SSH_USER User for ssh access to newly deployed nodes --overcloud-ssh-key OVERCLOUD_SSH_KEY Key path for ssh access to overcloud nodes. When undefined the key will be autodetected. --concurrency CONCURRENCY Maximum number of nodes to provision at once. (default=20) --timeout TIMEOUT Number of seconds to wait for the node provision to complete. (default=3600) --network-ports Deprecated! network ports will always be provisioned. Enable provisioning of network ports --network-config Apply network config to provisioned nodes. (implies "--network-ports") --templates TEMPLATES The directory containing the heat templates to deploy --working-dir WORKING_DIR The working directory for the deployment where all input, output, and generated files will be stored. Defaults to "USDHOME/overcloud-deploy-<stack>" 57.40. 
overcloud node unprovision Unprovisions nodes using Ironic. Usage: Table 57.61. Positional arguments Value Summary <baremetal_deployment.yaml> Configuration file describing the baremetal deployment Table 57.62. Command arguments Value Summary -h, --help Show this help message and exit --stack STACK Name or id of heat stack (default=env: OVERCLOUD_STACK_NAME) --all Unprovision every instance in the deployment -y, --yes Skip yes/no prompt (assume yes) --network-ports Deprecated! network ports will always be unprovisioned. Enable unprovisioning of network ports 57.41. overcloud profiles list List overcloud node profiles Usage: Table 57.63. Command arguments Value Summary -h, --help Show this help message and exit --all List all nodes, even those not available to nova. --control-scale CONTROL_SCALE New number of control nodes. --compute-scale COMPUTE_SCALE New number of compute nodes. --ceph-storage-scale CEPH_STORAGE_SCALE New number of ceph storage nodes. --block-storage-scale BLOCK_STORAGE_SCALE New number of cinder storage nodes. --swift-storage-scale SWIFT_STORAGE_SCALE New number of swift storage nodes. --control-flavor CONTROL_FLAVOR Nova flavor to use for control nodes. --compute-flavor COMPUTE_FLAVOR Nova flavor to use for compute nodes. --ceph-storage-flavor CEPH_STORAGE_FLAVOR Nova flavor to use for ceph storage nodes. --block-storage-flavor BLOCK_STORAGE_FLAVOR Nova flavor to use for cinder storage nodes --swift-storage-flavor SWIFT_STORAGE_FLAVOR Nova flavor to use for swift storage nodes Table 57.64. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 57.65. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 57.66. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 57.67. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 57.42. overcloud profiles match Assign and validate profiles on nodes Usage: Table 57.68. Command arguments Value Summary -h, --help Show this help message and exit --dry-run Only run validations, but do not apply any changes. --control-scale CONTROL_SCALE New number of control nodes. --compute-scale COMPUTE_SCALE New number of compute nodes. --ceph-storage-scale CEPH_STORAGE_SCALE New number of ceph storage nodes. --block-storage-scale BLOCK_STORAGE_SCALE New number of cinder storage nodes. --swift-storage-scale SWIFT_STORAGE_SCALE New number of swift storage nodes. --control-flavor CONTROL_FLAVOR Nova flavor to use for control nodes. --compute-flavor COMPUTE_FLAVOR Nova flavor to use for compute nodes. --ceph-storage-flavor CEPH_STORAGE_FLAVOR Nova flavor to use for ceph storage nodes. 
--block-storage-flavor BLOCK_STORAGE_FLAVOR Nova flavor to use for cinder storage nodes --swift-storage-flavor SWIFT_STORAGE_FLAVOR Nova flavor to use for swift storage nodes 57.43. overcloud raid create Create RAID on given nodes Usage: Table 57.69. Positional arguments Value Summary configuration Raid configuration (yaml/json string or file name). Table 57.70. Command arguments Value Summary -h, --help Show this help message and exit --node NODE Nodes to create raid on (expected to be in manageable state). Can be specified multiple times. 57.44. overcloud restore Restore the Overcloud Usage: Table 57.71. Command arguments Value Summary --inventory INVENTORY Tripleo inventory file generated with tripleo-ansible-inventory command. Defaults to: /root/config-download/overcloud/tripleo-ansible-inventory.yaml --stack [STACK] Name or id of the stack to be used (default=env: OVERCLOUD_STACK_NAME) --node-name NODE_NAME Controller name is a required parameter which defines the controller node to be restored. --extra-vars EXTRA_VARS Set additional variables as dict or as an absolute path of a JSON or YAML file type. i.e. --extra-vars {"key": "val", "key2": "val2"} i.e. --extra-vars /path/to/my_vars.yaml i.e. --extra-vars /path/to/my_vars.json. For more information about the variables that can be passed, visit: https://opendev.org/openstack/tripleo-ansible/src/branch/master/tripleo_ansible/roles/backup_and_restore/defaults/main.yml. 57.45. overcloud role list List available roles. Usage: Table 57.72. Command arguments Value Summary -h, --help Show this help message and exit --roles-path <roles directory> Filesystem path containing the role yaml files. by default this is /usr/share/openstack-tripleo-heat-templates/roles 57.46. overcloud role show Show information about a given role. Usage: Table 57.73. Positional arguments Value Summary <role> Role to display more information about. Table 57.74. Command arguments Value Summary -h, --help Show this help message and exit --roles-path <roles directory> Filesystem path containing the role yaml files. by default this is /usr/share/openstack-tripleo-heat-templates/roles 57.47. overcloud roles generate Generate roles_data.yaml file Usage: Table 57.75. Positional arguments Value Summary <role> List of roles to use to generate the roles_data.yaml file for the deployment. NOTE: Ordering is important if no role has the "primary" and "controller" tags. If no role is tagged then the first role listed will be considered the primary role. This usually is the controller role. Table 57.76. Command arguments Value Summary -h, --help Show this help message and exit --roles-path <roles directory> Filesystem path containing the role yaml files. by default this is /usr/share/openstack-tripleo-heat-templates/roles -o <output file>, --output-file <output file> File to capture all output to. for example, roles_data.yaml --skip-validate Skip role metadata type validation when generating the roles_data.yaml 57.48. overcloud status Get deployment status Usage: Table 57.77. Command arguments Value Summary -h, --help Show this help message and exit --plan PLAN, --stack PLAN Name of the stack/plan. (default: overcloud) --working-dir WORKING_DIR The working directory for the deployment where all input, output, and generated files are stored. Defaults to "USDHOME/overcloud-deploy/<stack>" 57.49. overcloud support report collect Run sosreport on selected servers. Usage: Table 57.78. 
Positional arguments Value Summary server_name Server name, group name, or partial name to match. for example "Controller" will match all controllers for an environment. Table 57.79. Command arguments Value Summary -h, --help Show this help message and exit --stack STACK Stack name to use for log collection. -c CONTAINER, --container CONTAINER This option no-longer has any effect. -o DESTINATION, --output DESTINATION Output directory for the report --skip-container-delete This option no-longer has any effect. -t TIMEOUT, --timeout TIMEOUT This option no-longer has any effect. -n CONCURRENCY, --concurrency CONCURRENCY This option no-longer has any effect. --collect-only This option no-longer has any effect. --download-only This option no-longer has any effect. 57.50. overcloud update prepare Use Heat to update and render the new Ansible playbooks based on the updated templates. These playbooks will be rendered and used during the update run step to perform the minor update of the overcloud nodes. Usage: Table 57.80. Command arguments Value Summary --templates [TEMPLATES] The directory containing the heat templates to deploy --stack STACK Stack name to create or update --timeout <TIMEOUT>, -t <TIMEOUT> Deployment timeout in minutes. --libvirt-type {kvm,qemu} Libvirt domain type. --ntp-server NTP_SERVER The ntp for overcloud nodes. --no-proxy NO_PROXY A comma separated list of hosts that should not be proxied. --overcloud-ssh-user OVERCLOUD_SSH_USER User for ssh access to overcloud nodes --overcloud-ssh-key OVERCLOUD_SSH_KEY Key path for ssh access to overcloud nodes. Whenundefined the key will be autodetected. --overcloud-ssh-network OVERCLOUD_SSH_NETWORK Network name to use for ssh access to overcloud nodes. --overcloud-ssh-enable-timeout OVERCLOUD_SSH_ENABLE_TIMEOUT This option no longer has any effect. --overcloud-ssh-port-timeout OVERCLOUD_SSH_PORT_TIMEOUT Timeout for the ssh port to become active. --environment-file <HEAT ENVIRONMENT FILE>, -e <HEAT ENVIRONMENT FILE> Environment files to be passed to the heat stack- create or heat stack-update command. (Can be specified more than once.) --environment-directory <HEAT ENVIRONMENT DIRECTORY> Environment file directories that are automatically added to the heat stack-create or heat stack-update commands. Can be specified more than once. Files in directories are loaded in ascending sort order. --roles-file ROLES_FILE, -r ROLES_FILE Roles file, overrides the default roles_data.yaml in the --templates directory. May be an absolute path or the path relative to --templates --networks-file NETWORKS_FILE, -n NETWORKS_FILE Networks file, overrides the default network_data_default.yaml in the --templates directory --vip-file VIP_FILE Configuration file describing the network virtual ips. --no-cleanup Don't cleanup temporary files, just log their location --update-plan-only Deprecated: only update the plan. do not perform the actual deployment. NOTE: Will move to a discrete command in a future release. Not supported anymore. --validation-errors-nonfatal Allow the deployment to continue in spite of validation errors. Note that attempting deployment while errors exist is likely to fail. --validation-warnings-fatal Exit if there are warnings from the configuration pre- checks. --disable-validations Deprecated. disable the pre-deployment validations entirely. These validations are the built-in pre- deployment validations. To enable external validations from tripleo-validations, use the --run-validations flag. 
These validations are now run via the external validations in tripleo-validations. --inflight-validations Activate in-flight validations during the deploy. in- flight validations provide a robust way to ensure deployed services are running right after their activation. Defaults to False. --dry-run Only run validations, but do not apply any changes. --run-validations Run external validations from the tripleo-validations project. --skip-postconfig Skip the overcloud post-deployment configuration. --force-postconfig Force the overcloud post-deployment configuration. --skip-deploy-identifier Skip generation of a unique identifier for the DeployIdentifier parameter. The software configuration deployment steps will only be triggered if there is an actual change to the configuration. This option should be used with Caution, and only if there is confidence that the software configuration does not need to be run, such as when scaling out certain roles. --answers-file ANSWERS_FILE Path to a yaml file with arguments and parameters. --disable-password-generation Disable password generation. --deployed-server Deprecated: use pre-provisioned overcloud nodes.now the default and this CLI option has no effect. --provision-nodes Provision overcloud nodes with heat. --config-download Deprecated: run deployment via config-download mechanism. This is now the default, and this CLI options has no effect. --no-config-download, --stack-only Disable the config-download workflow and only create the stack and download the config. No software configuration, setup, or any changes will be applied to overcloud nodes. --config-download-only Disable the stack create and setup, and only run the config-download workflow to apply the software configuration. Requires that config-download setup was previously completed, either with --stack-only and --setup-only or a full deployment --setup-only Disable the stack and config-download workflow to apply the software configuration and only run the setup to enable ssh connectivity. --config-dir CONFIG_DIR The directory where the configuration files will be pushed --config-type CONFIG_TYPE Only used when "--setup-only" is invoked. type of object config to be extract from the deployment, defaults to all keys available --no-preserve-config Only used when "--setup-only" is invoked. if specified, will delete and recreate the --config-dir if it already exists. Default is to use the existing dir location and overwrite files. Files in --config- dir not from the stack will be preserved by default. --output-dir OUTPUT_DIR Directory to use for saved output when using --config- download. When not specified, <working-dir>/config- download will be used. --override-ansible-cfg OVERRIDE_ANSIBLE_CFG Path to ansible configuration file. the configuration in the file will override any configuration used by config-download by default. --config-download-timeout CONFIG_DOWNLOAD_TIMEOUT Timeout (in minutes) to use for config-download steps. If unset, will default to however much time is leftover from the --timeout parameter after the stack operation. --deployment-python-interpreter DEPLOYMENT_PYTHON_INTERPRETER The path to python interpreter to use for the deployment actions. This may need to be used if deploying on a python2 host from a python3 system or vice versa. -b [<baremetal_deployment.yaml>], --baremetal-deployment [<baremetal_deployment.yaml>] Deploy baremetal nodes, network and virtual ip addresses as defined in baremetal_deployment.yaml along with overcloud. 
If no baremetal_deployment YAML file is given, the tripleo-<stack_name>-baremetal- deployment.yaml file in the working-dir will be used. --network-config Apply network config to provisioned nodes. (implies " --network-ports") --limit LIMIT A string that identifies a single node or comma- separatedlist of nodes the config-download Ansible playbook execution will be limited to. For example: --limit "compute-0,compute-1,compute-5". --tags TAGS A list of tags to use when running the config- download ansible-playbook command. --skip-tags SKIP_TAGS A list of tags to skip when running the config- download ansible-playbook command. --ansible-forks ANSIBLE_FORKS The number of ansible forks to use for the config- download ansible-playbook command. --disable-container-prepare Disable the container preparation actions to prevent container tags from being updated and new containers from being fetched. If you skip this but do not have the container parameters configured, the deployment action may fail. --working-dir WORKING_DIR The working directory for the deployment where all input, output, and generated files will be stored. Defaults to "USDHOME/overcloud-deploy/<stack>" --heat-type {pod,container,native} The type of heat process to use to execute the deployment. pod (Default): Use an ephemeral Heat pod. container (Experimental): Use an ephemeral Heat container. native (Experimental): Use an ephemeral Heat process. --heat-container-api-image <HEAT_CONTAINER_API_IMAGE> The container image to use when launching the heat-api process. Only used when --heat-type=pod. Defaults to: localhost/tripleo/openstack-heat-api:ephemeral --heat-container-engine-image <HEAT_CONTAINER_ENGINE_IMAGE> The container image to use when launching the heat- engine process. Only used when --heat-type=pod. Defaults to: localhost/tripleo/openstack-heat- engine:ephemeral --rm-heat If specified and --heat-type is container or pod any existing container or pod of a ephemeral Heat process will be deleted first. Ignored if --heat-type is native. --skip-heat-pull When --heat-type is pod or container, assume the container image has already been pulled --disable-protected-resource-types Disable protected resource type overrides. resources types that are used internally are protected, and cannot be overridden in the user environment. Setting this argument disables the protection, allowing the protected resource types to be override in the user environment. -y, --yes Use -y or --yes to skip any confirmation required before the deploy operation. Use this with caution! --allow-deprecated-network-data Set this to allow using deprecated network data yaml definition schema. 57.51. overcloud update run Run minor update ansible playbooks on Overcloud nodes Usage: Table 57.81. Command arguments Value Summary -h, --help Show this help message and exit --limit LIMIT A string that identifies a single node or comma- separated list of nodes the config-download Ansible playbook execution will be limited to. For example: --limit "compute-0,compute-1,compute-5". When DeploymentServerBlacklist is defined, excluded_overcloud group is added at the end of the string and nodes from the group will be skipped during the execution. --playbook [PLAYBOOK ... ] Ansible playbook to use for the minor update. can be used multiple times. Set this to each of those playbooks in consecutive invocations of this command if you prefer to run them manually. Note: make sure to run all playbooks so that all services are updated and running with the target version configuration. 
--ssh-user SSH_USER Deprecated: only tripleo-admin should be used as ssh user. --static-inventory STATIC_INVENTORY Deprecated: tripleo-ansible-inventory.yaml in working dir will be used. --stack STACK Name or id of heat stack (default=env: OVERCLOUD_STACK_NAME) --tags TAGS A list of tags to use when running the config- download ansible-playbook command. --skip-tags SKIP_TAGS A list of tags to skip when running the config- download ansible-playbook command. -y, --yes Use -y or --yes to skip the confirmation required before any update operation. Use this with caution! --ansible-forks ANSIBLE_FORKS The number of ansible forks to use for the config- download ansible-playbook command. 57.52. overcloud upgrade converge Major upgrade converge - reset Heat resources in the stored plan This is the last step for completion of a overcloud major upgrade. The main task is updating the plan and stack to unblock future stack updates. For the major upgrade workflow we have set specific values for some stack Heat resources. This unsets those back to their default values. Usage: Table 57.82. Command arguments Value Summary --templates [TEMPLATES] The directory containing the heat templates to deploy --stack STACK Stack name to create or update --timeout <TIMEOUT>, -t <TIMEOUT> Deployment timeout in minutes. --libvirt-type {kvm,qemu} Libvirt domain type. --ntp-server NTP_SERVER The ntp for overcloud nodes. --no-proxy NO_PROXY A comma separated list of hosts that should not be proxied. --overcloud-ssh-user OVERCLOUD_SSH_USER User for ssh access to overcloud nodes --overcloud-ssh-key OVERCLOUD_SSH_KEY Key path for ssh access to overcloud nodes. Whenundefined the key will be autodetected. --overcloud-ssh-network OVERCLOUD_SSH_NETWORK Network name to use for ssh access to overcloud nodes. --overcloud-ssh-enable-timeout OVERCLOUD_SSH_ENABLE_TIMEOUT This option no longer has any effect. --overcloud-ssh-port-timeout OVERCLOUD_SSH_PORT_TIMEOUT Timeout for the ssh port to become active. --environment-file <HEAT ENVIRONMENT FILE>, -e <HEAT ENVIRONMENT FILE> Environment files to be passed to the heat stack- create or heat stack-update command. (Can be specified more than once.) --environment-directory <HEAT ENVIRONMENT DIRECTORY> Environment file directories that are automatically added to the heat stack-create or heat stack-update commands. Can be specified more than once. Files in directories are loaded in ascending sort order. --roles-file ROLES_FILE, -r ROLES_FILE Roles file, overrides the default roles_data.yaml in the --templates directory. May be an absolute path or the path relative to --templates --networks-file NETWORKS_FILE, -n NETWORKS_FILE Networks file, overrides the default network_data_default.yaml in the --templates directory --vip-file VIP_FILE Configuration file describing the network virtual ips. --no-cleanup Don't cleanup temporary files, just log their location --update-plan-only Deprecated: only update the plan. do not perform the actual deployment. NOTE: Will move to a discrete command in a future release. Not supported anymore. --validation-errors-nonfatal Allow the deployment to continue in spite of validation errors. Note that attempting deployment while errors exist is likely to fail. --validation-warnings-fatal Exit if there are warnings from the configuration pre- checks. --disable-validations Deprecated. disable the pre-deployment validations entirely. These validations are the built-in pre- deployment validations. 
To enable external validations from tripleo-validations, use the --run-validations flag. These validations are now run via the external validations in tripleo-validations. --inflight-validations Activate in-flight validations during the deploy. in- flight validations provide a robust way to ensure deployed services are running right after their activation. Defaults to False. --dry-run Only run validations, but do not apply any changes. --run-validations Run external validations from the tripleo-validations project. --skip-postconfig Skip the overcloud post-deployment configuration. --force-postconfig Force the overcloud post-deployment configuration. --skip-deploy-identifier Skip generation of a unique identifier for the DeployIdentifier parameter. The software configuration deployment steps will only be triggered if there is an actual change to the configuration. This option should be used with Caution, and only if there is confidence that the software configuration does not need to be run, such as when scaling out certain roles. --answers-file ANSWERS_FILE Path to a yaml file with arguments and parameters. --disable-password-generation Disable password generation. --deployed-server Deprecated: use pre-provisioned overcloud nodes.now the default and this CLI option has no effect. --provision-nodes Provision overcloud nodes with heat. --config-download Deprecated: run deployment via config-download mechanism. This is now the default, and this CLI options has no effect. --no-config-download, --stack-only Disable the config-download workflow and only create the stack and download the config. No software configuration, setup, or any changes will be applied to overcloud nodes. --config-download-only Disable the stack create and setup, and only run the config-download workflow to apply the software configuration. Requires that config-download setup was previously completed, either with --stack-only and --setup-only or a full deployment --setup-only Disable the stack and config-download workflow to apply the software configuration and only run the setup to enable ssh connectivity. --config-dir CONFIG_DIR The directory where the configuration files will be pushed --config-type CONFIG_TYPE Only used when "--setup-only" is invoked. type of object config to be extract from the deployment, defaults to all keys available --no-preserve-config Only used when "--setup-only" is invoked. if specified, will delete and recreate the --config-dir if it already exists. Default is to use the existing dir location and overwrite files. Files in --config- dir not from the stack will be preserved by default. --output-dir OUTPUT_DIR Directory to use for saved output when using --config- download. When not specified, <working-dir>/config- download will be used. --override-ansible-cfg OVERRIDE_ANSIBLE_CFG Path to ansible configuration file. the configuration in the file will override any configuration used by config-download by default. --config-download-timeout CONFIG_DOWNLOAD_TIMEOUT Timeout (in minutes) to use for config-download steps. If unset, will default to however much time is leftover from the --timeout parameter after the stack operation. --deployment-python-interpreter DEPLOYMENT_PYTHON_INTERPRETER The path to python interpreter to use for the deployment actions. This may need to be used if deploying on a python2 host from a python3 system or vice versa. 
-b [<baremetal_deployment.yaml>], --baremetal-deployment [<baremetal_deployment.yaml>] Deploy baremetal nodes, network and virtual ip addresses as defined in baremetal_deployment.yaml along with overcloud. If no baremetal_deployment YAML file is given, the tripleo-<stack_name>-baremetal- deployment.yaml file in the working-dir will be used. --network-config Apply network config to provisioned nodes. (implies " --network-ports") --limit LIMIT A string that identifies a single node or comma- separatedlist of nodes the config-download Ansible playbook execution will be limited to. For example: --limit "compute-0,compute-1,compute-5". --tags TAGS A list of tags to use when running the config- download ansible-playbook command. --skip-tags SKIP_TAGS A list of tags to skip when running the config- download ansible-playbook command. --ansible-forks ANSIBLE_FORKS The number of ansible forks to use for the config- download ansible-playbook command. --disable-container-prepare Disable the container preparation actions to prevent container tags from being updated and new containers from being fetched. If you skip this but do not have the container parameters configured, the deployment action may fail. --working-dir WORKING_DIR The working directory for the deployment where all input, output, and generated files will be stored. Defaults to "USDHOME/overcloud-deploy/<stack>" --heat-type {pod,container,native} The type of heat process to use to execute the deployment. pod (Default): Use an ephemeral Heat pod. container (Experimental): Use an ephemeral Heat container. native (Experimental): Use an ephemeral Heat process. --heat-container-api-image <HEAT_CONTAINER_API_IMAGE> The container image to use when launching the heat-api process. Only used when --heat-type=pod. Defaults to: localhost/tripleo/openstack-heat-api:ephemeral --heat-container-engine-image <HEAT_CONTAINER_ENGINE_IMAGE> The container image to use when launching the heat- engine process. Only used when --heat-type=pod. Defaults to: localhost/tripleo/openstack-heat- engine:ephemeral --rm-heat If specified and --heat-type is container or pod any existing container or pod of a ephemeral Heat process will be deleted first. Ignored if --heat-type is native. --skip-heat-pull When --heat-type is pod or container, assume the container image has already been pulled --disable-protected-resource-types Disable protected resource type overrides. resources types that are used internally are protected, and cannot be overridden in the user environment. Setting this argument disables the protection, allowing the protected resource types to be override in the user environment. -y, --yes Use -y or --yes to skip any confirmation required before the deploy operation. Use this with caution! --allow-deprecated-network-data Set this to allow using deprecated network data yaml definition schema. 57.53. overcloud upgrade prepare Run heat stack update for overcloud nodes to refresh heat stack outputs. The heat stack outputs are what we use later on to generate ansible playbooks which deliver the major upgrade workflow. This is used as the first step for a major upgrade of your overcloud. Usage: Table 57.83. Command arguments Value Summary --templates [TEMPLATES] The directory containing the heat templates to deploy --stack STACK Stack name to create or update --timeout <TIMEOUT>, -t <TIMEOUT> Deployment timeout in minutes. --libvirt-type {kvm,qemu} Libvirt domain type. --ntp-server NTP_SERVER The ntp for overcloud nodes. 
--no-proxy NO_PROXY A comma separated list of hosts that should not be proxied. --overcloud-ssh-user OVERCLOUD_SSH_USER User for ssh access to overcloud nodes --overcloud-ssh-key OVERCLOUD_SSH_KEY Key path for ssh access to overcloud nodes. Whenundefined the key will be autodetected. --overcloud-ssh-network OVERCLOUD_SSH_NETWORK Network name to use for ssh access to overcloud nodes. --overcloud-ssh-enable-timeout OVERCLOUD_SSH_ENABLE_TIMEOUT This option no longer has any effect. --overcloud-ssh-port-timeout OVERCLOUD_SSH_PORT_TIMEOUT Timeout for the ssh port to become active. --environment-file <HEAT ENVIRONMENT FILE>, -e <HEAT ENVIRONMENT FILE> Environment files to be passed to the heat stack- create or heat stack-update command. (Can be specified more than once.) --environment-directory <HEAT ENVIRONMENT DIRECTORY> Environment file directories that are automatically added to the heat stack-create or heat stack-update commands. Can be specified more than once. Files in directories are loaded in ascending sort order. --roles-file ROLES_FILE, -r ROLES_FILE Roles file, overrides the default roles_data.yaml in the --templates directory. May be an absolute path or the path relative to --templates --networks-file NETWORKS_FILE, -n NETWORKS_FILE Networks file, overrides the default network_data_default.yaml in the --templates directory --vip-file VIP_FILE Configuration file describing the network virtual ips. --no-cleanup Don't cleanup temporary files, just log their location --update-plan-only Deprecated: only update the plan. do not perform the actual deployment. NOTE: Will move to a discrete command in a future release. Not supported anymore. --validation-errors-nonfatal Allow the deployment to continue in spite of validation errors. Note that attempting deployment while errors exist is likely to fail. --validation-warnings-fatal Exit if there are warnings from the configuration pre- checks. --disable-validations Deprecated. disable the pre-deployment validations entirely. These validations are the built-in pre- deployment validations. To enable external validations from tripleo-validations, use the --run-validations flag. These validations are now run via the external validations in tripleo-validations. --inflight-validations Activate in-flight validations during the deploy. in- flight validations provide a robust way to ensure deployed services are running right after their activation. Defaults to False. --dry-run Only run validations, but do not apply any changes. --run-validations Run external validations from the tripleo-validations project. --skip-postconfig Skip the overcloud post-deployment configuration. --force-postconfig Force the overcloud post-deployment configuration. --skip-deploy-identifier Skip generation of a unique identifier for the DeployIdentifier parameter. The software configuration deployment steps will only be triggered if there is an actual change to the configuration. This option should be used with Caution, and only if there is confidence that the software configuration does not need to be run, such as when scaling out certain roles. --answers-file ANSWERS_FILE Path to a yaml file with arguments and parameters. --disable-password-generation Disable password generation. --deployed-server Deprecated: use pre-provisioned overcloud nodes.now the default and this CLI option has no effect. --provision-nodes Provision overcloud nodes with heat. --config-download Deprecated: run deployment via config-download mechanism. 
This is now the default, and this CLI options has no effect. --no-config-download, --stack-only Disable the config-download workflow and only create the stack and download the config. No software configuration, setup, or any changes will be applied to overcloud nodes. --config-download-only Disable the stack create and setup, and only run the config-download workflow to apply the software configuration. Requires that config-download setup was previously completed, either with --stack-only and --setup-only or a full deployment --setup-only Disable the stack and config-download workflow to apply the software configuration and only run the setup to enable ssh connectivity. --config-dir CONFIG_DIR The directory where the configuration files will be pushed --config-type CONFIG_TYPE Only used when "--setup-only" is invoked. type of object config to be extract from the deployment, defaults to all keys available --no-preserve-config Only used when "--setup-only" is invoked. if specified, will delete and recreate the --config-dir if it already exists. Default is to use the existing dir location and overwrite files. Files in --config- dir not from the stack will be preserved by default. --output-dir OUTPUT_DIR Directory to use for saved output when using --config- download. When not specified, <working-dir>/config- download will be used. --override-ansible-cfg OVERRIDE_ANSIBLE_CFG Path to ansible configuration file. the configuration in the file will override any configuration used by config-download by default. --config-download-timeout CONFIG_DOWNLOAD_TIMEOUT Timeout (in minutes) to use for config-download steps. If unset, will default to however much time is leftover from the --timeout parameter after the stack operation. --deployment-python-interpreter DEPLOYMENT_PYTHON_INTERPRETER The path to python interpreter to use for the deployment actions. This may need to be used if deploying on a python2 host from a python3 system or vice versa. -b [<baremetal_deployment.yaml>], --baremetal-deployment [<baremetal_deployment.yaml>] Deploy baremetal nodes, network and virtual ip addresses as defined in baremetal_deployment.yaml along with overcloud. If no baremetal_deployment YAML file is given, the tripleo-<stack_name>-baremetal- deployment.yaml file in the working-dir will be used. --network-config Apply network config to provisioned nodes. (implies " --network-ports") --limit LIMIT A string that identifies a single node or comma- separatedlist of nodes the config-download Ansible playbook execution will be limited to. For example: --limit "compute-0,compute-1,compute-5". --tags TAGS A list of tags to use when running the config- download ansible-playbook command. --skip-tags SKIP_TAGS A list of tags to skip when running the config- download ansible-playbook command. --ansible-forks ANSIBLE_FORKS The number of ansible forks to use for the config- download ansible-playbook command. --disable-container-prepare Disable the container preparation actions to prevent container tags from being updated and new containers from being fetched. If you skip this but do not have the container parameters configured, the deployment action may fail. --working-dir WORKING_DIR The working directory for the deployment where all input, output, and generated files will be stored. Defaults to "USDHOME/overcloud-deploy/<stack>" --heat-type {pod,container,native} The type of heat process to use to execute the deployment. pod (Default): Use an ephemeral Heat pod. container (Experimental): Use an ephemeral Heat container. 
native (Experimental): Use an ephemeral Heat process. --heat-container-api-image <HEAT_CONTAINER_API_IMAGE> The container image to use when launching the heat-api process. Only used when --heat-type=pod. Defaults to: localhost/tripleo/openstack-heat-api:ephemeral --heat-container-engine-image <HEAT_CONTAINER_ENGINE_IMAGE> The container image to use when launching the heat- engine process. Only used when --heat-type=pod. Defaults to: localhost/tripleo/openstack-heat- engine:ephemeral --rm-heat If specified and --heat-type is container or pod any existing container or pod of a ephemeral Heat process will be deleted first. Ignored if --heat-type is native. --skip-heat-pull When --heat-type is pod or container, assume the container image has already been pulled --disable-protected-resource-types Disable protected resource type overrides. resources types that are used internally are protected, and cannot be overridden in the user environment. Setting this argument disables the protection, allowing the protected resource types to be override in the user environment. -y, --yes Use -y or --yes to skip any confirmation required before the deploy operation. Use this with caution! --allow-deprecated-network-data Set this to allow using deprecated network data yaml definition schema. 57.54. overcloud upgrade run Run major upgrade ansible playbooks on Overcloud nodes This will run the major upgrade ansible playbooks on the overcloud. By default all playbooks are executed, that is the upgrade_steps_playbook.yaml then the deploy_steps_playbook.yaml and then the post_upgrade_steps_playbook.yaml. The upgrade playbooks are made available after completion of the overcloud upgrade prepare command. This overcloud upgrade run command is the second step in the major upgrade workflow. Usage: Table 57.84. Command arguments Value Summary -h, --help Show this help message and exit --limit LIMIT A string that identifies a single node or comma- separatedlist of nodes the config-download Ansible playbook execution will be limited to. For example: --limit "compute-0,compute-1,compute-5". --playbook [PLAYBOOK ... ] Ansible playbook to use for the minor update. can be used multiple times. Set this to each of those playbooks in consecutive invocations of this command if you prefer to run them manually. Note: make sure to run all playbooks so that all services are updated and running with the target version configuration. --static-inventory STATIC_INVENTORY Deprecated: tripleo-ansible-inventory.yaml in working dir will be used. --ssh-user SSH_USER Deprecated: only tripleo-admin should be used as ssh user. --tags TAGS A string specifying the tag or comma separated list of tags to be passed as --tags to ansible-playbook. --skip-tags SKIP_TAGS A string specifying the tag or comma separated list of tags to be passed as --skip-tags to ansible-playbook. The currently supported values are validation and pre-upgrade . In particular validation is useful if you must re-run following a failed upgrade and some services cannot be started. --stack STACK Name or id of heat stack (default=env: OVERCLOUD_STACK_NAME) -y, --yes Use -y or --yes to skip the confirmation required before any upgrade operation. Use this with caution! --ansible-forks ANSIBLE_FORKS The number of ansible forks to use for the config- download ansible-playbook command.
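For illustration only, the ordering described above for the minor update and major upgrade workflows can be summarized as the following command sequences. The stack name overcloud, the environment file my-environment.yaml, and the --limit node list are placeholder values for this sketch, and the external-update and external-upgrade playbooks are only needed if your deployment uses them.
Minor update (prepare renders the playbooks, then run applies them):
openstack overcloud update prepare --stack overcloud -e my-environment.yaml
openstack overcloud external-update run --stack overcloud
openstack overcloud update run --stack overcloud --limit "compute-0,compute-1,compute-5"
Major upgrade (upgrade run is the second step; converge is the last step):
openstack overcloud upgrade prepare --stack overcloud -e my-environment.yaml
openstack overcloud upgrade run --stack overcloud --limit "compute-0,compute-1,compute-5"
openstack overcloud external-upgrade run --stack overcloud
openstack overcloud upgrade converge --stack overcloud -e my-environment.yaml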
[ "openstack overcloud admin authorize [-h] [--stack STACK] [--overcloud-ssh-user OVERCLOUD_SSH_USER] [--overcloud-ssh-key OVERCLOUD_SSH_KEY] [--overcloud-ssh-network OVERCLOUD_SSH_NETWORK] [--overcloud-ssh-enable-timeout OVERCLOUD_SSH_ENABLE_TIMEOUT] [--overcloud-ssh-port-timeout OVERCLOUD_SSH_PORT_TIMEOUT] [--static-inventory STATIC_INVENTORY] [--limit LIMIT_HOSTS]", "openstack overcloud backup snapshot [--inventory INVENTORY] [--remove] [--revert] [--extra-vars EXTRA_VARS]", "openstack overcloud backup [--init [INIT]] [--setup-nfs] [--setup-rear] [--setup-ironic] [--cron] [--inventory INVENTORY] [--storage-ip STORAGE_IP] [--extra-vars EXTRA_VARS]", "openstack overcloud cell export [-h] [--control-plane-stack <control plane stack>] [--cell-stack <cell stack>] [--output-file <output file>] [--working-dir WORKING_DIR] [--config-download-dir CONFIG_DOWNLOAD_DIR] [--force-overwrite]", "openstack overcloud ceph deploy [-h] -o <deployed_ceph.yaml> [-y] [--skip-user-create] [--skip-hosts-config] [--skip-container-registry-config] [--skip-ntp] [--cephadm-ssh-user CEPHADM_SSH_USER] [--stack STACK] [--working-dir WORKING_DIR] [--roles-data ROLES_DATA] [--network-data NETWORK_DATA] [--public-network-name PUBLIC_NETWORK_NAME] [--cluster-network-name CLUSTER_NETWORK_NAME] [--cluster CLUSTER] [--mon-ip MON_IP] [--config CONFIG] [--cephadm-extra-args CEPHADM_EXTRA_ARGS] [--force] [--ansible-extra-vars ANSIBLE_EXTRA_VARS] [--ceph-client-username CEPH_CLIENT_USERNAME] [--ceph-client-key CEPH_CLIENT_KEY] [--skip-cephx-keys] [--single-host-defaults] [--ntp-server NTP_SERVER | --ntp-heat-env-file NTP_HEAT_ENV_FILE] [--ceph-spec CEPH_SPEC | --osd-spec OSD_SPEC] [--crush-hierarchy CRUSH_HIERARCHY] [--tld TLD] [--standalone] [--container-image-prepare CONTAINER_IMAGE_PREPARE] [--cephadm-default-container] [--container-namespace CONTAINER_NAMESPACE] [--container-image CONTAINER_IMAGE] [--container-tag CONTAINER_TAG] [--registry-url REGISTRY_URL] [--registry-username REGISTRY_USERNAME] [--registry-password REGISTRY_PASSWORD] [<deployed_baremetal.yaml>]", "openstack overcloud ceph spec [-h] -o <ceph_spec.yaml> [-y] [--stack STACK] [--working-dir WORKING_DIR] [--roles-data ROLES_DATA] [--mon-ip MON_IP] [--standalone] [--tld TLD] [--osd-spec OSD_SPEC | --crush-hierarchy CRUSH_HIERARCHY] [<deployed_baremetal.yaml>]", "openstack overcloud ceph user disable [-h] [-y] [--cephadm-ssh-user CEPHADM_SSH_USER] [--stack STACK] [--working-dir WORKING_DIR] --fsid <FSID> [--standalone] <ceph_spec.yaml>", "openstack overcloud ceph user enable [-h] [--fsid <FSID>] [--standalone] [--cephadm-ssh-user CEPHADM_SSH_USER] [--stack STACK] [--working-dir WORKING_DIR] <ceph_spec.yaml>", "openstack overcloud container image build [-h] [--config-file <yaml config file>] --kolla-config-file <config file> [--list-images] [--list-dependencies] [--exclude <container-name>] [--use-buildah] [--work-dir <container builds directory>] [--build-timeout <build timeout in seconds>]", "openstack overcloud container image prepare [-h] [--template-file <yaml template file>] [--push-destination <location>] [--tag <tag>] [--tag-from-label <image label>] [--namespace <namespace>] [--prefix <prefix>] [--suffix <suffix>] [--set <variable=value>] [--exclude <regex>] [--include <regex>] [--output-images-file <file path>] [--environment-file <file path>] [--environment-directory <HEAT ENVIRONMENT DIRECTORY>] [--output-env-file <file path>] [--roles-file ROLES_FILE] [--modify-role MODIFY_ROLE] [--modify-vars MODIFY_VARS]", "openstack overcloud container image tag 
discover [-h] --image <container image> [--tag-from-label <image label>]", "openstack overcloud container image upload [-h] --config-file <yaml config file> [--cleanup <full, partial, none>]", "openstack overcloud credentials [-h] [--directory [DIRECTORY]] [--working-dir WORKING_DIR] stack", "openstack overcloud delete [-h] [-y] [-s] [-b <baremetal_deployment.yaml>] [--networks-file <network_data.yaml>] [--network-ports] [--heat-type {installed,pod,container,native}] [stack]", "openstack overcloud deploy [--templates [TEMPLATES]] [--stack STACK] [--timeout <TIMEOUT>] [--libvirt-type {kvm,qemu}] [--ntp-server NTP_SERVER] [--no-proxy NO_PROXY] [--overcloud-ssh-user OVERCLOUD_SSH_USER] [--overcloud-ssh-key OVERCLOUD_SSH_KEY] [--overcloud-ssh-network OVERCLOUD_SSH_NETWORK] [--overcloud-ssh-enable-timeout OVERCLOUD_SSH_ENABLE_TIMEOUT] [--overcloud-ssh-port-timeout OVERCLOUD_SSH_PORT_TIMEOUT] [--environment-file <HEAT ENVIRONMENT FILE>] [--environment-directory <HEAT ENVIRONMENT DIRECTORY>] [--roles-file ROLES_FILE] [--networks-file NETWORKS_FILE] [--vip-file VIP_FILE] [--no-cleanup] [--update-plan-only] [--validation-errors-nonfatal] [--validation-warnings-fatal] [--disable-validations] [--inflight-validations] [--dry-run] [--run-validations] [--skip-postconfig] [--force-postconfig] [--skip-deploy-identifier] [--answers-file ANSWERS_FILE] [--disable-password-generation] [--deployed-server] [--provision-nodes] [--config-download] [--no-config-download] [--config-download-only] [--setup-only] [--config-dir CONFIG_DIR] [--config-type CONFIG_TYPE] [--no-preserve-config] [--output-dir OUTPUT_DIR] [--override-ansible-cfg OVERRIDE_ANSIBLE_CFG] [--config-download-timeout CONFIG_DOWNLOAD_TIMEOUT] [--deployment-python-interpreter DEPLOYMENT_PYTHON_INTERPRETER] [-b [<baremetal_deployment.yaml>]] [--network-config] [--limit LIMIT] [--tags TAGS] [--skip-tags SKIP_TAGS] [--ansible-forks ANSIBLE_FORKS] [--disable-container-prepare] [--working-dir WORKING_DIR] [--heat-type {pod,container,native}] [--heat-container-api-image <HEAT_CONTAINER_API_IMAGE>] [--heat-container-engine-image <HEAT_CONTAINER_ENGINE_IMAGE>] [--rm-heat] [--skip-heat-pull] [--disable-protected-resource-types] [-y] [--allow-deprecated-network-data]", "openstack overcloud export ceph [-h] [--stack <stack>] [--cephx-key-client-name <cephx>] [--output-file <output file>] [--force-overwrite] [--config-download-dir CONFIG_DOWNLOAD_DIR]", "openstack overcloud export [-h] [--stack <stack>] [--output-file <output file>] [--force-overwrite] [--working-dir WORKING_DIR] [--config-download-dir CONFIG_DOWNLOAD_DIR] [--no-password-excludes]", "openstack overcloud external-update run [-h] [--static-inventory STATIC_INVENTORY] [--ssh-user SSH_USER] [--tags TAGS] [--skip-tags SKIP_TAGS] [--stack STACK] [-e EXTRA_VARS] [-y] [--limit LIMIT] [--ansible-forks ANSIBLE_FORKS] [--refresh]", "openstack overcloud external-upgrade run [-h] [--static-inventory STATIC_INVENTORY] [--ssh-user SSH_USER] [--tags TAGS] [--skip-tags SKIP_TAGS] [--stack STACK] [-e EXTRA_VARS] [-y] [--limit LIMIT] [--ansible-forks ANSIBLE_FORKS]", "openstack overcloud generate fencing [-h] [-a FENCE_ACTION] [--delay DELAY] [--ipmi-lanplus] [--ipmi-no-lanplus] [--ipmi-cipher IPMI_CIPHER] [--ipmi-level IPMI_LEVEL] [--output OUTPUT] instackenv", "openstack overcloud image build [-h] [--config-file <yaml config file>] [--image-name <image name>] [--no-skip] [--output-directory OUTPUT_DIRECTORY] [--temp-dir TEMP_DIR]", "openstack overcloud image upload [-h] [--image-path IMAGE_PATH] [--os-image-name 
OS_IMAGE_NAME] [--ironic-python-agent-name IPA_NAME] [--http-boot HTTP_BOOT] [--update-existing] [--whole-disk] [--architecture ARCHITECTURE] [--platform PLATFORM] [--image-type {os,ironic-python-agent}] [--progress] [--local] [--no-local] [--local-path LOCAL_PATH]", "openstack overcloud netenv validate [-h] [-f NETENV]", "openstack overcloud network extract [-h] --stack STACK -o <network_deployment.yaml> [-y]", "openstack overcloud network provision [-h] -o <network_environment.yaml> [-y] [--templates TEMPLATES] [--stack STACK] [--working-dir WORKING_DIR] <network_data.yaml>", "openstack overcloud network unprovision [-h] [-y] <network_data.yaml>", "openstack overcloud network vip extract [-h] --stack STACK -o <vip_data.yaml> [-y]", "openstack overcloud network vip provision [-h] --stack STACK -o <vip_environment.yaml> [-y] [--templates TEMPLATES] [--working-dir WORKING_DIR] <vip_data.yaml>", "openstack overcloud node bios configure [-h] [--all-manageable] [--configuration <configuration>] [<node_uuid> ...]", "openstack overcloud node bios reset [-h] [--all-manageable] [<node_uuid> ...]", "openstack overcloud node clean [-h] [--all-manageable] [--provide] [<node_uuid> ...]", "openstack overcloud node configure [-h] [--all-manageable] [--deploy-kernel DEPLOY_KERNEL] [--deploy-ramdisk DEPLOY_RAMDISK] [--instance-boot-option {local,netboot}] [--boot-mode {uefi,bios}] [--root-device ROOT_DEVICE] [--root-device-minimum-size ROOT_DEVICE_MINIMUM_SIZE] [--overwrite-root-device-hints] [<node_uuid> ...]", "openstack overcloud node delete [-h] [-b <BAREMETAL DEPLOYMENT FILE>] [--stack STACK] [--timeout <TIMEOUT>] [--overcloud-ssh-port-timeout OVERCLOUD_SSH_PORT_TIMEOUT] [-y] [<node> ...]", "openstack overcloud node discover [-h] (--ip <ips> | --range <range>) --credentials <key:value> [--port <ports>] [--introspect] [--run-validations] [--provide] [--no-deploy-image] [--instance-boot-option {local,netboot}] [--concurrency CONCURRENCY] [--node-timeout NODE_TIMEOUT] [--max-retries MAX_RETRIES] [--retry-timeout RETRY_TIMEOUT]", "openstack overcloud node extract provisioned [-h] [--stack STACK] [-o <baremetal_deployment.yaml>] [-y] [--roles-file ROLES_FILE] [--networks-file NETWORKS_FILE] [--working-dir WORKING_DIR]", "openstack overcloud node import [-h] [--introspect] [--run-validations] [--validate-only] [--provide] [--no-deploy-image] [--instance-boot-option {local,netboot}] [--boot-mode {uefi,bios}] [--http-boot HTTP_BOOT] [--concurrency CONCURRENCY] env_file", "openstack overcloud node introspect [-h] [--all-manageable] [--provide] [--run-validations] [--concurrency CONCURRENCY] [--node-timeout NODE_TIMEOUT] [--max-retries MAX_RETRIES] [--retry-timeout RETRY_TIMEOUT] [<node_uuid> ...]", "openstack overcloud node provide [-h] [--all-manageable] [<node_uuid> ...]", "openstack overcloud node provision [-h] [-o OUTPUT] [-y] [--stack STACK] [--overcloud-ssh-user OVERCLOUD_SSH_USER] [--overcloud-ssh-key OVERCLOUD_SSH_KEY] [--concurrency CONCURRENCY] [--timeout TIMEOUT] [--network-ports] [--network-config] [--templates TEMPLATES] [--working-dir WORKING_DIR] <baremetal_deployment.yaml>", "openstack overcloud node unprovision [-h] [--stack STACK] [--all] [-y] [--network-ports] <baremetal_deployment.yaml>", "openstack overcloud profiles list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--all] [--control-scale CONTROL_SCALE] 
[--compute-scale COMPUTE_SCALE] [--ceph-storage-scale CEPH_STORAGE_SCALE] [--block-storage-scale BLOCK_STORAGE_SCALE] [--swift-storage-scale SWIFT_STORAGE_SCALE] [--control-flavor CONTROL_FLAVOR] [--compute-flavor COMPUTE_FLAVOR] [--ceph-storage-flavor CEPH_STORAGE_FLAVOR] [--block-storage-flavor BLOCK_STORAGE_FLAVOR] [--swift-storage-flavor SWIFT_STORAGE_FLAVOR]", "openstack overcloud profiles match [-h] [--dry-run] [--control-scale CONTROL_SCALE] [--compute-scale COMPUTE_SCALE] [--ceph-storage-scale CEPH_STORAGE_SCALE] [--block-storage-scale BLOCK_STORAGE_SCALE] [--swift-storage-scale SWIFT_STORAGE_SCALE] [--control-flavor CONTROL_FLAVOR] [--compute-flavor COMPUTE_FLAVOR] [--ceph-storage-flavor CEPH_STORAGE_FLAVOR] [--block-storage-flavor BLOCK_STORAGE_FLAVOR] [--swift-storage-flavor SWIFT_STORAGE_FLAVOR]", "openstack overcloud raid create [-h] --node NODE configuration", "openstack overcloud restore [--inventory INVENTORY] [--stack [STACK]] --node-name NODE_NAME [--extra-vars EXTRA_VARS]", "openstack overcloud role list [-h] [--roles-path <roles directory>]", "openstack overcloud role show [-h] [--roles-path <roles directory>] <role>", "openstack overcloud roles generate [-h] [--roles-path <roles directory>] [-o <output file>] [--skip-validate] <role> [<role> ...]", "openstack overcloud status [-h] [--plan PLAN] [--working-dir WORKING_DIR]", "openstack overcloud support report collect [-h] [--stack STACK] [-c CONTAINER] [-o DESTINATION] [--skip-container-delete] [-t TIMEOUT] [-n CONCURRENCY] [--collect-only] [--download-only] server_name", "openstack overcloud update prepare [--templates [TEMPLATES]] [--stack STACK] [--timeout <TIMEOUT>] [--libvirt-type {kvm,qemu}] [--ntp-server NTP_SERVER] [--no-proxy NO_PROXY] [--overcloud-ssh-user OVERCLOUD_SSH_USER] [--overcloud-ssh-key OVERCLOUD_SSH_KEY] [--overcloud-ssh-network OVERCLOUD_SSH_NETWORK] [--overcloud-ssh-enable-timeout OVERCLOUD_SSH_ENABLE_TIMEOUT] [--overcloud-ssh-port-timeout OVERCLOUD_SSH_PORT_TIMEOUT] [--environment-file <HEAT ENVIRONMENT FILE>] [--environment-directory <HEAT ENVIRONMENT DIRECTORY>] [--roles-file ROLES_FILE] [--networks-file NETWORKS_FILE] [--vip-file VIP_FILE] [--no-cleanup] [--update-plan-only] [--validation-errors-nonfatal] [--validation-warnings-fatal] [--disable-validations] [--inflight-validations] [--dry-run] [--run-validations] [--skip-postconfig] [--force-postconfig] [--skip-deploy-identifier] [--answers-file ANSWERS_FILE] [--disable-password-generation] [--deployed-server] [--provision-nodes] [--config-download] [--no-config-download] [--config-download-only] [--setup-only] [--config-dir CONFIG_DIR] [--config-type CONFIG_TYPE] [--no-preserve-config] [--output-dir OUTPUT_DIR] [--override-ansible-cfg OVERRIDE_ANSIBLE_CFG] [--config-download-timeout CONFIG_DOWNLOAD_TIMEOUT] [--deployment-python-interpreter DEPLOYMENT_PYTHON_INTERPRETER] [-b [<baremetal_deployment.yaml>]] [--network-config] [--limit LIMIT] [--tags TAGS] [--skip-tags SKIP_TAGS] [--ansible-forks ANSIBLE_FORKS] [--disable-container-prepare] [--working-dir WORKING_DIR] [--heat-type {pod,container,native}] [--heat-container-api-image <HEAT_CONTAINER_API_IMAGE>] [--heat-container-engine-image <HEAT_CONTAINER_ENGINE_IMAGE>] [--rm-heat] [--skip-heat-pull] [--disable-protected-resource-types] [-y] [--allow-deprecated-network-data]", "openstack overcloud update run [-h] --limit LIMIT [--playbook [PLAYBOOK ...]] [--ssh-user SSH_USER] [--static-inventory STATIC_INVENTORY] [--stack STACK] [--tags TAGS] [--skip-tags SKIP_TAGS] [-y] [--ansible-forks 
ANSIBLE_FORKS]", "openstack overcloud upgrade converge [--templates [TEMPLATES]] [--stack STACK] [--timeout <TIMEOUT>] [--libvirt-type {kvm,qemu}] [--ntp-server NTP_SERVER] [--no-proxy NO_PROXY] [--overcloud-ssh-user OVERCLOUD_SSH_USER] [--overcloud-ssh-key OVERCLOUD_SSH_KEY] [--overcloud-ssh-network OVERCLOUD_SSH_NETWORK] [--overcloud-ssh-enable-timeout OVERCLOUD_SSH_ENABLE_TIMEOUT] [--overcloud-ssh-port-timeout OVERCLOUD_SSH_PORT_TIMEOUT] [--environment-file <HEAT ENVIRONMENT FILE>] [--environment-directory <HEAT ENVIRONMENT DIRECTORY>] [--roles-file ROLES_FILE] [--networks-file NETWORKS_FILE] [--vip-file VIP_FILE] [--no-cleanup] [--update-plan-only] [--validation-errors-nonfatal] [--validation-warnings-fatal] [--disable-validations] [--inflight-validations] [--dry-run] [--run-validations] [--skip-postconfig] [--force-postconfig] [--skip-deploy-identifier] [--answers-file ANSWERS_FILE] [--disable-password-generation] [--deployed-server] [--provision-nodes] [--config-download] [--no-config-download] [--config-download-only] [--setup-only] [--config-dir CONFIG_DIR] [--config-type CONFIG_TYPE] [--no-preserve-config] [--output-dir OUTPUT_DIR] [--override-ansible-cfg OVERRIDE_ANSIBLE_CFG] [--config-download-timeout CONFIG_DOWNLOAD_TIMEOUT] [--deployment-python-interpreter DEPLOYMENT_PYTHON_INTERPRETER] [-b [<baremetal_deployment.yaml>]] [--network-config] [--limit LIMIT] [--tags TAGS] [--skip-tags SKIP_TAGS] [--ansible-forks ANSIBLE_FORKS] [--disable-container-prepare] [--working-dir WORKING_DIR] [--heat-type {pod,container,native}] [--heat-container-api-image <HEAT_CONTAINER_API_IMAGE>] [--heat-container-engine-image <HEAT_CONTAINER_ENGINE_IMAGE>] [--rm-heat] [--skip-heat-pull] [--disable-protected-resource-types] [-y] [--allow-deprecated-network-data]", "openstack overcloud upgrade prepare [--templates [TEMPLATES]] [--stack STACK] [--timeout <TIMEOUT>] [--libvirt-type {kvm,qemu}] [--ntp-server NTP_SERVER] [--no-proxy NO_PROXY] [--overcloud-ssh-user OVERCLOUD_SSH_USER] [--overcloud-ssh-key OVERCLOUD_SSH_KEY] [--overcloud-ssh-network OVERCLOUD_SSH_NETWORK] [--overcloud-ssh-enable-timeout OVERCLOUD_SSH_ENABLE_TIMEOUT] [--overcloud-ssh-port-timeout OVERCLOUD_SSH_PORT_TIMEOUT] [--environment-file <HEAT ENVIRONMENT FILE>] [--environment-directory <HEAT ENVIRONMENT DIRECTORY>] [--roles-file ROLES_FILE] [--networks-file NETWORKS_FILE] [--vip-file VIP_FILE] [--no-cleanup] [--update-plan-only] [--validation-errors-nonfatal] [--validation-warnings-fatal] [--disable-validations] [--inflight-validations] [--dry-run] [--run-validations] [--skip-postconfig] [--force-postconfig] [--skip-deploy-identifier] [--answers-file ANSWERS_FILE] [--disable-password-generation] [--deployed-server] [--provision-nodes] [--config-download] [--no-config-download] [--config-download-only] [--setup-only] [--config-dir CONFIG_DIR] [--config-type CONFIG_TYPE] [--no-preserve-config] [--output-dir OUTPUT_DIR] [--override-ansible-cfg OVERRIDE_ANSIBLE_CFG] [--config-download-timeout CONFIG_DOWNLOAD_TIMEOUT] [--deployment-python-interpreter DEPLOYMENT_PYTHON_INTERPRETER] [-b [<baremetal_deployment.yaml>]] [--network-config] [--limit LIMIT] [--tags TAGS] [--skip-tags SKIP_TAGS] [--ansible-forks ANSIBLE_FORKS] [--disable-container-prepare] [--working-dir WORKING_DIR] [--heat-type {pod,container,native}] [--heat-container-api-image <HEAT_CONTAINER_API_IMAGE>] [--heat-container-engine-image <HEAT_CONTAINER_ENGINE_IMAGE>] [--rm-heat] [--skip-heat-pull] [--disable-protected-resource-types] [-y] [--allow-deprecated-network-data]", 
"openstack overcloud upgrade run [-h] --limit LIMIT [--playbook [PLAYBOOK ...]] [--static-inventory STATIC_INVENTORY] [--ssh-user SSH_USER] [--tags TAGS] [--skip-tags SKIP_TAGS] [--stack STACK] [-y] [--ansible-forks ANSIBLE_FORKS]" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/command_line_interface_reference/overcloud
5.2.4. Creating the File System
5.2.4. Creating the File System The following command creates a GFS file system on the logical volume. The following commands mount the logical volume and report the file system disk space usage.
[ "gfs_mkfs -plock_nolock -j 1 /dev/striped_vol_group/striped_logical_volume This will destroy any data on /dev/striped_vol_group/striped_logical_volume. Are you sure you want to proceed? [y/n] y Device: /dev/striped_vol_group/striped_logical_volume Blocksize: 4096 Filesystem Size: 492484 Journals: 1 Resource Groups: 8 Locking Protocol: lock_nolock Lock Table: Syncing All Done", "mount /dev/striped_vol_group/striped_logical_volume /mnt df Filesystem 1K-blocks Used Available Use% Mounted on /dev/mapper/VolGroup00-LogVol00 13902624 1656776 11528232 13% / /dev/hda1 101086 10787 85080 12% /boot tmpfs 127880 0 127880 0% /dev/shm /dev/striped_vol_group/striped_logical_volume 1969936 20 1969916 1% /mnt" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/cluster_logical_volume_manager/fs_create_ex2
Chapter 15. Browsing files on a network share
Chapter 15. Browsing files on a network share You can connect to a network share provided by a server and browse the files on the server like local files. You can download or upload files using the file browser. 15.1. GVFS URI format for network shares GNOME uses the GVFS URI format to refer to network shares and files on them. When you connect to a network share from GNOME, you provide the address of the network share in the following format. A URL, or uniform resource locator, is a form of address that refers to a location or file on a network. The address is formatted like this: The basic GVFS URI format takes the following syntax: The scheme specifies the protocol or type of server. The example.com portion of the address is called the domain name. If a username is required, it is inserted before the server name: You can also specify the user name or the port number for the network share: Table 15.1. Common network share protocols Protocol GVFS URI example SSH ssh://user@server.example.com/path NFS nfs://server/path Windows SMB smb://server/Share WebDAV dav://example.server.com/path Public FTP ftp://ftp.example.com/path Authenticated FTP ftp://user@ftp.example.com/path Additional resources The GVFS system The format of the GVFS URI string 15.2. Mounting a storage volume in GNOME You can manually mount a local storage volume or a network share in the Files application. Procedure Open the Files application. Click Other Locations in the side bar. The window lists all connected storage volumes and all network shares that are publicly available on your local area network. If you can see the volume or network share in this list, mount it by clicking the item. If you want to connect to a different network share, use the following steps. Enter the GVFS URI string for the network share in the Connect to Server field. Press Connect . If the dialog asks you for login credentials, enter your user name and password into the relevant fields. When the mounting process finishes, you can browse the files on the volume or network share. 15.3. Unmounting a storage volume in GNOME You can unmount a storage volume, a network share, or another resource in the Files application. Procedure Open the Files application. In the side bar, click the Unmount (⏏) icon next to the chosen mount. Wait until the mount disappears from the side bar or a notification about the safe removal appears. 15.4. Additional resources Managing storage volumes in GNOME Mounting NFS shares Mounting an SMB Share on Red Hat Enterprise Linux
[ "protocol :// server.example.com / folder/file", "protocol :// user @ server.example.com : port / folder/file" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/getting_started_with_the_gnome_desktop_environment/browsing-files-on-a-network-share_getting-started-with-the-gnome-desktop-environment
Chapter 5. AWS CloudWatch
Chapter 5. AWS CloudWatch Only producer is supported The AWS2 CloudWatch component allows messages to be sent to Amazon CloudWatch metrics. The implementation of the Amazon API is provided by the AWS SDK . Prerequisites You must have a valid Amazon Web Services developer account, and be signed up to use Amazon CloudWatch. More information is available at Amazon CloudWatch . 5.1. Dependencies When using aws2-cw with Red Hat build of Camel Spring Boot, add the following Maven dependency to your pom.xml to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-aws2-cw-starter</artifactId> </dependency> 5.2. URI Format The metrics will be created if they don't already exist. You can append query options to the URI in the following format, ?options=value&option2=value&... 5.3. Configuring Options Camel components are configured on two levels: Component level Endpoint level 5.3.1. Component Level Options The component level is the highest level. The configurations you define at this level are inherited by all the endpoints. For example, a component can have security settings, credentials for authentication, URLs for network connection, and so on. Since components typically have pre-configured defaults for the most common cases, you may need to only configure a few component options, or maybe none at all. You can configure components with Component DSL in a configuration file (application.properties|yaml), or directly with Java code. 5.3.2. Endpoint Level Options At the Endpoint level you have many options, which you can use to configure what you want the endpoint to do. The options are categorized according to whether the endpoint is used as a consumer (from) or as a producer (to) or used for both. You can configure endpoints directly in the endpoint URI as path and query parameters. You can also use Endpoint DSL and DataFormat DSL as type safe ways of configuring endpoints and data formats in Java. When configuring options, use Property Placeholders for URLs, port numbers, sensitive information, and other settings. Placeholders allow you to externalize the configuration from your code, giving you more flexible and reusable code. 5.4. Component Options The AWS CloudWatch component supports 18 options, which are listed below. Name Description Default Type amazonCwClient (producer) Autowired To use the AmazonCloudWatch as the client. CloudWatchClient configuration (producer) The component configuration. Cw2Configuration lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean name (producer) The metric name. String overrideEndpoint (producer) Set the need for overriding the endpoint. This option needs to be used in combination with uriEndpointOverride option. false boolean proxyHost (producer) To define a proxy host when instantiating the CW client. String proxyPort (producer) To define a proxy port when instantiating the CW client. 
Integer proxyProtocol (producer) To define a proxy protocol when instantiating the CW client. Enum values: HTTP HTTPS HTTPS Protocol region (producer) The region in which CW client needs to work. When using this parameter, the configuration will expect the lowercase name of the region (for example ap-east-1) You'll need to use the name Region.EU_WEST_1.id(). String timestamp (producer) The metric timestamp. Instant trustAllCertificates (producer) If we want to trust all certificates in case of overriding the endpoint. false boolean unit (producer) The metric unit. String uriEndpointOverride (producer) Set the overriding uri endpoint. This option needs to be used in combination with overrideEndpoint option. String useDefaultCredentialsProvider (producer) Set whether the CloudWatch client should expect to load credentials through a default credentials provider or to expect static credentials to be passed in. false boolean value (producer) The metric value. Double autowiredEnabled (advanced) Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true boolean accessKey (security) Amazon AWS Access Key. String secretKey (security) Amazon AWS Secret Key. String 5.5. Endpoint Options The AWS CloudWatch endpoint is configured using URI syntax: with the following path and query parameters: 5.5.1. Path Parameters (1 parameter) Name Description Default Type namespace (producer) Required The metric namespace. String 5.5.2. Query Parameters (16 parameters) Name Description Default Type amazonCwClient (producer) Autowired To use the AmazonCloudWatch as the client. CloudWatchClient lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean name (producer) The metric name. String overrideEndpoint (producer) Set the need for overriding the endpoint. This option needs to be used in combination with uriEndpointOverride option. false boolean proxyHost (producer) To define a proxy host when instantiating the CW client. String proxyPort (producer) To define a proxy port when instantiating the CW client. Integer proxyProtocol (producer) To define a proxy protocol when instantiating the CW client. Enum values: HTTP HTTPS HTTPS Protocol region (producer) The region in which CW client needs to work. When using this parameter, the configuration will expect the lowercase name of the region (for example ap-east-1) You'll need to use the name Region.EU_WEST_1.id(). String timestamp (producer) The metric timestamp. Instant trustAllCertificates (producer) If we want to trust all certificates in case of overriding the endpoint. false boolean unit (producer) The metric unit. String uriEndpointOverride (producer) Set the overriding uri endpoint. 
This option needs to be used in combination with overrideEndpoint option. String useDefaultCredentialsProvider (producer) Set whether the CloudWatch client should expect to load credentials through a default credentials provider or to expect static credentials to be passed in. false boolean value (producer) The metric value. Double accessKey (security) Amazon AWS Access Key. String secretKey (security) Amazon AWS Secret Key. String Required CW component options You have to provide the amazonCwClient in the Registry or your accessKey and secretKey to access Amazon CloudWatch. 5.6. Usage 5.6.1. Static credentials vs Default Credential Provider You can avoid using explicit static credentials by setting the useDefaultCredentialsProvider option to true. The default credentials provider then resolves credentials in the following order: Java system properties - aws.accessKeyId and aws.secretKey Environment variables - AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY. Web Identity Token from AWS STS. The shared credentials and config files. Amazon ECS container credentials - loaded from the Amazon ECS if the environment variable AWS_CONTAINER_CREDENTIALS_RELATIVE_URI is set. Amazon EC2 Instance profile credentials. For more information, see the AWS credentials documentation . 5.6.2. Message headers evaluated by the CW producer Header Type Description CamelAwsCwMetricName String The Amazon CW metric name. CamelAwsCwMetricValue Double The Amazon CW metric value. CamelAwsCwMetricUnit String The Amazon CW metric unit. CamelAwsCwMetricNamespace String The Amazon CW metric namespace. CamelAwsCwMetricTimestamp Date The Amazon CW metric timestamp. CamelAwsCwMetricDimensionName String The Amazon CW metric dimension name. CamelAwsCwMetricDimensionValue String The Amazon CW metric dimension value. CamelAwsCwMetricDimensions Map<String, String> A map of dimension names and dimension values. 5.6.3. Advanced CloudWatchClient configuration If you need more control over the CloudWatchClient instance configuration you can create your own instance and refer to it from the URI: from("direct:start") .to("aws2-cw://namespace?amazonCwClient=#client"); The #client refers to a CloudWatchClient in the Registry. 5.7. Examples 5.7.1. Producer Example from("direct:start") .to("aws2-cw://http://camel.apache.org/aws-cw"); and sends something like exchange.getIn().setHeader(Cw2Constants.METRIC_NAME, "ExchangesCompleted"); exchange.getIn().setHeader(Cw2Constants.METRIC_VALUE, "2.0"); exchange.getIn().setHeader(Cw2Constants.METRIC_UNIT, "Count"); 5.8. Spring Boot Auto-Configuration The component supports 19 options, which are listed below. Name Description Default Type camel.component.aws2-cw.access-key Amazon AWS Access Key. String camel.component.aws2-cw.amazon-cw-client To use the AmazonCloudWatch as the client. The option is a software.amazon.awssdk.services.cloudwatch.CloudWatchClient type. CloudWatchClient camel.component.aws2-cw.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.aws2-cw.configuration The component configuration. The option is a org.apache.camel.component.aws2.cw.Cw2Configuration type. 
Cw2Configuration camel.component.aws2-cw.enabled Whether to enable auto configuration of the aws2-cw component. This is enabled by default. Boolean camel.component.aws2-cw.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.aws2-cw.name The metric name. String camel.component.aws2-cw.override-endpoint Set the need for overriding the endpoint. This option needs to be used in combination with uriEndpointOverride option. false Boolean camel.component.aws2-cw.proxy-host To define a proxy host when instantiating the CW client. String camel.component.aws2-cw.proxy-port To define a proxy port when instantiating the CW client. Integer camel.component.aws2-cw.proxy-protocol To define a proxy protocol when instantiating the CW client. Protocol camel.component.aws2-cw.region The region in which CW client needs to work. When using this parameter, the configuration will expect the lowercase name of the region (for example ap-east-1) You'll need to use the name Region.EU_WEST_1.id(). String camel.component.aws2-cw.secret-key Amazon AWS Secret Key. String camel.component.aws2-cw.timestamp The metric timestamp. The option is a java.time.Instant type. Instant camel.component.aws2-cw.trust-all-certificates If we want to trust all certificates in case of overriding the endpoint. false Boolean camel.component.aws2-cw.unit The metric unit. String camel.component.aws2-cw.uri-endpoint-override Set the overriding uri endpoint. This option needs to be used in combination with overrideEndpoint option. String camel.component.aws2-cw.use-default-credentials-provider Set whether the CloudWatch client should expect to load credentials through a default credentials provider or to expect static credentials to be passed in. false Boolean camel.component.aws2-cw.value The metric value. Double
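As a minimal sketch of how the headers from section 5.6.2 can be set from a route, the following RouteBuilder publishes a metric with one dimension. The namespace exampleNamespace, all header values, and the placeholder credentials and region are assumptions for illustration only, not values taken from this documentation.
import org.apache.camel.builder.RouteBuilder;

public class CloudWatchMetricRoute extends RouteBuilder {
    @Override
    public void configure() {
        // Header names come from the table in section 5.6.2; every value here is illustrative.
        from("direct:publishMetric")
            .setHeader("CamelAwsCwMetricName", constant("ExchangesCompleted"))
            .setHeader("CamelAwsCwMetricValue", constant(2.0))
            .setHeader("CamelAwsCwMetricUnit", constant("Count"))
            .setHeader("CamelAwsCwMetricDimensionName", constant("AppName"))
            .setHeader("CamelAwsCwMetricDimensionValue", constant("ExampleApp"))
            // Placeholder credentials and region; in practice these would be externalized.
            .to("aws2-cw://exampleNamespace?accessKey=RAW(myAccessKey)&secretKey=RAW(mySecretKey)&region=eu-west-1");
    }
}
Sending a message to direct:publishMetric would then record one data point with the assumed dimension in the assumed namespace.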
[ "<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-aws2-cw-starter</artifactId> </dependency>", "aws2-cw://namespace[?options]", "aws2-cw:namespace", "from(\"direct:start\") .to(\"aws2-cw://namespace?amazonCwClient=#client\");", "from(\"direct:start\") .to(\"aws2-cw://http://camel.apache.org/aws-cw\");", "exchange.getIn().setHeader(Cw2Constants.METRIC_NAME, \"ExchangesCompleted\"); exchange.getIn().setHeader(Cw2Constants.METRIC_VALUE, \"2.0\"); exchange.getIn().setHeader(Cw2Constants.METRIC_UNIT, \"Count\");" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.0/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-aws2-cw-component-starter
Chapter 15. Installing a cluster on AWS with remote workers on AWS Outposts
Chapter 15. Installing a cluster on AWS with remote workers on AWS Outposts In OpenShift Container Platform version 4.14, you can install a cluster on Amazon Web Services (AWS) with remote workers running in AWS Outposts. This can be achieved by customizing the default AWS installation and performing some manual steps. Important Installing a cluster on AWS with remote workers on AWS Outposts is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . For more information about AWS Outposts, see AWS Outposts Documentation . Important In order to install a cluster with remote workers in AWS Outposts, all worker instances must be located within the same Outpost instance and cannot be located in an AWS region. It is not possible for the cluster to have instances in both AWS Outposts and the AWS region. In addition, control plane nodes must not be schedulable. 15.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You configured an AWS account to host the cluster. You are familiar with the instance types that are supported in the AWS Outpost instance you use. This can be validated with the get-outpost-instance-types AWS CLI command You are familiar with the AWS Outpost instance details, such as OutpostArn and AvailabilityZone. This can be validated with the list-outposts AWS CLI command Important Since the cluster uses the provided AWS credentials to create AWS resources for its entire life cycle, the credentials must be key-based and long-term. If you have an AWS profile stored on your computer, it must not use a temporary session token, generated while using a multi-factor authentication device. For more information about generating the appropriate keys, see Managing Access Keys for IAM Users in the AWS documentation. You may supply the keys when you run the installation program. You have access to an existing Amazon Virtual Private Cloud (VPC) in Amazon Web Services (AWS). See the section "About using a custom VPC" for more information. If a firewall is used, it was configured to allow the sites that your cluster requires access to. 15.2. About using a custom VPC The OpenShift Container Platform 4.14 installer cannot automatically deploy AWS Subnets on AWS Outposts, so you will need to manually configure the VPC. Therefore, you have to deploy the cluster into existing subnets in an existing Amazon Virtual Private Cloud (VPC) in Amazon Web Services (AWS). In addition, by deploying OpenShift Container Platform into an existing AWS VPC, you might be able to avoid limit constraints in new accounts or more easily abide by the operational constraints that your company's guidelines set. Because the installation program cannot know what other components are also in your existing subnets, it cannot choose subnet CIDRs and so forth on your behalf. You must configure the networking for these subnets yourself. 15.2.1. 
Requirements for using your VPC The installation program no longer creates the following components: Internet gateways NAT gateways Subnets Route tables VPCs VPC DHCP options VPC endpoints Note The installation program requires that you use the cloud-provided DNS server. Using a custom DNS server is not supported and causes the installation to fail. If you use a custom VPC, you must correctly configure it and its subnets for the installation program and the cluster to use. See Amazon VPC console wizard configurations and Work with VPCs and subnets in the AWS documentation for more information on creating and managing an AWS VPC. The installation program cannot: Subdivide network ranges for the cluster to use. Set route tables for the subnets. Set VPC options like DHCP. You must complete these tasks before you install the cluster. See VPC networking components and Route tables for your VPC for more information on configuring networking in an AWS VPC. Your VPC must meet the following characteristics: Note To allow the creation of OpenShift Container Platform with remote workers in the AWS Outposts, you must create at least one private subnet in the AWS Outpost instance for the creation of the workload instances and one private subnet in an AWS region for the creation of the control plane instances. If you specify more than one private subnet in the region, the control plane instances will be distributed across these subnets. You will also need to create a public subnet in each of the availability zones used for private subnets, including the Outpost private subnet, as Network Load Balancers will be created in the AWS region for the API server and Ingress network as part of the cluster installation. It is possible to create an AWS region private subnet in the same Availability zone as an Outpost private subnet. Create a public and private subnet in the AWS Region for each availability zone that your control plane uses. Each availability zone can contain no more than one public and one private subnet in the AWS region. For an example of this type of configuration, see VPC with public and private subnets (NAT) in the AWS documentation. To create a private subnet in the AWS Outposts, you need to first ensure that the Outpost instance is located in the desired availability zone. Then, you can create the private subnet within that availability zone within the Outpost instance, by adding the Outpost ARN. Make sure there is another public subnet in the AWS Region created in the same availability zone. Record each subnet ID. Completing the installation requires that you enter all the subnet IDs created in the AWS Region in the platform section of the install-config.yaml file and change the workers machineset to use the private subnet ID created in the Outpost. See Finding a subnet ID in the AWS documentation. Important If you need to create a public subnet in the AWS Outposts, verify that this subnet is not used for the Network or Classic LoadBalancer, otherwise the LoadBalancer creation fails. To achieve that, the kubernetes.io/cluster/.*-outposts: owned special tag must be included in the subnet. The VPC's CIDR block must contain the Networking.MachineCIDR range, which is the IP address pool for cluster machines. The subnet CIDR blocks must belong to the machine CIDR that you specify. The VPC must have a public internet gateway attached to it. For each availability zone: The public subnet requires a route to the internet gateway. The public subnet requires a NAT gateway with an EIP address. 
The private subnet requires a route to the NAT gateway in public subnet. Note To access your local cluster over your local network, the VPC must be associated with your Outpost's local gateway route table. For more information, see VPC associations in the AWS Outposts User Guide. The VPC must not use the kubernetes.io/cluster/.*: owned , Name , and openshift.io/cluster tags. The installation program modifies your subnets to add the kubernetes.io/cluster/.*: shared tag, so your subnets must have at least one free tag slot available for it. See Tag Restrictions in the AWS documentation to confirm that the installation program can add a tag to each subnet that you specify. You cannot use a Name tag, because it overlaps with the EC2 Name field and the installation fails. You must enable the enableDnsSupport and enableDnsHostnames attributes in your VPC, so that the cluster can use the Route 53 zones that are attached to the VPC to resolve cluster's internal DNS records. See DNS Support in Your VPC in the AWS documentation. If you prefer to use your own Route 53 hosted private zone, you must associate the existing hosted zone with your VPC prior to installing a cluster. You can define your hosted zone using the platform.aws.hostedZone and platform.aws.hostedZoneRole fields in the install-config.yaml file. You can use a private hosted zone from another account by sharing it with the account where you install the cluster. If you use a private hosted zone from another account, you must use the Passthrough or Manual credentials mode. Option 1: Create VPC endpoints Create a VPC endpoint and attach it to the subnets that the clusters are using. Name the endpoints as follows: ec2.<aws_region>.amazonaws.com elasticloadbalancing.<aws_region>.amazonaws.com s3.<aws_region>.amazonaws.com With this option, network traffic remains private between your VPC and the required AWS services. Option 2: Create a proxy without VPC endpoints As part of the installation process, you can configure an HTTP or HTTPS proxy. With this option, internet traffic goes through the proxy to reach the required AWS services. Option 3: Create a proxy with VPC endpoints As part of the installation process, you can configure an HTTP or HTTPS proxy with VPC endpoints. Create a VPC endpoint and attach it to the subnets that the clusters are using. Name the endpoints as follows: ec2.<aws_region>.amazonaws.com elasticloadbalancing.<aws_region>.amazonaws.com s3.<aws_region>.amazonaws.com When configuring the proxy in the install-config.yaml file, add these endpoints to the noProxy field. With this option, the proxy prevents the cluster from accessing the internet directly. However, network traffic remains private between your VPC and the required AWS services. Required VPC components You must provide a suitable VPC and subnets that allow communication to your machines. Component AWS type Description VPC AWS::EC2::VPC AWS::EC2::VPCEndpoint You must provide a public VPC for the cluster to use. The VPC uses an endpoint that references the route tables for each subnet to improve communication with the registry that is hosted in S3. Public subnets AWS::EC2::Subnet AWS::EC2::SubnetNetworkAclAssociation Your VPC must have public subnets for between 1 and 3 availability zones and associate them with appropriate Ingress rules. 
Internet gateway AWS::EC2::InternetGateway AWS::EC2::VPCGatewayAttachment AWS::EC2::RouteTable AWS::EC2::Route AWS::EC2::SubnetRouteTableAssociation AWS::EC2::NatGateway AWS::EC2::EIP You must have a public internet gateway, with public routes, attached to the VPC. In the provided templates, each public subnet has a NAT gateway with an EIP address. These NAT gateways allow cluster resources, like private subnet instances, to reach the internet and are not required for some restricted network or proxy scenarios. Network access control AWS::EC2::NetworkAcl AWS::EC2::NetworkAclEntry You must allow the VPC to access the following ports: Port Reason 80 Inbound HTTP traffic 443 Inbound HTTPS traffic 22 Inbound SSH traffic 1024 - 65535 Inbound ephemeral traffic 0 - 65535 Outbound ephemeral traffic Private subnets AWS::EC2::Subnet AWS::EC2::RouteTable AWS::EC2::SubnetRouteTableAssociation Your VPC can have private subnets. The provided CloudFormation templates can create private subnets for between 1 and 3 availability zones. To enable remote workers running in the Outpost, the VPC must include a private subnet located within the Outpost instance, in addition to the private subnets located within the corresponding AWS region. If you use private subnets, you must provide appropriate routes and tables for them. 15.2.2. VPC validation To ensure that the subnets that you provide are suitable, the installation program confirms the following data: All the subnets that you specify exist. You provide private subnets. The subnet CIDRs belong to the machine CIDR that you specified. You provide subnets for each availability zone. Each availability zone contains exactly one public and one private subnet in the AWS region (not created in the Outpost instance). The availability zone in which the Outpost instance is installed should include one additional private subnet in the Outpost instance. You provide a public subnet for each private subnet availability zone. Machines are not provisioned in availability zones that you do not provide private subnets for. If you destroy a cluster that uses an existing VPC, the VPC is not deleted. When you remove the OpenShift Container Platform cluster from a VPC, the kubernetes.io/cluster/.*: shared tag is removed from the subnets that it used. 15.2.3. Division of permissions Starting with OpenShift Container Platform 4.3, you do not need all of the permissions that are required for an installation program-provisioned infrastructure cluster to deploy a cluster. This change mimics the division of permissions that you might have at your company: some individuals can create different resources in your clouds than others. For example, you might be able to create application-specific items, like instances, buckets, and load balancers, but not networking-related components such as VPCs, subnets, or ingress rules. The AWS credentials that you use when you create your cluster do not need the networking permissions that are required to make VPCs and core networking components within the VPC, such as subnets, routing tables, internet gateways, NAT, and VPN. You still need permission to make the application resources that the machines within the cluster require, such as ELBs, security groups, S3 buckets, and nodes. 15.2.4. Isolation between clusters If you deploy OpenShift Container Platform to an existing network, the isolation of cluster services is reduced in the following ways: You can install multiple OpenShift Container Platform clusters in the same VPC. 
ICMP ingress is allowed from the entire network. TCP 22 ingress (SSH) is allowed to the entire network. Control plane TCP 6443 ingress (Kubernetes API) is allowed to the entire network. Control plane TCP 22623 ingress (MCS) is allowed to the entire network. 15.2.5. AWS security groups By default, the installation program creates and attaches security groups to control plane and compute machines. The rules associated with the default security groups cannot be modified. However, you can apply additional existing AWS security groups, which are associated with your existing VPC, to control plane and compute machines. Applying custom security groups can help you meet the security needs of your organization, in such cases where you need to control the incoming or outgoing traffic of these machines. As part of the installation process, you apply custom security groups by modifying the install-config.yaml file before deploying the cluster. For more information, see "Applying existing AWS security groups to the cluster". 15.3. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.14, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 15.4. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. 
For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) Next steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 15.5. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a computer that runs Linux or macOS, with at least 1.2 GB of local disk space. Procedure Go to the Cluster Type page on the Red Hat Hybrid Cloud Console. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select your infrastructure provider from the Run it yourself section of the page. Select your host operating system and architecture from the dropdown menus under OpenShift Installer and click Download Installer . Place the downloaded file in the directory where you want to store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both of the files are required to delete the cluster. Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from Red Hat OpenShift Cluster Manager . 
This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. Tip Alternatively, you can retrieve the installation program from the Red Hat Customer Portal , where you can specify a version of the installation program to download. However, you must have an active subscription to access this page. 15.6. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 15.1. Minimum resource requirements Machine Operating System vCPU [1] Virtual RAM Storage Input/Output Per Second (IOPS) [2] Bootstrap RHCOS 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS, RHEL 8.6 and later [3] 2 8 GB 100 GB 300 One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or Hyper-Threading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later. Note As of OpenShift Container Platform version 4.13, RHCOS is based on RHEL version 9.2, which updates the micro-architecture requirements. The following list contains the minimum instruction set architectures (ISA) that each architecture requires: x86-64 architecture requires x86-64-v2 ISA ARM64 architecture requires ARMv8.0-A ISA IBM Power architecture requires Power 9 ISA s390x architecture requires z14 ISA For more information, see RHEL Architectures . If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OpenShift Container Platform. Additional resources Optimizing storage 15.7. Identifying your AWS Outposts instance types AWS Outposts rack catalog includes options supporting the latest generation Intel powered EC2 instance types with or without local instance storage. Identify which instance types are configured in your AWS Outpost instance. As part of the installation process, you must update the install-config.yaml file with the instance type that the installation program will use to deploy worker nodes. Procedure Use the AWS CLI to get the list of supported instance types by running the following command: USD aws outposts get-outpost-instance-types --outpost-id <outpost_id> 1 1 For <outpost_id> , specify the Outpost ID, used in the AWS account for the worker instances Important When you purchase capacity for your AWS Outpost instance, you specify an EC2 capacity layout that each server provides. Each server supports a single family of instance types. A layout can offer a single instance type or multiple instance types. Dedicated Hosts allows you to alter whatever you chose for that initial layout. 
If you allocate a host to support a single instance type for the entire capacity, you can only start a single instance type from that host. Supported instance types in AWS Outposts might change. For more information, you can check the Compute and Storage page in AWS Outposts documents. 15.8. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on Amazon Web Services (AWS). Prerequisites You have the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. When specifying the directory: Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory. Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Note Always delete the ~/.powervs directory to avoid reusing a stale configuration. Run the following command: USD rm -rf ~/.powervs At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select AWS as the platform to target. If you do not have an Amazon Web Services (AWS) profile stored on your computer, enter the AWS access key ID and secret access key for the user that you configured to run the installation program. Select the AWS region to deploy the cluster to. Select the base domain for the Route 53 service that you configured for your cluster. Enter a descriptive name for your cluster. Modify the install-config.yaml file. The AWS Outposts installation has the following limitations which require manual modification of the install-config.yaml file: Unlike AWS Regions, which offer near-infinite scale, AWS Outposts are limited by their provisioned capacity, EC2 family and generations, configured instance sizes, and availability of compute capacity that is not already consumed by other workloads. Therefore, when creating new OpenShift Container Platform cluster, you need to provide the supported instance type in the compute.platform.aws.type section in the configuration file. When deploying OpenShift Container Platform cluster with remote workers running in AWS Outposts, only one Availability Zone can be used for the compute instances - the Availability Zone in which the Outpost instance was created. Therefore, when creating new OpenShift Container Platform cluster, it is recommended to provide the relevant Availability Zone in the compute.platform.aws.zones section in the configuration file, in order to limit the compute instances to this Availability Zone. 
Amazon Elastic Block Store (EBS) gp3 volumes aren't supported by the AWS Outposts service. This volume type is the default type used by the OpenShift Container Platform cluster. Therefore, when creating new OpenShift Container Platform cluster, you must change the volume type in the compute.platform.aws.rootVolume.type section to gp2. You will find more information about how to change these values below. Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. Additional resources Installation configuration parameters for AWS 15.8.1. Sample customized install-config.yaml file for AWS You can customize the installation configuration file ( install-config.yaml ) to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. Important This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it. apiVersion: v1 baseDomain: example.com 1 credentialsMode: Mint 2 controlPlane: 3 4 hyperthreading: Enabled 5 name: master platform: {} replicas: 3 compute: 6 - hyperthreading: Enabled 7 name: worker platform: aws: type: m5.large 8 zones: - us-east-1a 9 rootVolume: type: gp2 10 size: 120 replicas: 3 metadata: name: test-cluster 11 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 12 serviceNetwork: - 172.30.0.0/16 platform: aws: region: us-west-2 13 propagateUserTags: true 14 userTags: adminContact: jdoe costCenter: 7536 subnets: 15 - subnet-1 - subnet-2 - subnet-3 sshKey: ssh-ed25519 AAAA... 16 pullSecret: '{"auths": ...}' 17 1 11 13 17 Required. The installation program prompts you for this value. 2 Optional: Add this parameter to force the Cloud Credential Operator (CCO) to use the specified mode. By default, the CCO uses the root credentials in the kube-system namespace to dynamically try to determine the capabilities of the credentials. For details about CCO modes, see the "About the Cloud Credential Operator" section in the Authentication and authorization guide. 3 6 14 If you do not provide these parameters and values, the installation program provides the default value. 4 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 5 7 Whether to enable or disable simultaneous multithreading, or hyperthreading . By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger instance types, such as m4.2xlarge or m5.2xlarge , for your machines if you disable simultaneous multithreading. 8 For compute instances running in an AWS Outpost instance, specify a supported instance type in the AWS Outpost instance. 
9 For compute instances running in an AWS Outpost instance, specify the Availability Zone where the Outpost instance is located. 10 For compute instances running in an AWS Outpost instance, specify volume type gp2 to avoid using the gp3 volume type, which is not supported. 12 The cluster network plugin to install. The supported values are OVNKubernetes and OpenShiftSDN . The default value is OVNKubernetes . 15 If you provide your own VPC, specify subnets for each availability zone that your cluster uses. 16 You can optionally provide the sshKey value that you use to access the machines in your cluster. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 15.8.2. Applying existing AWS security groups to the cluster Applying existing AWS security groups to your control plane and compute machines can help you meet the security needs of your organization, in such cases where you need to control the incoming or outgoing traffic of these machines. Prerequisites You have created the security groups in AWS. For more information, see the AWS documentation about working with security groups . The security groups must be associated with the existing VPC that you are deploying the cluster to. The security groups cannot be associated with another VPC. You have an existing install-config.yaml file. Procedure In the install-config.yaml file, edit the compute.platform.aws.additionalSecurityGroupIDs parameter to specify one or more custom security groups for your compute machines. Edit the controlPlane.platform.aws.additionalSecurityGroupIDs parameter to specify one or more custom security groups for your control plane machines. Save the file and reference it when deploying the cluster. Sample install-config.yaml file that specifies custom security groups # ... compute: - hyperthreading: Enabled name: worker platform: aws: additionalSecurityGroupIDs: - sg-1 1 - sg-2 replicas: 3 controlPlane: hyperthreading: Enabled name: master platform: aws: additionalSecurityGroupIDs: - sg-3 - sg-4 replicas: 3 platform: aws: region: us-east-1 subnets: 2 - subnet-1 - subnet-2 - subnet-3 1 Specify the name of the security group as it appears in the Amazon EC2 console, including the sg prefix. 2 Specify subnets for each availability zone that your cluster uses. 15.9. Generating manifest files Use the installation program to generate a set of manifest files in the assets directory. Manifest files are required to specify the AWS Outposts subnets to use for worker machines, and to specify settings required by the network provider. If you plan to reuse the install-config.yaml file, create a backup file before you generate the manifest files. Procedure Optional: Create a backup copy of the install-config.yaml file: USD cp install-config.yaml install-config.yaml.backup Generate a set of manifests in your assets directory: USD openshift-install create manifests --dir <installation_directory> This command displays the following messages. Example output INFO Consuming Install Config from target directory INFO Manifests created in: <installation_directory>/manifests and <installation_directory>/openshift The command generates the following manifest files: Example output USD tree . 
├── manifests │ ├── cluster-config.yaml │ ├── cluster-dns-02-config.yml │ ├── cluster-infrastructure-02-config.yml │ ├── cluster-ingress-02-config.yml │ ├── cluster-network-01-crd.yml │ ├── cluster-network-02-config.yml │ ├── cluster-proxy-01-config.yaml │ ├── cluster-scheduler-02-config.yml │ ├── cvo-overrides.yaml │ ├── kube-cloud-config.yaml │ ├── kube-system-configmap-root-ca.yaml │ ├── machine-config-server-tls-secret.yaml │ └── openshift-config-secret-pull-secret.yaml └── openshift ├── 99_cloud-creds-secret.yaml ├── 99_kubeadmin-password-secret.yaml ├── 99_openshift-cluster-api_master-machines-0.yaml ├── 99_openshift-cluster-api_master-machines-1.yaml ├── 99_openshift-cluster-api_master-machines-2.yaml ├── 99_openshift-cluster-api_master-user-data-secret.yaml ├── 99_openshift-cluster-api_worker-machineset-0.yaml ├── 99_openshift-cluster-api_worker-user-data-secret.yaml ├── 99_openshift-machineconfig_99-master-ssh.yaml ├── 99_openshift-machineconfig_99-worker-ssh.yaml ├── 99_role-cloud-creds-secret-reader.yaml └── openshift-install-manifests.yaml 15.9.1. Modifying manifest files Note The AWS Outposts environments has the following limitations which require manual modification in the manifest generated files: The maximum transmission unit (MTU) of a network connection is the size, in bytes, of the largest permissible packet that can be passed over the connection. The Outpost service link supports a maximum packet size of 1300 bytes. For more information about the service link, see Outpost connectivity to AWS Regions You will find more information about how to change these values below. Use Outpost Subnet for workers machineset Modify the following file: <installation_directory>/openshift/99_openshift-cluster-api_worker-machineset-0.yaml Find the subnet ID and replace it with the ID of the private subnet created in the Outpost. As a result, all the worker machines will be created in the Outpost. Specify MTU value for the Network Provider Outpost service links support a maximum packet size of 1300 bytes. It's required to modify the MTU of the Network Provider to follow this requirement. Create a new file under manifests directory, named cluster-network-03-config.yml If OpenShift SDN network provider is used, set the MTU value to 1250 apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: openshiftSDNConfig: mtu: 1250 If OVN-Kubernetes network provider is used, set the MTU value to 1200 apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: mtu: 1200 15.10. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.14. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.14 Linux Client entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . 
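For example, assuming that /usr/local/bin is already included in your PATH (a common default, but verify this on your system), you could place the binary there:

$ chmod +x oc                  # ensure the binary is executable
$ sudo mv oc /usr/local/bin/   # /usr/local/bin is assumed to be on your PATH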
To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.14 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.14 macOS Client entry and save the file. Note For macOS arm64, choose the OpenShift v4.14 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification Verify your installation by using an oc command: USD oc <command> 15.11. Alternatives to storing administrator-level secrets in the kube-system project By default, administrator secrets are stored in the kube-system project. If you configured the credentialsMode parameter in the install-config.yaml file to Manual , you must use one of the following alternatives: To manage long-term cloud credentials manually, follow the procedure in Manually creating long-term credentials . To implement short-term credentials that are managed outside the cluster for individual components, follow the procedures in Configuring an AWS cluster to use short-term credentials . 15.11.1. Manually creating long-term credentials The Cloud Credential Operator (CCO) can be put into manual mode prior to installation in environments where the cloud identity and access management (IAM) APIs are not reachable, or the administrator prefers not to store an administrator-level credential secret in the cluster kube-system namespace. Procedure If you did not set the credentialsMode parameter in the install-config.yaml configuration file to Manual , modify the value as shown: Sample configuration file snippet apiVersion: v1 baseDomain: example.com credentialsMode: Manual # ... If you have not previously created installation manifest files, do so by running the following command: USD openshift-install create manifests --dir <installation_directory> where <installation_directory> is the directory in which the installation program creates files. 
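As an optional sanity check, you can confirm that the manifests and openshift directories were generated before continuing; the path shown is a placeholder for the installation directory you chose:

$ ls <installation_directory>/manifests <installation_directory>/openshift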
Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest custom resources (CRs) from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. This command creates a YAML file for each CredentialsRequest object. Sample CredentialsRequest object apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator ... spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - effect: Allow action: - iam:GetUser - iam:GetUserPolicy - iam:ListAccessKeys resource: "*" ... Create YAML files for secrets in the openshift-install manifests directory that you generated previously. The secrets must be stored using the namespace and secret name defined in the spec.secretRef for each CredentialsRequest object. Sample CredentialsRequest object with secrets apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator ... spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - effect: Allow action: - s3:CreateBucket - s3:DeleteBucket resource: "*" ... secretRef: name: <component_secret> namespace: <component_namespace> ... Sample Secret object apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: aws_access_key_id: <base64_encoded_aws_access_key_id> aws_secret_access_key: <base64_encoded_aws_secret_access_key> Important Before upgrading a cluster that uses manually maintained credentials, you must ensure that the CCO is in an upgradeable state. 15.11.2. Configuring an AWS cluster to use short-term credentials To install a cluster that is configured to use the AWS Security Token Service (STS), you must configure the CCO utility and create the required AWS resources for your cluster. 15.11.2.1. Configuring the Cloud Credential Operator utility To create and manage cloud credentials from outside of the cluster when the Cloud Credential Operator (CCO) is operating in manual mode, extract and prepare the CCO utility ( ccoctl ) binary. Note The ccoctl utility is a Linux binary that must run in a Linux environment. Prerequisites You have access to an OpenShift Container Platform account with cluster administrator access. You have installed the OpenShift CLI ( oc ). You have created an AWS account for the ccoctl utility to use with the following permissions: Example 15.1. 
Required AWS permissions Required iam permissions iam:CreateOpenIDConnectProvider iam:CreateRole iam:DeleteOpenIDConnectProvider iam:DeleteRole iam:DeleteRolePolicy iam:GetOpenIDConnectProvider iam:GetRole iam:GetUser iam:ListOpenIDConnectProviders iam:ListRolePolicies iam:ListRoles iam:PutRolePolicy iam:TagOpenIDConnectProvider iam:TagRole Required s3 permissions s3:CreateBucket s3:DeleteBucket s3:DeleteObject s3:GetBucketAcl s3:GetBucketTagging s3:GetObject s3:GetObjectAcl s3:GetObjectTagging s3:ListBucket s3:PutBucketAcl s3:PutBucketPolicy s3:PutBucketPublicAccessBlock s3:PutBucketTagging s3:PutObject s3:PutObjectAcl s3:PutObjectTagging Required cloudfront permissions cloudfront:ListCloudFrontOriginAccessIdentities cloudfront:ListDistributions cloudfront:ListTagsForResource If you plan to store the OIDC configuration in a private S3 bucket that is accessed by the IAM identity provider through a public CloudFront distribution URL, the AWS account that runs the ccoctl utility requires the following additional permissions: Example 15.2. Additional permissions for a private S3 bucket with CloudFront cloudfront:CreateCloudFrontOriginAccessIdentity cloudfront:CreateDistribution cloudfront:DeleteCloudFrontOriginAccessIdentity cloudfront:DeleteDistribution cloudfront:GetCloudFrontOriginAccessIdentity cloudfront:GetCloudFrontOriginAccessIdentityConfig cloudfront:GetDistribution cloudfront:TagResource cloudfront:UpdateDistribution Note These additional permissions support the use of the --create-private-s3-bucket option when processing credentials requests with the ccoctl aws create-all command. Procedure Set a variable for the OpenShift Container Platform release image by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Obtain the CCO container image from the OpenShift Container Platform release image by running the following command: USD CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret) Note Ensure that the architecture of the USDRELEASE_IMAGE matches the architecture of the environment in which you will use the ccoctl tool. Extract the ccoctl binary from the CCO container image within the OpenShift Container Platform release image by running the following command: USD oc image extract USDCCO_IMAGE --file="/usr/bin/ccoctl" -a ~/.pull-secret Change the permissions to make ccoctl executable by running the following command: USD chmod 775 ccoctl Verification To verify that ccoctl is ready to use, display the help file. Use a relative file name when you run the command, for example: USD ./ccoctl.rhel9 Example output OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: alibabacloud Manage credentials objects for alibaba cloud aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use "ccoctl [command] --help" for more information about a command. 15.11.2.2. Creating AWS resources with the Cloud Credential Operator utility You have the following options when creating AWS resources: You can use the ccoctl aws create-all command to create the AWS resources automatically. This is the quickest way to create the resources. See Creating AWS resources with a single command . 
If you need to review the JSON files that the ccoctl tool creates before modifying AWS resources, or if the process the ccoctl tool uses to create AWS resources automatically does not meet the requirements of your organization, you can create the AWS resources individually. See Creating AWS resources individually . 15.11.2.2.1. Creating AWS resources with a single command If the process the ccoctl tool uses to create AWS resources automatically meets the requirements of your organization, you can use the ccoctl aws create-all command to automate the creation of AWS resources. Otherwise, you can create the AWS resources individually. For more information, see "Creating AWS resources individually". Note By default, ccoctl creates objects in the directory in which the commands are run. To create the objects in a different directory, use the --output-dir flag. This procedure uses <path_to_ccoctl_output_dir> to refer to this directory. Prerequisites You must have: Extracted and prepared the ccoctl binary. Procedure Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest objects from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. Note This command might take a few moments to run. Use the ccoctl tool to process all CredentialsRequest objects by running the following command: USD ccoctl aws create-all \ --name=<name> \ 1 --region=<aws_region> \ 2 --credentials-requests-dir=<path_to_credentials_requests_directory> \ 3 --output-dir=<path_to_ccoctl_output_dir> \ 4 --create-private-s3-bucket 5 1 Specify the name used to tag any cloud resources that are created for tracking. 2 Specify the AWS region in which cloud resources will be created. 3 Specify the directory containing the files for the component CredentialsRequest objects. 4 Optional: Specify the directory in which you want the ccoctl utility to create objects. By default, the utility creates objects in the directory in which the commands are run. 5 Optional: By default, the ccoctl utility stores the OpenID Connect (OIDC) configuration files in a public S3 bucket and uses the S3 URL as the public OIDC endpoint. To store the OIDC configuration in a private S3 bucket that is accessed by the IAM identity provider through a public CloudFront distribution URL instead, use the --create-private-s3-bucket parameter. Note If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter. 
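As an illustration only, a complete invocation with sample values might look like the following; the cluster name, region, and directory paths are placeholders rather than required values:

$ ccoctl aws create-all \
    --name=outposts-demo \
    --region=us-east-1 \
    --credentials-requests-dir=./credrequests \
    --output-dir=./ccoctl-output \
    --create-private-s3-bucket

In this sketch, --create-private-s3-bucket is included only if you want the OIDC configuration stored in a private S3 bucket behind a CloudFront distribution, as described above; omit it to use a public S3 bucket.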
Verification To verify that the OpenShift Container Platform secrets are created, list the files in the <path_to_ccoctl_output_dir>/manifests directory: USD ls <path_to_ccoctl_output_dir>/manifests Example output cluster-authentication-02-config.yaml openshift-cloud-credential-operator-cloud-credential-operator-iam-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capa-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-ebs-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-aws-cloud-credentials-credentials.yaml You can verify that the IAM roles are created by querying AWS. For more information, refer to AWS documentation on listing IAM roles. 15.11.2.2.2. Creating AWS resources individually You can use the ccoctl tool to create AWS resources individually. This option might be useful for an organization that shares the responsibility for creating these resources among different users or departments. Otherwise, you can use the ccoctl aws create-all command to create the AWS resources automatically. For more information, see "Creating AWS resources with a single command". Note By default, ccoctl creates objects in the directory in which the commands are run. To create the objects in a different directory, use the --output-dir flag. This procedure uses <path_to_ccoctl_output_dir> to refer to this directory. Some ccoctl commands make AWS API calls to create or modify AWS resources. You can use the --dry-run flag to avoid making API calls. Using this flag creates JSON files on the local file system instead. You can review and modify the JSON files and then apply them with the AWS CLI tool using the --cli-input-json parameters. Prerequisites Extract and prepare the ccoctl binary. Procedure Generate the public and private RSA key files that are used to set up the OpenID Connect provider for the cluster by running the following command: USD ccoctl aws create-key-pair Example output 2021/04/13 11:01:02 Generating RSA keypair 2021/04/13 11:01:03 Writing private key to /<path_to_ccoctl_output_dir>/serviceaccount-signer.private 2021/04/13 11:01:03 Writing public key to /<path_to_ccoctl_output_dir>/serviceaccount-signer.public 2021/04/13 11:01:03 Copying signing key for use by installer where serviceaccount-signer.private and serviceaccount-signer.public are the generated key files. This command also creates a private key that the cluster requires during installation in /<path_to_ccoctl_output_dir>/tls/bound-service-account-signing-key.key . Create an OpenID Connect identity provider and S3 bucket on AWS by running the following command: USD ccoctl aws create-identity-provider \ --name=<name> \ 1 --region=<aws_region> \ 2 --public-key-file=<path_to_ccoctl_output_dir>/serviceaccount-signer.public 3 1 <name> is the name used to tag any cloud resources that are created for tracking. 2 <aws-region> is the AWS region in which cloud resources will be created. 3 <path_to_ccoctl_output_dir> is the path to the public key file that the ccoctl aws create-key-pair command generated. 
Example output 2021/04/13 11:16:09 Bucket <name>-oidc created 2021/04/13 11:16:10 OpenID Connect discovery document in the S3 bucket <name>-oidc at .well-known/openid-configuration updated 2021/04/13 11:16:10 Reading public key 2021/04/13 11:16:10 JSON web key set (JWKS) in the S3 bucket <name>-oidc at keys.json updated 2021/04/13 11:16:18 Identity Provider created with ARN: arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com where openid-configuration is a discovery document and keys.json is a JSON web key set file. This command also creates a YAML configuration file in /<path_to_ccoctl_output_dir>/manifests/cluster-authentication-02-config.yaml . This file sets the issuer URL field for the service account tokens that the cluster generates, so that the AWS IAM identity provider trusts the tokens. Create IAM roles for each component in the cluster: Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest objects from the OpenShift Container Platform release image: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. Use the ccoctl tool to process all CredentialsRequest objects by running the following command: USD ccoctl aws create-iam-roles \ --name=<name> \ --region=<aws_region> \ --credentials-requests-dir=<path_to_credentials_requests_directory> \ --identity-provider-arn=arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com Note For AWS environments that use alternative IAM API endpoints, such as GovCloud, you must also specify your region with the --region parameter. If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter. For each CredentialsRequest object, ccoctl creates an IAM role with a trust policy that is tied to the specified OIDC identity provider, and a permissions policy as defined in each CredentialsRequest object from the OpenShift Container Platform release image. Verification To verify that the OpenShift Container Platform secrets are created, list the files in the <path_to_ccoctl_output_dir>/manifests directory: USD ls <path_to_ccoctl_output_dir>/manifests Example output cluster-authentication-02-config.yaml openshift-cloud-credential-operator-cloud-credential-operator-iam-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capa-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-ebs-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-aws-cloud-credentials-credentials.yaml You can verify that the IAM roles are created by querying AWS. 
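For example, if the AWS CLI is installed, one way to spot-check the roles is to list them and filter on the --name value that you supplied; the grep pattern below is an assumption about your naming, so adjust it as needed:

$ aws iam list-roles --output text --query 'Roles[].RoleName' | tr '\t' '\n' | grep <name>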
For more information, refer to AWS documentation on listing IAM roles. 15.11.2.3. Incorporating the Cloud Credential Operator utility manifests To implement short-term security credentials managed outside the cluster for individual components, you must move the manifest files that the Cloud Credential Operator utility ( ccoctl ) created to the correct directories for the installation program. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have configured the Cloud Credential Operator utility ( ccoctl ). You have created the cloud provider resources that are required for your cluster with the ccoctl utility. Procedure If you did not set the credentialsMode parameter in the install-config.yaml configuration file to Manual , modify the value as shown: Sample configuration file snippet apiVersion: v1 baseDomain: example.com credentialsMode: Manual # ... If you have not previously created installation manifest files, do so by running the following command: USD openshift-install create manifests --dir <installation_directory> where <installation_directory> is the directory in which the installation program creates files. Copy the manifests that the ccoctl utility generated to the manifests directory that the installation program created by running the following command: USD cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/ Copy the tls directory that contains the private key to the installation directory: USD cp -a /<path_to_ccoctl_output_dir>/tls . 15.12. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have the OpenShift Container Platform installation program and the pull secret for your cluster. You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Optional: Remove or disable the AdministratorAccess policy from the IAM account that you used to install the cluster. Note The elevated permissions provided by the AdministratorAccess policy are required only during installation. Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! 
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 15.13. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin /validating-an-installation.adoc 15.14. Logging in to the cluster by using the web console The kubeadmin user exists by default after an OpenShift Container Platform installation. You can log in to your cluster as the kubeadmin user by using the OpenShift Container Platform web console. Prerequisites You have access to the installation host. You completed a cluster installation and all cluster Operators are available. Procedure Obtain the password for the kubeadmin user from the kubeadmin-password file on the installation host: USD cat <installation_directory>/auth/kubeadmin-password Note Alternatively, you can obtain the kubeadmin password from the <installation_directory>/.openshift_install.log log file on the installation host. List the OpenShift Container Platform web console route: USD oc get routes -n openshift-console | grep 'console-openshift' Note Alternatively, you can obtain the OpenShift Container Platform route from the <installation_directory>/.openshift_install.log log file on the installation host. Example output console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None Navigate to the route detailed in the output of the preceding command in a web browser and log in as the kubeadmin user. 15.15. 
Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.14, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager. After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console. See About remote health monitoring for more information about the Telemetry service. 15.16. Cluster limitations Important Network Load Balancer (NLB) and Classic Load Balancer are not supported on AWS Outposts. After the cluster is created, all load balancers are created in the AWS region. To use load balancers created inside the Outpost instance, you must use an Application Load Balancer (ALB). You can use the AWS Load Balancer Operator for this purpose. If you want to use a public subnet located in the Outpost instance for the ALB, you must remove the special tag (kubernetes.io/cluster/.*-outposts: owned) that was added earlier during VPC creation. Removing this tag prevents you from creating new Services of type LoadBalancer (Network Load Balancer). See Understanding the AWS Load Balancer Operator for more information. Important Persistent storage using AWS Elastic Block Store limitations AWS Outposts does not support Amazon Elastic Block Store (EBS) gp3 volumes. After installation, the cluster includes two storage classes, gp3-csi and gp2-csi, with gp3-csi being the default storage class. It is important to always use gp2-csi. You can change the default storage class by using the following OpenShift CLI (oc) commands: $ oc annotate --overwrite storageclass gp3-csi storageclass.kubernetes.io/is-default-class=false $ oc annotate --overwrite storageclass gp2-csi storageclass.kubernetes.io/is-default-class=true You can confirm the change by listing the storage classes, as shown in the example after the next-steps list below. To create a volume in the Outpost instance, the CSI driver determines the Outpost ARN based on the topology keys stored on the CSINode objects. To ensure that the CSI driver uses the correct topology values, use the WaitForFirstConsumer volume binding mode and avoid setting allowed topologies on any new storage class that you create. 15.17. Next steps Validating an installation. Customize your cluster. If necessary, you can opt out of remote health reporting. If necessary, you can remove cloud provider credentials.
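The following optional check lists the storage classes so that you can confirm which one is marked as the default after running the annotate commands above; the exact output format can vary between versions:

$ oc get storageclass

The default storage class is typically flagged with (default) next to its name in the output.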
[ "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "tar -xvf openshift-install-linux.tar.gz", "aws outposts get-outpost-instance-types --outpost-id <outpost_id> 1", "./openshift-install create install-config --dir <installation_directory> 1", "rm -rf ~/.powervs", "apiVersion: v1 baseDomain: example.com 1 credentialsMode: Mint 2 controlPlane: 3 4 hyperthreading: Enabled 5 name: master platform: {} replicas: 3 compute: 6 - hyperthreading: Enabled 7 name: worker platform: aws: type: m5.large 8 zones: - us-east-1a 9 rootVolume: type: gp2 10 size: 120 replicas: 3 metadata: name: test-cluster 11 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 12 serviceNetwork: - 172.30.0.0/16 platform: aws: region: us-west-2 13 propagateUserTags: true 14 userTags: adminContact: jdoe costCenter: 7536 subnets: 15 - subnet-1 - subnet-2 - subnet-3 sshKey: ssh-ed25519 AAAA... 16 pullSecret: '{\"auths\": ...}' 17", "compute: - hyperthreading: Enabled name: worker platform: aws: additionalSecurityGroupIDs: - sg-1 1 - sg-2 replicas: 3 controlPlane: hyperthreading: Enabled name: master platform: aws: additionalSecurityGroupIDs: - sg-3 - sg-4 replicas: 3 platform: aws: region: us-east-1 subnets: 2 - subnet-1 - subnet-2 - subnet-3", "cp install-config.yaml install-config.yaml.backup", "openshift-install create manifests --dir <installation_-_directory>", "INFO Consuming Install Config from target directory INFO Manifests created in: <installation_directory>/manifests and <installation_directory>/openshift", "tree . 
├── manifests │ ├── cluster-config.yaml │ ├── cluster-dns-02-config.yml │ ├── cluster-infrastructure-02-config.yml │ ├── cluster-ingress-02-config.yml │ ├── cluster-network-01-crd.yml │ ├── cluster-network-02-config.yml │ ├── cluster-proxy-01-config.yaml │ ├── cluster-scheduler-02-config.yml │ ├── cvo-overrides.yaml │ ├── kube-cloud-config.yaml │ ├── kube-system-configmap-root-ca.yaml │ ├── machine-config-server-tls-secret.yaml │ └── openshift-config-secret-pull-secret.yaml └── openshift ├── 99_cloud-creds-secret.yaml ├── 99_kubeadmin-password-secret.yaml ├── 99_openshift-cluster-api_master-machines-0.yaml ├── 99_openshift-cluster-api_master-machines-1.yaml ├── 99_openshift-cluster-api_master-machines-2.yaml ├── 99_openshift-cluster-api_master-user-data-secret.yaml ├── 99_openshift-cluster-api_worker-machineset-0.yaml ├── 99_openshift-cluster-api_worker-user-data-secret.yaml ├── 99_openshift-machineconfig_99-master-ssh.yaml ├── 99_openshift-machineconfig_99-worker-ssh.yaml ├── 99_role-cloud-creds-secret-reader.yaml └── openshift-install-manifests.yaml", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: openshiftSDNConfig: mtu: 1250", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: mtu: 1200", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "apiVersion: v1 baseDomain: example.com credentialsMode: Manual", "openshift-install create manifests --dir <installation_directory>", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - effect: Allow action: - iam:GetUser - iam:GetUserPolicy - iam:ListAccessKeys resource: \"*\"", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - effect: Allow action: - s3:CreateBucket - s3:DeleteBucket resource: \"*\" secretRef: name: <component_secret> namespace: <component_namespace>", "apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: aws_access_key_id: <base64_encoded_aws_access_key_id> aws_secret_access_key: <base64_encoded_aws_secret_access_key>", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret)", "oc image extract USDCCO_IMAGE --file=\"/usr/bin/ccoctl\" -a ~/.pull-secret", "chmod 775 ccoctl", "./ccoctl.rhel9", "OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: alibabacloud Manage credentials objects for alibaba cloud aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage 
credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use \"ccoctl [command] --help\" for more information about a command.", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3", "ccoctl aws create-all --name=<name> \\ 1 --region=<aws_region> \\ 2 --credentials-requests-dir=<path_to_credentials_requests_directory> \\ 3 --output-dir=<path_to_ccoctl_output_dir> \\ 4 --create-private-s3-bucket 5", "ls <path_to_ccoctl_output_dir>/manifests", "cluster-authentication-02-config.yaml openshift-cloud-credential-operator-cloud-credential-operator-iam-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capa-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-ebs-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-aws-cloud-credentials-credentials.yaml", "ccoctl aws create-key-pair", "2021/04/13 11:01:02 Generating RSA keypair 2021/04/13 11:01:03 Writing private key to /<path_to_ccoctl_output_dir>/serviceaccount-signer.private 2021/04/13 11:01:03 Writing public key to /<path_to_ccoctl_output_dir>/serviceaccount-signer.public 2021/04/13 11:01:03 Copying signing key for use by installer", "ccoctl aws create-identity-provider --name=<name> \\ 1 --region=<aws_region> \\ 2 --public-key-file=<path_to_ccoctl_output_dir>/serviceaccount-signer.public 3", "2021/04/13 11:16:09 Bucket <name>-oidc created 2021/04/13 11:16:10 OpenID Connect discovery document in the S3 bucket <name>-oidc at .well-known/openid-configuration updated 2021/04/13 11:16:10 Reading public key 2021/04/13 11:16:10 JSON web key set (JWKS) in the S3 bucket <name>-oidc at keys.json updated 2021/04/13 11:16:18 Identity Provider created with ARN: arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3", "ccoctl aws create-iam-roles --name=<name> --region=<aws_region> --credentials-requests-dir=<path_to_credentials_requests_directory> --identity-provider-arn=arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com", "ls <path_to_ccoctl_output_dir>/manifests", "cluster-authentication-02-config.yaml openshift-cloud-credential-operator-cloud-credential-operator-iam-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capa-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-ebs-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-aws-cloud-credentials-credentials.yaml", "apiVersion: v1 baseDomain: example.com credentialsMode: Manual", "openshift-install create manifests --dir 
<installation_directory>", "cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/", "cp -a /<path_to_ccoctl_output_dir>/tls .", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "cat <installation_directory>/auth/kubeadmin-password", "oc get routes -n openshift-console | grep 'console-openshift'", "console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None", "oc annotate --overwrite storageclass gp3-csi storageclass.kubernetes.io/is-default-class=false oc annotate --overwrite storageclass gp2-csi storageclass.kubernetes.io/is-default-class=true" ]
https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.14/html/installing_on_aws/installing-aws-outposts-remote-workers
3.9. Creating a Virtual Machine
3.9. Creating a Virtual Machine This Ruby example creates a virtual machine. The example passes the virtual machine attributes as a hash of symbols, with nested hashes as their values. A more verbose alternative is to use the constructors of the corresponding objects directly. See Creating a Virtual Machine Instance with Attributes for more information. # Get the reference to the "vms" service: vms_service = connection.system_service.vms_service # Use the "add" method to create a new virtual machine: vms_service.add( OvirtSDK4::Vm.new( name: 'myvm', cluster: { name: 'mycluster' }, template: { name: 'Blank' } ) ) After creating a virtual machine, it is recommended that you poll the virtual machine's status to ensure that all the disks have been created. For more information, see http://www.rubydoc.info/gems/ovirt-engine-sdk/OvirtSDK4/VmsService:add.
[ "Get the reference to the \"vms\" service: vms_service = connection.system_service.vms_service Use the \"add\" method to create a new virtual machine: vms_service.add( OvirtSDK4::Vm.new( name: 'myvm', cluster: { name: 'mycluster' }, template: { name: 'Blank' } ) )" ]
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/ruby_sdk_guide/creating_a_virtual_machine
7.142. openhpi32
7.142. openhpi32 7.142.1. RHBA-2015:1449 - openhpi32 bug fix and enhancement update Updated openhpi32 packages that fix several bugs and add various enhancements are now available for Red Hat Enterprise Linux 6. OpenHPI is an open source project created with the intent of providing an implementation of the SA Forum's Hardware Platform Interface (HPI). HPI provides an abstracted interface for managing computer hardware, typically for chassis- and rack-based servers. HPI includes resource modeling; access to and control over sensor, control, watchdog, and inventory data associated with resources; abstracted System Event Log interfaces; hardware events and alerts; and a managed hot swap interface. Note The openhpi32 packages have been upgraded to upstream version 3.4.0, which provides a number of bug fixes and enhancements over the previous version. (BZ# 1127907 ) Bug Fixes BZ# 1127907 Encryption of the configuration file is now allowed, so authentication credentials for hardware management are no longer available in clear text on the system. Support for IPv6 has been fixed in the Onboard Administrator (OA) SOAP plug-in. The uid_map file is no longer created as world-writable. BZ# 1069015 Prior to this update, a data race condition was present in the Intelligent Platform Management Interface (IPMI) plug-in within the multi-threaded daemon. Consequently, the openhpid daemon could terminate unexpectedly with a segmentation fault. This bug has been fixed: the data structures are now updated in the correct order, and openhpid no longer crashes in this scenario. BZ# 1105679 Network timeouts were handled incorrectly in the openhpid daemon. As a consequence, network connections could fail when external plug-ins were used. With this update, handling of network socket timeouts has been improved in openhpid, and the described problem no longer occurs. Users of openhpi32 are advised to upgrade to these updated packages, which fix these bugs and add these enhancements.
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.7_technical_notes/package-openhpi32
3.6. Configuring Cluster Members
3.6. Configuring Cluster Members Configuring cluster members consists of initially configuring nodes in a newly configured cluster, adding members, and deleting members. The following sections provide procedures for initial configuration of nodes, adding nodes, and deleting nodes: Section 3.6.1, "Initially Configuring Members" Section 3.6.2, "Adding a Member to a Running Cluster" Section 3.6.3, "Deleting a Member from a Cluster" 3.6.1. Initially Configuring Members Creating a cluster consists of selecting a set of nodes (or members) to be part of the cluster. Once you have completed the initial step of creating a cluster and creating fence devices, you need to configure cluster nodes. To initially configure cluster nodes after creating a new cluster, follow the steps in this section. The starting point of the procedure is at the cluster-specific page that you navigate to from Choose a cluster to administer displayed on the cluster tab. At the detailed menu for the cluster (below the clusters menu), click Nodes . Clicking Nodes causes the display of an Add a Node element and a Configure element with a list of the nodes already configured in the cluster. Click a link for a node at either the list in the center of the page or in the list in the detailed menu under the clusters menu. Clicking a link for a node causes a page to be displayed for that link showing how that node is configured. At the bottom of the page, under Main Fencing Method , click Add a fence device to this level . Select a fence device and provide parameters for the fence device (for example port number). Note You can choose from an existing fence device or create a new fence device. Click Update main fence properties and wait for the change to take effect.
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/cluster_administration/s1-config-member-conga-CA
Chapter 310. SMPP Component
Chapter 310. SMPP Component Available as of Camel version 2.2 This component provides access to an SMSC (Short Message Service Center) over the SMPP protocol to send and receive SMS. The JSMPP library is used for the protocol implementation. The Camel component currently operates as an ESME (External Short Messaging Entity) and not as an SMSC itself. Starting with*Camel 2.9* you are also able to execute ReplaceSm, QuerySm, SubmitMulti, CancelSm and DataSm. Maven users will need to add the following dependency to their pom.xml for this component: <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-smpp</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency> 310.1. SMS limitations SMS is neither reliable or secure. Users who require reliable and secure delivery may want to consider using the XMPP or SIP components instead, combined with a smartphone app supporting the chosen protocol. Reliability: although the SMPP standard offers a range of feedback mechanisms to indicate errors, non-delivery and confirmation of delivery it is not uncommon for mobile networks to hide or simulate these responses. For example, some networks automatically send a delivery confirmation for every message even if the destination number is invalid or not switched on. Some networks silently drop messages if they think they are spam. Spam detection rules in the network may be very crude, sometimes more than 100 messages per day from a single sender may be considered spam. Security: there is basic encryption for the last hop from the radio tower down to the recipient handset. SMS messages are not encrypted or authenticated in any other part of the network. Some operators allow staff in retail outlets or call centres to browse through the SMS message histories of their customers. Message sender identity can be easily forged. Regulators and even the mobile telephone industry itself has cautioned against the use of SMS in two-factor authentication schemes and other purposes where security is important. While the Camel component makes it as easy as possible to send messages to the SMS network, it can not offer an easy solution to these problems. 310.2. Data coding, alphabet and international character sets Data coding and alphabet can be specified on a per-message basis. Default values can be specified for the endpoint. It is important to understand the relationship between these options and the way the component acts when more than one value is set. Data coding is an 8 bit field in the SMPP wire format. Alphabet corresponds to bits 0-3 of the data coding field. For some types of message, where a message class is used (by setting bit 5 of the data coding field), the lower two bits of the data coding field are not interpreted as alphabet and only bits 2 and 3 impact the alphabet. Furthermore, current version of the JSMPP library only seems to support bits 2 and 3, assuming that bits 0 and 1 are used for message class. This is why the Alphabet class in JSMPP doesn't support the value 3 (binary 0011) which indicates ISO-8859-1. Although JSMPP provides a representation of the message class parameter, the Camel component doesn't currently provide a way to set it other than manually setting the corresponding bits in the data coding field. 
When setting the data coding field in the outgoing message, the Camel component considers the following values and uses the first one it can find: the data coding specified in a header the alphabet specified in a header the data coding specified in the endpoint configuration (URI parameter) Older versions of Camel had bugs in support for international character sets. This feature only worked when a single encoding was used for all messages and was troublesome when users wanted to change it on a per-message basis. Users who require this to work should ensure their version of Camel includes the fix for JIRA Issues Macro: com.atlassian.sal.api.net.ResponseStatusException: Unexpected response received. Status code: 404 . In addition to trying to send the data coding value to the SMSC, the Camel component also tries to analyze the message body, convert it to a Java String (Unicode) and convert that to a byte array in the corresponding alphabet When deciding which alphabet to use in the byte array, the Camel SMPP component does not consider the data coding value (header or configuration), it only considers the specified alphabet (from either the header or endpoint parameter). If some characters in the String can't be represented in the chosen alphabet, they may be replaced by the question mark ( ? ) symbol. Users of the API may want to consider checking if their message body can be converted to ISO-8859-1 before passing it to the component and if not, setting the alphabet header to request UCS-2 encoding. If the alphabet and data coding options are not specified at all then the component may try to detect the required encoding and set the data coding for you. The list of alphabet codes are specified in the SMPP specification v3.4, section 5.2.19. One notable limitation of the SMPP specification is that there is no alphabet code for explicitly requesting use of the GSM 3.38 (7 bit) character set. Choosing the value 0 for the alphabet selects the SMSC default alphabet, this usually means GSM 3.38 but it is not guaranteed. The SMPP gateway Nexmo actually allows the default to be mapped to any other character set with a control panel option . It is suggested that users check with their SMSC operator to confirm exactly which character set is being used as the default. 310.3. Message splitting and throttling After transforming a message body from a String to a byte array, the Camel component is also responsible for splitting the message into parts (within the 140 byte SMS size limit) before passing it to JSMPP. This is completed automatically. If the GSM 3.38 alphabet is used, the component will pack up to 160 characters into the 140 byte message body. If an 8 bit character set is used (e.g. ISO-8859-1 for western Europe) then 140 characters will be allowed within the 140 byte message body. If 16 bit UCS-2 encoding is used then just 70 characters fit into each 140 byte message. Some SMSC providers implement throttling rules. Each part of a message that has been split may be counted separately by the provider's throttling mechanism. The Camel Throttler component can be useful for throttling messages in the SMPP route before handing them to the SMSC. 310.4. URI format smpp://[username@]hostname[:port][?options] smpps://[username@]hostname[:port][?options] If no username is provided, then Camel will provide the default value smppclient . If no port number is provided, then Camel will provide the default value 2775 . 
Camel 2.3: If the protocol name is "smpps", camel-smpp with try to use SSLSocket to init a connection to the server. You can append query options to the URI in the following format, ?option=value&option=value&... 310.5. URI Options The SMPP component supports 2 options, which are listed below. Name Description Default Type configuration (advanced) To use the shared SmppConfiguration as configuration. SmppConfiguration resolveProperty Placeholders (advanced) Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true boolean The SMPP endpoint is configured using URI syntax: with the following path and query parameters: 310.5.1. Path Parameters (2 parameters): Name Description Default Type host Hostname for the SMSC server to use. localhost String port Port number for the SMSC server to use. 2775 Integer 310.5.2. Query Parameters (38 parameters): Name Description Default Type initialReconnectDelay (common) Defines the initial delay in milliseconds after the consumer/producer tries to reconnect to the SMSC, after the connection was lost. 5000 long maxReconnect (common) Defines the maximum number of attempts to reconnect to the SMSC, if SMSC returns a negative bind response 2147483647 int reconnectDelay (common) Defines the interval in milliseconds between the reconnect attempts, if the connection to the SMSC was lost and the was not succeed. 5000 long splittingPolicy (common) You can specify a policy for handling long messages: ALLOW - the default, long messages are split to 140 bytes per message TRUNCATE - long messages are split and only the first fragment will be sent to the SMSC. Some carriers drop subsequent fragments so this reduces load on the SMPP connection sending parts of a message that will never be delivered. REJECT - if a message would need to be split, it is rejected with an SMPP NegativeResponseException and the reason code signifying the message is too long. ALLOW SmppSplittingPolicy systemType (common) This parameter is used to categorize the type of ESME (External Short Message Entity) that is binding to the SMSC (max. 13 characters). cp String addressRange (consumer) You can specify the address range for the SmppConsumer as defined in section 5.2.7 of the SMPP 3.4 specification. The SmppConsumer will receive messages only from SMSC's which target an address (MSISDN or IP address) within this range. String bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean exceptionHandler (consumer) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer) Sets the exchange pattern when the consumer creates an exchange. ExchangePattern destAddr (producer) Defines the destination SME address. For mobile terminated messages, this is the directory number of the recipient MS. Only for SubmitSm, SubmitMulti, CancelSm and DataSm. 
1717 String destAddrNpi (producer) Defines the type of number (TON) to be used in the SME destination address parameters. Only for SubmitSm, SubmitMulti, CancelSm and DataSm. The following NPI values are defined: 0: Unknown 1: ISDN (E163/E164) 2: Data (X.121) 3: Telex (F.69) 6: Land Mobile (E.212) 8: National 9: Private 10: ERMES 13: Internet (IP) 18: WAP Client Id (to be defined by WAP Forum) byte destAddrTon (producer) Defines the type of number (TON) to be used in the SME destination address parameters. Only for SubmitSm, SubmitMulti, CancelSm and DataSm. The following TON values are defined: 0: Unknown 1: International 2: National 3: Network Specific 4: Subscriber Number 5: Alphanumeric 6: Abbreviated byte lazySessionCreation (producer) Sessions can be lazily created to avoid exceptions, if the SMSC is not available when the Camel producer is started. Camel will check the in message headers 'CamelSmppSystemId' and 'CamelSmppPassword' of the first exchange. If they are present, Camel will use these data to connect to the SMSC. false boolean numberingPlanIndicator (producer) Defines the numeric plan indicator (NPI) to be used in the SME. The following NPI values are defined: 0: Unknown 1: ISDN (E163/E164) 2: Data (X.121) 3: Telex (F.69) 6: Land Mobile (E.212) 8: National 9: Private 10: ERMES 13: Internet (IP) 18: WAP Client Id (to be defined by WAP Forum) byte priorityFlag (producer) Allows the originating SME to assign a priority level to the short message. Only for SubmitSm and SubmitMulti. Four Priority Levels are supported: 0: Level 0 (lowest) priority 1: Level 1 priority 2: Level 2 priority 3: Level 3 (highest) priority byte protocolId (producer) The protocol id byte registeredDelivery (producer) Is used to request an SMSC delivery receipt and/or SME originated acknowledgements. The following values are defined: 0: No SMSC delivery receipt requested. 1: SMSC delivery receipt requested where final delivery outcome is success or failure. 2: SMSC delivery receipt requested where the final delivery outcome is delivery failure. byte replaceIfPresentFlag (producer) Used to request the SMSC to replace a previously submitted message, that is still pending delivery. The SMSC will replace an existing message provided that the source address, destination address and service type match the same fields in the new message. The following replace if present flag values are defined: 0: Don't replace 1: Replace byte serviceType (producer) The service type parameter can be used to indicate the SMS Application service associated with the message. The following generic service_types are defined: CMT: Cellular Messaging CPT: Cellular Paging VMN: Voice Mail Notification VMA: Voice Mail Alerting WAP: Wireless Application Protocol USSD: Unstructured Supplementary Services Data CMT String sourceAddr (producer) Defines the address of SME (Short Message Entity) which originated this message. 1616 String sourceAddrNpi (producer) Defines the numeric plan indicator (NPI) to be used in the SME originator address parameters. The following NPI values are defined: 0: Unknown 1: ISDN (E163/E164) 2: Data (X.121) 3: Telex (F.69) 6: Land Mobile (E.212) 8: National 9: Private 10: ERMES 13: Internet (IP) 18: WAP Client Id (to be defined by WAP Forum) byte sourceAddrTon (producer) Defines the type of number (TON) to be used in the SME originator address parameters. 
The following TON values are defined: 0: Unknown 1: International 2: National 3: Network Specific 4: Subscriber Number 5: Alphanumeric 6: Abbreviated byte typeOfNumber (producer) Defines the type of number (TON) to be used in the SME. The following TON values are defined: 0: Unknown 1: International 2: National 3: Network Specific 4: Subscriber Number 5: Alphanumeric 6: Abbreviated byte enquireLinkTimer (advanced) Defines the interval in milliseconds between the confidence checks. The confidence check is used to test the communication path between an ESME and an SMSC. 5000 Integer sessionStateListener (advanced) You can refer to a org.jsmpp.session.SessionStateListener in the Registry to receive callbacks when the session state changed. SessionStateListener synchronous (advanced) Sets whether synchronous processing should be strictly used, or Camel is allowed to use asynchronous processing (if supported). false boolean transactionTimer (advanced) Defines the maximum period of inactivity allowed after a transaction, after which an SMPP entity may assume that the session is no longer active. This timer may be active on either communicating SMPP entity (i.e. SMSC or ESME). 10000 Integer alphabet (codec) Defines encoding of data according the SMPP 3.4 specification, section 5.2.19. 0: SMSC Default Alphabet 4: 8 bit Alphabet 8: UCS2 Alphabet byte dataCoding (codec) Defines the data coding according the SMPP 3.4 specification, section 5.2.19. Example data encodings are: 0: SMSC Default Alphabet 3: Latin 1 (ISO-8859-1) 4: Octet unspecified (8-bit binary) 8: UCS2 (ISO/IEC-10646) 13: Extended Kanji JIS(X 0212-1990) byte encoding (codec) Defines the encoding scheme of the short message user data. Only for SubmitSm, ReplaceSm and SubmitMulti. ISO-8859-1 String httpProxyHost (proxy) If you need to tunnel SMPP through a HTTP proxy, set this attribute to the hostname or ip address of your HTTP proxy. String httpProxyPassword (proxy) If your HTTP proxy requires basic authentication, set this attribute to the password required for your HTTP proxy. String httpProxyPort (proxy) If you need to tunnel SMPP through a HTTP proxy, set this attribute to the port of your HTTP proxy. 3128 Integer httpProxyUsername (proxy) If your HTTP proxy requires basic authentication, set this attribute to the username required for your HTTP proxy. String proxyHeaders (proxy) These headers will be passed to the proxy server while establishing the connection. Map password (security) The password for connecting to SMSC server. String systemId (security) The system id (username) for connecting to SMSC server. smppclient String usingSSL (security) Whether using SSL with the smpps protocol false boolean 310.6. Spring Boot Auto-Configuration The component supports 38 options, which are listed below. Name Description Default Type camel.component.smpp.configuration.address-range You can specify the address range for the SmppConsumer as defined in section 5.2.7 of the SMPP 3.4 specification. The SmppConsumer will receive messages only from SMSC's which target an address (MSISDN or IP address) within this range. String camel.component.smpp.configuration.alphabet Defines encoding of data according the SMPP 3.4 specification, section 5.2.19. 0: SMSC Default Alphabet 4: 8 bit Alphabet 8: UCS2 Alphabet Byte camel.component.smpp.configuration.data-coding Defines the data coding according the SMPP 3.4 specification, section 5.2.19. 
Example data encodings are: 0: SMSC Default Alphabet 3: Latin 1 (ISO-8859-1) 4: Octet unspecified (8-bit binary) 8: UCS2 (ISO/IEC-10646) 13: Extended Kanji JIS(X 0212-1990) Byte camel.component.smpp.configuration.dest-addr Defines the destination SME address. For mobile terminated messages, this is the directory number of the recipient MS. Only for SubmitSm, SubmitMulti, CancelSm and DataSm. 1717 String camel.component.smpp.configuration.dest-addr-npi Defines the type of number (TON) to be used in the SME destination address parameters. Only for SubmitSm, SubmitMulti, CancelSm and DataSm. The following NPI values are defined: 0: Unknown 1: ISDN (E163/E164) 2: Data (X.121) 3: Telex (F.69) 6: Land Mobile (E.212) 8: National 9: Private 10: ERMES 13: Internet (IP) 18: WAP Client Id (to be defined by WAP Forum) Byte camel.component.smpp.configuration.dest-addr-ton Defines the type of number (TON) to be used in the SME destination address parameters. Only for SubmitSm, SubmitMulti, CancelSm and DataSm. The following TON values are defined: 0: Unknown 1: International 2: National 3: Network Specific 4: Subscriber Number 5: Alphanumeric 6: Abbreviated Byte camel.component.smpp.configuration.encoding Defines the encoding scheme of the short message user data. Only for SubmitSm, ReplaceSm and SubmitMulti. ISO-8859-1 String camel.component.smpp.configuration.enquire-link-timer Defines the interval in milliseconds between the confidence checks. The confidence check is used to test the communication path between an ESME and an SMSC. 5000 Integer camel.component.smpp.configuration.host Hostname for the SMSC server to use. localhost String camel.component.smpp.configuration.http-proxy-host If you need to tunnel SMPP through a HTTP proxy, set this attribute to the hostname or ip address of your HTTP proxy. String camel.component.smpp.configuration.http-proxy-password If your HTTP proxy requires basic authentication, set this attribute to the password required for your HTTP proxy. String camel.component.smpp.configuration.http-proxy-port If you need to tunnel SMPP through a HTTP proxy, set this attribute to the port of your HTTP proxy. 3128 Integer camel.component.smpp.configuration.http-proxy-username If your HTTP proxy requires basic authentication, set this attribute to the username required for your HTTP proxy. String camel.component.smpp.configuration.initial-reconnect-delay Defines the initial delay in milliseconds after the consumer/producer tries to reconnect to the SMSC, after the connection was lost. 5000 Long camel.component.smpp.configuration.lazy-session-creation Sessions can be lazily created to avoid exceptions, if the SMSC is not available when the Camel producer is started. Camel will check the in message headers 'CamelSmppSystemId' and 'CamelSmppPassword' of the first exchange. If they are present, Camel will use these data to connect to the SMSC. false Boolean camel.component.smpp.configuration.max-reconnect Defines the maximum number of attempts to reconnect to the SMSC, if SMSC returns a negative bind response 2147483647 Integer camel.component.smpp.configuration.numbering-plan-indicator Defines the numeric plan indicator (NPI) to be used in the SME. The following NPI values are defined: 0: Unknown 1: ISDN (E163/E164) 2: Data (X.121) 3: Telex (F.69) 6: Land Mobile (E.212) 8: National 9: Private 10: ERMES 13: Internet (IP) 18: WAP Client Id (to be defined by WAP Forum) Byte camel.component.smpp.configuration.password The password for connecting to SMSC server. 
String camel.component.smpp.configuration.port Port number for the SMSC server to use. 2775 Integer camel.component.smpp.configuration.priority-flag Allows the originating SME to assign a priority level to the short message. Only for SubmitSm and SubmitMulti. Four Priority Levels are supported: 0: Level 0 (lowest) priority 1: Level 1 priority 2: Level 2 priority 3: Level 3 (highest) priority Byte camel.component.smpp.configuration.protocol-id The protocol id Byte camel.component.smpp.configuration.proxy-headers These headers will be passed to the proxy server while establishing the connection. Map camel.component.smpp.configuration.reconnect-delay Defines the interval in milliseconds between the reconnect attempts, if the connection to the SMSC was lost and the was not succeed. 5000 Long camel.component.smpp.configuration.registered-delivery Is used to request an SMSC delivery receipt and/or SME originated acknowledgements. The following values are defined: 0: No SMSC delivery receipt requested. 1: SMSC delivery receipt requested where final delivery outcome is success or failure. 2: SMSC delivery receipt requested where the final delivery outcome is delivery failure. Byte camel.component.smpp.configuration.replace-if-present-flag Used to request the SMSC to replace a previously submitted message, that is still pending delivery. The SMSC will replace an existing message provided that the source address, destination address and service type match the same fields in the new message. The following replace if present flag values are defined: 0: Don't replace 1: Replace Byte camel.component.smpp.configuration.service-type The service type parameter can be used to indicate the SMS Application service associated with the message. The following generic service_types are defined: CMT: Cellular Messaging CPT: Cellular Paging VMN: Voice Mail Notification VMA: Voice Mail Alerting WAP: Wireless Application Protocol USSD: Unstructured Supplementary Services Data CMT String camel.component.smpp.configuration.session-state-listener You can refer to a org.jsmpp.session.SessionStateListener in the Registry to receive callbacks when the session state changed. SessionStateListener camel.component.smpp.configuration.source-addr Defines the address of SME (Short Message Entity) which originated this message. 1616 String camel.component.smpp.configuration.source-addr-npi Defines the numeric plan indicator (NPI) to be used in the SME originator address parameters. The following NPI values are defined: 0: Unknown 1: ISDN (E163/E164) 2: Data (X.121) 3: Telex (F.69) 6: Land Mobile (E.212) 8: National 9: Private 10: ERMES 13: Internet (IP) 18: WAP Client Id (to be defined by WAP Forum) Byte camel.component.smpp.configuration.source-addr-ton Defines the type of number (TON) to be used in the SME originator address parameters. The following TON values are defined: 0: Unknown 1: International 2: National 3: Network Specific 4: Subscriber Number 5: Alphanumeric 6: Abbreviated Byte camel.component.smpp.configuration.splitting-policy You can specify a policy for handling long messages: ALLOW - the default, long messages are split to 140 bytes per message TRUNCATE - long messages are split and only the first fragment will be sent to the SMSC. Some carriers drop subsequent fragments so this reduces load on the SMPP connection sending parts of a message that will never be delivered. 
REJECT - if a message would need to be split, it is rejected with an SMPP NegativeResponseException and the reason code signifying the message is too long. SmppSplittingPolicy camel.component.smpp.configuration.system-id The system id (username) for connecting to SMSC server. smppclient String camel.component.smpp.configuration.system-type This parameter is used to categorize the type of ESME (External Short Message Entity) that is binding to the SMSC (max. 13 characters). cp String camel.component.smpp.configuration.transaction-timer Defines the maximum period of inactivity allowed after a transaction, after which an SMPP entity may assume that the session is no longer active. This timer may be active on either communicating SMPP entity (i.e. SMSC or ESME). 10000 Integer camel.component.smpp.configuration.type-of-number Defines the type of number (TON) to be used in the SME. The following TON values are defined: 0: Unknown 1: International 2: National 3: Network Specific 4: Subscriber Number 5: Alphanumeric 6: Abbreviated Byte camel.component.smpp.configuration.using-s-s-l Whether using SSL with the smpps protocol false Boolean camel.component.smpp.enabled Enable smpp component true Boolean camel.component.smpp.resolve-property-placeholders Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true Boolean You can have as many of these options as you like. smpp://smppclient@localhost:2775?password=password&enquireLinkTimer=3000&transactionTimer=5000&systemType=consumer 310.7. Producer Message Headers The following message headers can be used to affect the behavior of the SMPP producer Header Type Description CamelSmppDestAddr List / String only for SubmitSm, SubmitMulti, CancelSm and DataSm Defines the destination SME address(es). For mobile terminated messages, this is the directory number of the recipient MS. Is must be a List<String> for SubmitMulti and a String otherwise. CamelSmppDestAddrTon Byte only for SubmitSm, SubmitMulti, CancelSm and DataSm Defines the type of number (TON) to be used in the SME destination address parameters. Use the sourceAddrTon URI option values defined above. CamelSmppDestAddrNpi Byte only for SubmitSm, SubmitMulti, CancelSm and DataSm Defines the numeric plan indicator (NPI) to be used in the SME destination address parameters. Use the URI option sourceAddrNpi values defined above. CamelSmppSourceAddr String Defines the address of SME (Short Message Entity) which originated this message. CamelSmppSourceAddrTon Byte Defines the type of number (TON) to be used in the SME originator address parameters. Use the sourceAddrTon URI option values defined above. CamelSmppSourceAddrNpi Byte Defines the numeric plan indicator (NPI) to be used in the SME originator address parameters. Use the URI option sourceAddrNpi values defined above. CamelSmppServiceType String The service type parameter can be used to indicate the SMS Application service associated with the message. Use the URI option serviceType settings above. CamelSmppRegisteredDelivery Byte only for SubmitSm, ReplaceSm, SubmitMulti and DataSm Is used to request an SMSC delivery receipt and/or SME originated acknowledgements. Use the URI option registeredDelivery settings above. CamelSmppPriorityFlag Byte only for SubmitSm and SubmitMulti Allows the originating SME to assign a priority level to the short message. Use the URI option priorityFlag settings above. 
CamelSmppScheduleDeliveryTime Date only for SubmitSm, SubmitMulti and ReplaceSm This parameter specifies the scheduled time at which the message delivery should be first attempted. It defines either the absolute date and time or relative time from the current SMSC time at which delivery of this message will be attempted by the SMSC. It can be specified in either absolute time format or relative time format. The encoding of a time format is specified in chapter 7.1.1. in the smpp specification v3.4. CamelSmppValidityPeriod String / Date only for SubmitSm, SubmitMulti and ReplaceSm The validity period parameter indicates the SMSC expiration time, after which the message should be discarded if not delivered to the destination. If it's provided as Date , it's interpreted as absolute time. Camel 2.9.1 onwards: It can be defined in absolute time format or relative time format if you provide it as String as specified in chapter 7.1.1 in the smpp specification v3.4. CamelSmppReplaceIfPresentFlag Byte only for SubmitSm and SubmitMulti The replace if present flag parameter is used to request the SMSC to replace a previously submitted message, that is still pending delivery. The SMSC will replace an existing message provided that the source address, destination address and service type match the same fields in the new message. The following values are defined: 0 , Don't replace and 1 , Replace CamelSmppAlphabet / CamelSmppDataCoding Byte Camel 2.5 For SubmitSm, SubmitMulti and ReplaceSm (Prior to Camel 2.9 use CamelSmppDataCoding instead of CamelSmppAlphabet .) The data coding according to the SMPP 3.4 specification, section 5.2.19. Use the URI option alphabet settings above. CamelSmppOptionalParameters Map<String, String> Deprecated and will be removed in Camel 2.13.0/3.0.0 Camel 2.10.5 and 2.11.1 onwards and only for SubmitSm, SubmitMulti and DataSm The optional parameters send back by the SMSC. CamelSmppOptionalParameter Map<Short, Object> Camel 2.10.7 and 2.11.2 onwards and only for SubmitSm, SubmitMulti and DataSm The optional parameter which are send to the SMSC. The value is converted in the following way: String org.jsmpp.bean.OptionalParameter.COctetString , byte[] org.jsmpp.bean.OptionalParameter.OctetString , Byte org.jsmpp.bean.OptionalParameter.Byte , Integer org.jsmpp.bean.OptionalParameter.Int , Short org.jsmpp.bean.OptionalParameter.Short , null org.jsmpp.bean.OptionalParameter.Null CamelSmppEncoding String Camel 2.14.1 and Camel 2.15.0 onwards and *only for SubmitSm, SubmitMulti and DataSm*. Specifies the encoding (character set name) of the bytes in the message body. If the message body is a string then this is not relevant because Java Strings are always Unicode. If the body is a byte array then this header can be used to indicate that it is ISO-8859-1 or some other value. Default value is specified by the endpoint configuration parameter encoding CamelSmppSplittingPolicy String Camel 2.14.1 and Camel 2.15.0 onwards and *only for SubmitSm, SubmitMulti and DataSm*. Specifies the policy for message splitting for this exchange. Possible values are described in the endpoint configuration parameter splittingPolicy The following message headers are used by the SMPP producer to set the response from the SMSC in the message header Header Type Description CamelSmppId List<String> / String The id to identify the submitted short message(s) for later use. From Camel 2.9.0 : In case of a ReplaceSm, QuerySm, CancelSm and DataSm this header vaule is a String . 
In case of a SubmitSm or SubmitMultiSm this header value is a List<String> . CamelSmppSentMessageCount Integer From Camel 2.9 onwards only for SubmitSm and SubmitMultiSm The total number of messages which have been sent. CamelSmppError Map<String, List<Map<String, Object>>> From Camel 2.9 onwards only for SubmitMultiSm The errors which occurred by sending the short message(s) in the form Map<String, List<Map<String, Object>>> (messageID : (destAddr : address, error : errorCode)). CamelSmppOptionalParameters Map<String, String> Deprecated and will be removed in Camel 2.13.0/3.0.0 From Camel 2.11.1 onwards only for DataSm The optional parameters which are returned from the SMSC by sending the message. CamelSmppOptionalParameter Map<Short, Object> From Camel 2.10.7, 2.11.2 onwards only for DataSm The optional parameters which are returned from the SMSC by sending the message. The key is the Short code for the optional parameter. The value is converted in the following way: org.jsmpp.bean.OptionalParameter.COctetString String , org.jsmpp.bean.OptionalParameter.OctetString byte[] , org.jsmpp.bean.OptionalParameter.Byte Byte , org.jsmpp.bean.OptionalParameter.Int Integer , org.jsmpp.bean.OptionalParameter.Short Short , org.jsmpp.bean.OptionalParameter.Null null 310.8. Consumer Message Headers The following message headers are used by the SMPP consumer to set the request data from the SMSC in the message header Header Type Description CamelSmppSequenceNumber Integer only for AlertNotification, DeliverSm and DataSm A sequence number allows a response PDU to be correlated with a request PDU. The associated SMPP response PDU must preserve this field. CamelSmppCommandId Integer only for AlertNotification, DeliverSm and DataSm The command id field identifies the particular SMPP PDU. For the complete list of defined values see chapter 5.1.2.1 in the smpp specification v3.4. CamelSmppSourceAddr String only for AlertNotification, DeliverSm and DataSm Defines the address of SME (Short Message Entity) which originated this message. CamelSmppSourceAddrNpi Byte only for AlertNotification and DataSm Defines the numeric plan indicator (NPI) to be used in the SME originator address parameters. Use the URI option sourceAddrNpi values defined above. CamelSmppSourceAddrTon Byte only for AlertNotification and DataSm Defines the type of number (TON) to be used in the SME originator address parameters. Use the sourceAddrTon URI option values defined above. CamelSmppEsmeAddr String only for AlertNotification Defines the destination ESME address. For mobile terminated messages, this is the directory number of the recipient MS. CamelSmppEsmeAddrNpi Byte only for AlertNotification Defines the numeric plan indicator (NPI) to be used in the ESME originator address parameters. Use the URI option sourceAddrNpi values defined above. CamelSmppEsmeAddrTon Byte only for AlertNotification Defines the type of number (TON) to be used in the ESME originator address parameters. Use the sourceAddrTon URI option values defined above. CamelSmppId String only for smsc DeliveryReceipt and DataSm The message ID allocated to the message by the SMSC when originally submitted. CamelSmppDelivered Integer only for smsc DeliveryReceipt Number of short messages delivered. This is only relevant where the original message was submitted to a distribution list. The value is padded with leading zeros if necessary. CamelSmppDoneDate Date only for smsc DeliveryReceipt The time and date at which the short message reached its final state.
The format is as follows: YYMMDDhhmm. CamelSmppStatus DeliveryReceiptState only for smsc DeliveryReceipt: The final status of the message. The following values are defined: DELIVRD : Message is delivered to destination, EXPIRED : Message validity period has expired, DELETED : Message has been deleted, UNDELIV : Message is undeliverable, ACCEPTD : Message is in accepted state (i.e. has been manually read on behalf of the subscriber by customer service), UNKNOWN : Message is in invalid state, REJECTD : Message is in a rejected state CamelSmppCommandStatus Integer only for DataSm The Command status of the message. CamelSmppError String only for smsc DeliveryReceipt Where appropriate this may hold a Network specific error code or an SMSC error code for the attempted delivery of the message. These errors are Network or SMSC specific and are not included here. CamelSmppSubmitDate Date only for smsc DeliveryReceipt The time and date at which the short message was submitted. In the case of a message which has been replaced, this is the date that the original message was replaced. The format is as follows: YYMMDDhhmm. CamelSmppSubmitted Integer only for smsc DeliveryReceipt Number of short messages originally submitted. This is only relevant when the original message was submitted to a distribution list.The value is padded with leading zeros if necessary. CamelSmppDestAddr String only for DeliverSm and DataSm: Defines the destination SME address. For mobile terminated messages, this is the directory number of the recipient MS. CamelSmppScheduleDeliveryTime String only for DeliverSm: This parameter specifies the scheduled time at which the message delivery should be first attempted. It defines either the absolute date and time or relative time from the current SMSC time at which delivery of this message will be attempted by the SMSC. It can be specified in either absolute time format or relative time format. The encoding of a time format is specified in Section 7.1.1. in the smpp specification v3.4. CamelSmppValidityPeriod String only for DeliverSm The validity period parameter indicates the SMSC expiration time, after which the message should be discarded if not delivered to the destination. It can be defined in absolute time format or relative time format. The encoding of absolute and relative time format is specified in Section 7.1.1 in the smpp specification v3.4. CamelSmppServiceType String only for DeliverSm and DataSm The service type parameter indicates the SMS Application service associated with the message. CamelSmppRegisteredDelivery Byte only for DataSm Is used to request an delivery receipt and/or SME originated acknowledgements. Same values as in Producer header list above. CamelSmppDestAddrNpi Byte only for DataSm Defines the numeric plan indicator (NPI) in the destination address parameters. Use the URI option sourceAddrNpi values defined above. CamelSmppDestAddrTon Byte only for DataSm Defines the type of number (TON) in the destination address parameters. Use the sourceAddrTon URI option values defined above. CamelSmppMessageType String Camel 2.6 onwards : Identifies the type of an incoming message: AlertNotification : an SMSC alert notification, DataSm : an SMSC data short message, DeliveryReceipt : an SMSC delivery receipt, DeliverSm : an SMSC deliver short message CamelSmppOptionalParameters Map<String, Object> Deprecated and will be removed in Camel 2.13.0/3.0.0 Camel 2.10.5 onwards and only for DeliverSm The optional parameters send back by the SMSC. 
CamelSmppOptionalParameter Map<Short, Object> Camel 2.10.7, 2.11.2 onwards and only for DeliverSm The optional parameters send back by the SMSC. The key is the Short code for the optional parameter. The value is converted in the following way: org.jsmpp.bean.OptionalParameter.COctetString String , org.jsmpp.bean.OptionalParameter.OctetString byte[] , org.jsmpp.bean.OptionalParameter.Byte Byte , org.jsmpp.bean.OptionalParameter.Int Integer , org.jsmpp.bean.OptionalParameter.Short Short , org.jsmpp.bean.OptionalParameter.Null null Tip JSMPP library See the documentation of the JSMPP Library for more details about the underlying library. 310.9. Exception handling This component supports the general Camel exception handling capabilities When an error occurs sending a message with SubmitSm (the default action), the org.apache.camel.component.smpp.SmppException is thrown with a nested exception, org.jsmpp.extra.NegativeResponseException. Call NegativeResponseException.getCommandStatus() to obtain the exact SMPP negative response code, the values are explained in the SMPP specification 3.4, section 5.1.3. Camel 2.8 onwards : When the SMPP consumer receives a DeliverSm or DataSm short message and the processing of these messages fails, you can also throw a ProcessRequestException instead of handle the failure. In this case, this exception is forwarded to the underlying JSMPP library which will return the included error code to the SMSC. This feature is useful to e.g. instruct the SMSC to resend the short message at a later time. This could be done with the following lines of code: from("smpp://smppclient@localhost:2775?password=password&enquireLinkTimer=3000&transactionTimer=5000&systemType=consumer") .doTry() .to("bean:dao?method=updateSmsState") .doCatch(Exception.class) .throwException(new ProcessRequestException("update of sms state failed", 100)) .end(); Please refer to the SMPP specification for the complete list of error codes and their meanings. 310.10. Samples A route which sends an SMS using the Java DSL: from("direct:start") .to("smpp://smppclient@localhost:2775? password=password&enquireLinkTimer=3000&transactionTimer=5000&systemType=producer"); A route which sends an SMS using the Spring XML DSL: <route> <from uri="direct:start"/> <to uri="smpp://smppclient@localhost:2775? password=password&amp;enquireLinkTimer=3000&amp;transactionTimer=5000&amp;systemType=producer"/> </route> A route which receives an SMS using the Java DSL: from("smpp://smppclient@localhost:2775?password=password&enquireLinkTimer=3000&transactionTimer=5000&systemType=consumer") .to("bean:foo"); A route which receives an SMS using the Spring XML DSL: <route> <from uri="smpp://smppclient@localhost:2775? password=password&amp;enquireLinkTimer=3000&amp;transactionTimer=5000&amp;systemType=consumer"/> <to uri="bean:foo"/> </route> Tip SMSC simulator If you need an SMSC simulator for your test, you can use the simulator provided by Logica . 310.11. Debug logging This component has log level DEBUG , which can be helpful in debugging problems. If you use log4j, you can add the following line to your configuration: log4j.logger.org.apache.camel.component.smpp=DEBUG 310.12. See Also Configuring Camel Component Endpoint Getting Started
[ "<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-smpp</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency>", "smpp://[username@]hostname[:port][?options] smpps://[username@]hostname[:port][?options]", "smpp:host:port", "smpp://smppclient@localhost:2775?password=password&enquireLinkTimer=3000&transactionTimer=5000&systemType=consumer", "from(\"smpp://smppclient@localhost:2775?password=password&enquireLinkTimer=3000&transactionTimer=5000&systemType=consumer\") .doTry() .to(\"bean:dao?method=updateSmsState\") .doCatch(Exception.class) .throwException(new ProcessRequestException(\"update of sms state failed\", 100)) .end();", "from(\"direct:start\") .to(\"smpp://smppclient@localhost:2775? password=password&enquireLinkTimer=3000&transactionTimer=5000&systemType=producer\");", "<route> <from uri=\"direct:start\"/> <to uri=\"smpp://smppclient@localhost:2775? password=password&amp;enquireLinkTimer=3000&amp;transactionTimer=5000&amp;systemType=producer\"/> </route>", "from(\"smpp://smppclient@localhost:2775?password=password&enquireLinkTimer=3000&transactionTimer=5000&systemType=consumer\") .to(\"bean:foo\");", "<route> <from uri=\"smpp://smppclient@localhost:2775? password=password&amp;enquireLinkTimer=3000&amp;transactionTimer=5000&amp;systemType=consumer\"/> <to uri=\"bean:foo\"/> </route>", "log4j.logger.org.apache.camel.component.smpp=DEBUG" ]
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/smpp-component
function::pn
function::pn Name function::pn - Returns the active probe name Synopsis pn:string() Arguments None Description This function returns the script-level probe point associated with a currently running probe handler, including wild-card expansion effects. Context: The current probe point.
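As an illustration only (this script is not part of the original reference entry, and the probed kernel function is an arbitrary example), pn() can be called inside any probe handler to print the expanded probe point:

probe kernel.function("vfs_read") {
    printf("handler running for: %s\n", pn())
    exit()
}

Running the script with stap -v prints the fully expanded probe point once and then exits.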
[ "pn:string()" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-pn
B.103.3. RHSA-2011:0369 - Moderate: wireshark security update
B.103.3. RHSA-2011:0369 - Moderate: wireshark security update Updated wireshark packages that fix multiple security issues are now available for Red Hat Enterprise Linux 6. The Red Hat Security Response Team has rated this update as having moderate security impact. Common Vulnerability Scoring System (CVSS) base scores, which give a detailed severity rating, are available for each vulnerability from the CVE link(s) associated with each description below. Wireshark is a program for monitoring network traffic. Wireshark was previously known as Ethereal. CVE-2011-0444 A heap-based buffer overflow flaw was found in the Wireshark MAC-LTE dissector. If Wireshark read a malformed packet off a network or opened a malicious dump file, it could crash or, possibly, execute arbitrary code as the user running Wireshark. CVE-2011-0713 A heap-based buffer overflow flaw was found in the way Wireshark processed signaling traces generated by the Gammu utility on Nokia DCT3 phones running in Netmonitor mode. If Wireshark opened a specially-crafted capture file, it could crash or, possibly, execute arbitrary code as the user running Wireshark. CVE-2011-0538 , CVE-2011-1139 , CVE-2011-1140 , CVE-2011-1141 Several denial of service flaws were found in Wireshark. Wireshark could crash or stop responding if it read a malformed packet off a network, or opened a malicious dump file. Users of Wireshark should upgrade to these updated packages, which contain Wireshark version 1.2.15, and resolve these issues. All running instances of Wireshark must be restarted for the update to take effect.
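As a hedged example (not part of the original advisory text), on a subscribed Red Hat Enterprise Linux 6 system the update would typically be applied with yum, after which all running Wireshark instances must be restarted:

yum update wireshark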
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.0_technical_notes/rhsa-2011-0369
Chapter 5. Managing images
Chapter 5. Managing images 5.1. Managing images overview With OpenShift Container Platform you can interact with images and set up image streams, depending on where the registries of the images are located, any authentication requirements around those registries, and how you want your builds and deployments to behave. 5.1.1. Images overview An image stream comprises any number of container images identified by tags. It presents a single virtual view of related images, similar to a container image repository. By watching an image stream, builds and deployments can receive notifications when new images are added or modified and react by performing a build or deployment, respectively. 5.2. Tagging images The following sections provide an overview and instructions for using image tags in the context of container images for working with OpenShift Container Platform image streams and their tags. 5.2.1. Image tags An image tag is a label applied to a container image in a repository that distinguishes a specific image from other images in an image stream. Typically, the tag represents a version number of some sort. For example, here :v3.11.59-2 is the tag: registry.access.redhat.com/openshift3/jenkins-2-rhel7:v3.11.59-2 You can add additional tags to an image. For example, an image might be assigned the tags :v3.11.59-2 and :latest . OpenShift Container Platform provides the oc tag command, which is similar to the docker tag command, but operates on image streams instead of directly on images. 5.2.2. Image tag conventions Images evolve over time and their tags reflect this. Generally, an image tag always points to the latest image built. If there is too much information embedded in a tag name, like v2.0.1-may-2019 , the tag points to just one revision of an image and is never updated. Using default image pruning options, such an image is never removed. In very large clusters, the schema of creating new tags for every revised image could eventually fill up the etcd datastore with excess tag metadata for images that are long outdated. If the tag is named v2.0 , image revisions are more likely. This results in longer tag history and, therefore, the image pruner is more likely to remove old and unused images. Although tag naming convention is up to you, here are a few examples in the format <image_name>:<image_tag> : Table 5.1. Image tag naming conventions Description Example Revision myimage:v2.0.1 Architecture myimage:v2.0-x86_64 Base image myimage:v1.2-centos7 Latest (potentially unstable) myimage:latest Latest stable myimage:stable If you require dates in tag names, periodically inspect old and unsupported images and istags and remove them. Otherwise, you can experience increasing resource usage caused by retaining old images. 5.2.3. Adding tags to image streams An image stream in OpenShift Container Platform comprises zero or more container images identified by tags. There are different types of tags available. The default behavior uses a permanent tag, which points to a specific image in time. If the permanent tag is in use and the source changes, the tag does not change for the destination. A tracking tag means the destination tag's metadata is updated during the import of the source tag. 
Procedure You can add tags to an image stream using the oc tag command: USD oc tag <source> <destination> For example, to configure the ruby image stream static-2.0 tag to always refer to the current image for the ruby image stream 2.0 tag: USD oc tag ruby:2.0 ruby:static-2.0 This creates a new image stream tag named static-2.0 in the ruby image stream. The new tag directly references the image id that the ruby:2.0 image stream tag pointed to at the time oc tag was run, and the image it points to never changes. To ensure the destination tag is updated when the source tag changes, use the --alias=true flag: USD oc tag --alias=true <source> <destination> Note Use a tracking tag for creating permanent aliases, for example, latest or stable . The tag only works correctly within a single image stream. Trying to create a cross-image stream alias produces an error. You can also add the --scheduled=true flag to have the destination tag be refreshed, or re-imported, periodically. The period is configured globally at the system level. The --reference flag creates an image stream tag that is not imported. The tag points to the source location, permanently. If you want to instruct OpenShift Container Platform to always fetch the tagged image from the integrated registry, use --reference-policy=local . The registry uses the pull-through feature to serve the image to the client. By default, the image blobs are mirrored locally by the registry. As a result, they can be pulled more quickly the next time they are needed. The flag also allows for pulling from insecure registries without a need to supply --insecure-registry to the container runtime as long as the image stream has an insecure annotation or the tag has an insecure import policy. 5.2.4. Removing tags from image streams You can remove tags from an image stream. Procedure To remove a tag completely from an image stream, run: USD oc delete istag/ruby:latest or: USD oc tag -d ruby:latest 5.2.5. Referencing images in imagestreams You can use tags to reference images in image streams using the following reference types. Table 5.2. Imagestream reference types Reference type Description ImageStreamTag An ImageStreamTag is used to reference or retrieve an image for a given image stream and tag. ImageStreamImage An ImageStreamImage is used to reference or retrieve an image for a given image stream and image sha ID. DockerImage A DockerImage is used to reference or retrieve an image for a given external registry. It uses standard Docker pull specification for its name. When viewing example image stream definitions you may notice they contain definitions of ImageStreamTag and references to DockerImage , but nothing related to ImageStreamImage . This is because the ImageStreamImage objects are automatically created in OpenShift Container Platform when you import or tag an image into the image stream. You should never have to explicitly define an ImageStreamImage object in any image stream definition that you use to create image streams. Procedure To reference an image for a given image stream and tag, use ImageStreamTag : <image_stream_name>:<tag> To reference an image for a given image stream and image sha ID, use ImageStreamImage : <image_stream_name>@<id> The <id> is an immutable identifier for a specific image, also called a digest. To reference or retrieve an image for a given external registry, use DockerImage : openshift/ruby-20-centos7:2.0 Note When no tag is specified, it is assumed the latest tag is used. You can also reference a third-party registry: registry.redhat.io/rhel7:latest Or an image with a digest: centos/ruby-22-centos7@sha256:3a335d7d8a452970c5b4054ad7118ff134b3a6b50a2bb6d0c07c746e8986b28e 5.3.
Image pull policy Each container in a pod has a container image. After you have created an image and pushed it to a registry, you can then refer to it in the pod. 5.3.1. Image pull policy overview When OpenShift Container Platform creates containers, it uses the container imagePullPolicy to determine if the image should be pulled prior to starting the container. There are three possible values for imagePullPolicy : Table 5.3. imagePullPolicy values Value Description Always Always pull the image. IfNotPresent Only pull the image if it does not already exist on the node. Never Never pull the image. If a container imagePullPolicy parameter is not specified, OpenShift Container Platform sets it based on the image tag: If the tag is latest , OpenShift Container Platform defaults imagePullPolicy to Always . Otherwise, OpenShift Container Platform defaults imagePullPolicy to IfNotPresent . 5.4. Using image pull secrets If you are using the OpenShift Container Platform internal registry and are pulling from image streams located in the same project, then your pod service account should already have the correct permissions and no additional action should be required. However, for other scenarios, such as referencing images across OpenShift Container Platform projects or from secured registries, then additional configuration steps are required. You can obtain the image pull secret from the Red Hat OpenShift Cluster Manager . This pull secret is called pullSecret . You use this pull secret to authenticate with the services that are provided by the included authorities, Quay.io and registry.redhat.io , which serve the container images for OpenShift Container Platform components. 5.4.1. Allowing pods to reference images across projects When using the internal registry, to allow pods in project-a to reference images in project-b , a service account in project-a must be bound to the system:image-puller role in project-b . Note When you create a pod service account or a namespace, wait until the service account is provisioned with a docker pull secret; if you create a pod before its service account is fully provisioned, the pod fails to access the OpenShift Container Platform internal registry. Procedure To allow pods in project-a to reference images in project-b , bind a service account in project-a to the system:image-puller role in project-b : USD oc policy add-role-to-user \ system:image-puller system:serviceaccount:project-a:default \ --namespace=project-b After adding that role, the pods in project-a that reference the default service account are able to pull images from project-b . To allow access for any service account in project-a , use the group: USD oc policy add-role-to-group \ system:image-puller system:serviceaccounts:project-a \ --namespace=project-b 5.4.2. Allowing pods to reference images from other secured registries The .dockercfg USDHOME/.docker/config.json file for Docker clients is a Docker credentials file that stores your authentication information if you have previously logged into a secured or insecure registry. To pull a secured container image that is not from OpenShift Container Platform's internal registry, you must create a pull secret from your Docker credentials and add it to your service account. The Docker credentials file and the associated pull secret can contain multiple references to the same registry, each with its own set of credentials. 
Example config.json file { "auths":{ "cloud.openshift.com":{ "auth":"b3Blb=", "email":"[email protected]" }, "quay.io":{ "auth":"b3Blb=", "email":"[email protected]" }, "quay.io/repository-main":{ "auth":"b3Blb=", "email":"[email protected]" } } } Example pull secret apiVersion: v1 data: .dockerconfigjson: ewogICAiYXV0aHMiOnsKICAgICAgIm0iOnsKICAgICAgIsKICAgICAgICAgImF1dGgiOiJiM0JsYj0iLAogICAgICAgICAiZW1haWwiOiJ5b3VAZXhhbXBsZS5jb20iCiAgICAgIH0KICAgfQp9Cg== kind: Secret metadata: creationTimestamp: "2021-09-09T19:10:11Z" name: pull-secret namespace: default resourceVersion: "37676" uid: e2851531-01bc-48ba-878c-de96cfe31020 type: Opaque Procedure If you already have a .dockercfg file for the secured registry, you can create a secret from that file by running: USD oc create secret generic <pull_secret_name> \ --from-file=.dockercfg=<path/to/.dockercfg> \ --type=kubernetes.io/dockercfg Or if you have a USDHOME/.docker/config.json file: USD oc create secret generic <pull_secret_name> \ --from-file=.dockerconfigjson=<path/to/.docker/config.json> \ --type=kubernetes.io/dockerconfigjson If you do not already have a Docker credentials file for the secured registry, you can create a secret by running: USD oc create secret docker-registry <pull_secret_name> \ --docker-server=<registry_server> \ --docker-username=<user_name> \ --docker-password=<password> \ --docker-email=<email> To use a secret for pulling images for pods, you must add the secret to your service account. The name of the service account in this example should match the name of the service account the pod uses. The default service account is default : USD oc secrets link default <pull_secret_name> --for=pull 5.4.2.1. Pulling from private registries with delegated authentication A private registry can delegate authentication to a separate service. In these cases, image pull secrets must be defined for both the authentication and registry endpoints. Procedure Create a secret for the delegated authentication server: USD oc create secret docker-registry \ --docker-server=sso.redhat.com \ [email protected] \ --docker-password=******** \ --docker-email=unused \ redhat-connect-sso secret/redhat-connect-sso Create a secret for the private registry: USD oc create secret docker-registry \ --docker-server=privateregistry.example.com \ [email protected] \ --docker-password=******** \ --docker-email=unused \ private-registry secret/private-registry 5.4.3. Updating the global cluster pull secret You can update the global pull secret for your cluster by either replacing the current pull secret or appending a new pull secret. Important To transfer your cluster to another owner, you must first initiate the transfer in OpenShift Cluster Manager , and then update the pull secret on the cluster. Updating a cluster's pull secret without initiating the transfer in OpenShift Cluster Manager causes the cluster to stop reporting Telemetry metrics in OpenShift Cluster Manager. For more information about transferring cluster ownership , see "Transferring cluster ownership" in the Red Hat OpenShift Cluster Manager documentation. Prerequisites You have access to the cluster as a user with the cluster-admin role. Procedure Optional: To append a new pull secret to the existing pull secret, complete the following steps: Enter the following command to download the pull secret: USD oc get secret/pull-secret -n openshift-config --template='{{index .data ".dockerconfigjson" | base64decode}}' ><pull_secret_location> 1 1 Provide the path to the pull secret file. 
Enter the following command to add the new pull secret: USD oc registry login --registry="<registry>" \ 1 --auth-basic="<username>:<password>" \ 2 --to=<pull_secret_location> 3 1 Provide the new registry. You can include multiple repositories within the same registry, for example: --registry="<registry/my-namespace/my-repository>" . 2 Provide the credentials of the new registry. 3 Provide the path to the pull secret file. Alternatively, you can perform a manual update to the pull secret file. Enter the following command to update the global pull secret for your cluster: USD oc set data secret/pull-secret -n openshift-config --from-file=.dockerconfigjson=<pull_secret_location> 1 1 Provide the path to the new pull secret file. This update is rolled out to all nodes, which can take some time depending on the size of your cluster. Note As of OpenShift Container Platform 4.7.4, changes to the global pull secret no longer trigger a node drain or reboot.
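As an end-to-end sketch of the append workflow above, using only the commands already shown (the registry host, credentials, and local file name are placeholders):

oc get secret/pull-secret -n openshift-config \
    --template='{{index .data ".dockerconfigjson" | base64decode}}' > pull-secret.json
oc registry login --registry="registry.example.com" \
    --auth-basic="myuser:mypassword" \
    --to=pull-secret.json
oc set data secret/pull-secret -n openshift-config \
    --from-file=.dockerconfigjson=pull-secret.json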
[ "registry.access.redhat.com/openshift3/jenkins-2-rhel7:v3.11.59-2", "oc tag <source> <destination>", "oc tag ruby:2.0 ruby:static-2.0", "oc tag --alias=true <source> <destination>", "oc delete istag/ruby:latest", "oc tag -d ruby:latest", "<image_stream_name>:<tag>", "<image_stream_name>@<id>", "openshift/ruby-20-centos7:2.0", "registry.redhat.io/rhel7:latest", "centos/ruby-22-centos7@sha256:3a335d7d8a452970c5b4054ad7118ff134b3a6b50a2bb6d0c07c746e8986b28e", "oc policy add-role-to-user system:image-puller system:serviceaccount:project-a:default --namespace=project-b", "oc policy add-role-to-group system:image-puller system:serviceaccounts:project-a --namespace=project-b", "{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io/repository-main\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }", "apiVersion: v1 data: .dockerconfigjson: ewogICAiYXV0aHMiOnsKICAgICAgIm0iOnsKICAgICAgIsKICAgICAgICAgImF1dGgiOiJiM0JsYj0iLAogICAgICAgICAiZW1haWwiOiJ5b3VAZXhhbXBsZS5jb20iCiAgICAgIH0KICAgfQp9Cg== kind: Secret metadata: creationTimestamp: \"2021-09-09T19:10:11Z\" name: pull-secret namespace: default resourceVersion: \"37676\" uid: e2851531-01bc-48ba-878c-de96cfe31020 type: Opaque", "oc create secret generic <pull_secret_name> --from-file=.dockercfg=<path/to/.dockercfg> --type=kubernetes.io/dockercfg", "oc create secret generic <pull_secret_name> --from-file=.dockerconfigjson=<path/to/.docker/config.json> --type=kubernetes.io/dockerconfigjson", "oc create secret docker-registry <pull_secret_name> --docker-server=<registry_server> --docker-username=<user_name> --docker-password=<password> --docker-email=<email>", "oc secrets link default <pull_secret_name> --for=pull", "oc create secret docker-registry --docker-server=sso.redhat.com [email protected] --docker-password=******** --docker-email=unused redhat-connect-sso secret/redhat-connect-sso", "oc create secret docker-registry --docker-server=privateregistry.example.com [email protected] --docker-password=******** --docker-email=unused private-registry secret/private-registry", "oc get secret/pull-secret -n openshift-config --template='{{index .data \".dockerconfigjson\" | base64decode}}' ><pull_secret_location> 1", "oc registry login --registry=\"<registry>\" \\ 1 --auth-basic=\"<username>:<password>\" \\ 2 --to=<pull_secret_location> 3", "oc set data secret/pull-secret -n openshift-config --from-file=.dockerconfigjson=<pull_secret_location> 1" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.9/html/images/managing-images
Chapter 6. Installing a cluster on AWS in a restricted network
Chapter 6. Installing a cluster on AWS in a restricted network In OpenShift Container Platform version 4.14, you can install a cluster on Amazon Web Services (AWS) in a restricted network by creating an internal mirror of the installation release content on an existing Amazon Virtual Private Cloud (VPC). 6.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You mirrored the images for a disconnected installation to your registry and obtained the imageContentSources data for your version of OpenShift Container Platform. Important Because the installation media is on the mirror host, you can use that computer to complete all installation steps. You have an existing VPC in AWS. When installing to a restricted network using installer-provisioned infrastructure, you cannot use the installer-provisioned VPC. You must use a user-provisioned VPC that satisfies one of the following requirements: Contains the mirror registry Has firewall rules or a peering connection to access the mirror registry hosted elsewhere You configured an AWS account to host the cluster. Important If you have an AWS profile stored on your computer, it must not use a temporary session token that you generated while using a multi-factor authentication device. The cluster continues to use your current AWS credentials to create AWS resources for the entire life of the cluster, so you must use key-based, long-term credentials. To generate appropriate keys, see Managing Access Keys for IAM Users in the AWS documentation. You can supply the keys when you run the installation program. You downloaded the AWS CLI and installed it on your computer. See Install the AWS CLI Using the Bundled Installer (Linux, macOS, or Unix) in the AWS documentation. If you use a firewall and plan to use the Telemetry service, you configured the firewall to allow the sites that your cluster requires access to. Note If you are configuring a proxy, be sure to also review this site list. 6.2. About installations in restricted networks In OpenShift Container Platform 4.14, you can perform an installation that does not require an active connection to the internet to obtain software components. Restricted network installations can be completed using installer-provisioned infrastructure or user-provisioned infrastructure, depending on the cloud platform to which you are installing the cluster. If you choose to perform a restricted network installation on a cloud platform, you still require access to its cloud APIs. Some cloud functions, like Amazon Web Service's Route 53 DNS and IAM services, require internet access. Depending on your network, you might require less internet access for an installation on bare metal hardware, Nutanix, or on VMware vSphere. To complete a restricted network installation, you must create a registry that mirrors the contents of the OpenShift image registry and contains the installation media. You can create this registry on a mirror host, which can access both the internet and your closed network, or by using other methods that meet your restrictions. 6.2.1. Additional limits Clusters in restricted networks have the following additional limitations and restrictions: The ClusterVersion status includes an Unable to retrieve available updates error. By default, you cannot use the contents of the Developer Catalog because you cannot access the required image stream tags. 6.3. 
About using a custom VPC In OpenShift Container Platform 4.14, you can deploy a cluster into existing subnets in an existing Amazon Virtual Private Cloud (VPC) in Amazon Web Services (AWS). By deploying OpenShift Container Platform into an existing AWS VPC, you might be able to avoid limit constraints in new accounts or more easily abide by the operational constraints that your company's guidelines set. If you cannot obtain the infrastructure creation permissions that are required to create the VPC yourself, use this installation option. Because the installation program cannot know what other components are also in your existing subnets, it cannot choose subnet CIDRs and so forth on your behalf. You must configure networking for the subnets that you install your cluster to yourself. 6.3.1. Requirements for using your VPC The installation program no longer creates the following components: Internet gateways NAT gateways Subnets Route tables VPCs VPC DHCP options VPC endpoints Note The installation program requires that you use the cloud-provided DNS server. Using a custom DNS server is not supported and causes the installation to fail. If you use a custom VPC, you must correctly configure it and its subnets for the installation program and the cluster to use. See Amazon VPC console wizard configurations and Work with VPCs and subnets in the AWS documentation for more information on creating and managing an AWS VPC. The installation program cannot: Subdivide network ranges for the cluster to use. Set route tables for the subnets. Set VPC options like DHCP. You must complete these tasks before you install the cluster. See VPC networking components and Route tables for your VPC for more information on configuring networking in an AWS VPC. Your VPC must meet the following characteristics: The VPC must not use the kubernetes.io/cluster/.*: owned , Name , and openshift.io/cluster tags. The installation program modifies your subnets to add the kubernetes.io/cluster/.*: shared tag, so your subnets must have at least one free tag slot available for it. See Tag Restrictions in the AWS documentation to confirm that the installation program can add a tag to each subnet that you specify. You cannot use a Name tag, because it overlaps with the EC2 Name field and the installation fails. You must enable the enableDnsSupport and enableDnsHostnames attributes in your VPC, so that the cluster can use the Route 53 zones that are attached to the VPC to resolve cluster's internal DNS records. See DNS Support in Your VPC in the AWS documentation. If you prefer to use your own Route 53 hosted private zone, you must associate the existing hosted zone with your VPC prior to installing a cluster. You can define your hosted zone using the platform.aws.hostedZone and platform.aws.hostedZoneRole fields in the install-config.yaml file. You can use a private hosted zone from another account by sharing it with the account where you install the cluster. If you use a private hosted zone from another account, you must use the Passthrough or Manual credentials mode. If you are working in a disconnected environment, you are unable to reach the public IP addresses for EC2, ELB, and S3 endpoints. Depending on the level to which you want to restrict internet traffic during the installation, the following configuration options are available: Option 1: Create VPC endpoints Create a VPC endpoint and attach it to the subnets that the clusters are using. 
Name the endpoints as follows: ec2.<aws_region>.amazonaws.com elasticloadbalancing.<aws_region>.amazonaws.com s3.<aws_region>.amazonaws.com With this option, network traffic remains private between your VPC and the required AWS services. Option 2: Create a proxy without VPC endpoints As part of the installation process, you can configure an HTTP or HTTPS proxy. With this option, internet traffic goes through the proxy to reach the required AWS services. Option 3: Create a proxy with VPC endpoints As part of the installation process, you can configure an HTTP or HTTPS proxy with VPC endpoints. Create a VPC endpoint and attach it to the subnets that the clusters are using. Name the endpoints as follows: ec2.<aws_region>.amazonaws.com elasticloadbalancing.<aws_region>.amazonaws.com s3.<aws_region>.amazonaws.com When configuring the proxy in the install-config.yaml file, add these endpoints to the noProxy field. With this option, the proxy prevents the cluster from accessing the internet directly. However, network traffic remains private between your VPC and the required AWS services. Required VPC components You must provide a suitable VPC and subnets that allow communication to your machines. Component AWS type Description VPC AWS::EC2::VPC AWS::EC2::VPCEndpoint You must provide a public VPC for the cluster to use. The VPC uses an endpoint that references the route tables for each subnet to improve communication with the registry that is hosted in S3. Public subnets AWS::EC2::Subnet AWS::EC2::SubnetNetworkAclAssociation Your VPC must have public subnets for between 1 and 3 availability zones and associate them with appropriate Ingress rules. Internet gateway AWS::EC2::InternetGateway AWS::EC2::VPCGatewayAttachment AWS::EC2::RouteTable AWS::EC2::Route AWS::EC2::SubnetRouteTableAssociation AWS::EC2::NatGateway AWS::EC2::EIP You must have a public internet gateway, with public routes, attached to the VPC. In the provided templates, each public subnet has a NAT gateway with an EIP address. These NAT gateways allow cluster resources, like private subnet instances, to reach the internet and are not required for some restricted network or proxy scenarios. Network access control AWS::EC2::NetworkAcl AWS::EC2::NetworkAclEntry You must allow the VPC to access the following ports: Port Reason 80 Inbound HTTP traffic 443 Inbound HTTPS traffic 22 Inbound SSH traffic 1024 - 65535 Inbound ephemeral traffic 0 - 65535 Outbound ephemeral traffic Private subnets AWS::EC2::Subnet AWS::EC2::RouteTable AWS::EC2::SubnetRouteTableAssociation Your VPC can have private subnets. The provided CloudFormation templates can create private subnets for between 1 and 3 availability zones. If you use private subnets, you must provide appropriate routes and tables for them. 6.3.2. VPC validation To ensure that the subnets that you provide are suitable, the installation program confirms the following data: All the subnets that you specify exist. You provide private subnets. The subnet CIDRs belong to the machine CIDR that you specified. You provide subnets for each availability zone. Each availability zone contains no more than one public and one private subnet. If you use a private cluster, provide only a private subnet for each availability zone. Otherwise, provide exactly one public and private subnet for each availability zone. You provide a public subnet for each private subnet availability zone. Machines are not provisioned in availability zones that you do not provide private subnets for. 
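The installation program performs these validation checks for you, but it can be quicker to inspect an existing VPC yourself before you start the installation. The following is a minimal, optional sketch that uses the AWS CLI; it is not part of the documented procedure, and the VPC_ID and SUBNET_IDS values are placeholders that you must replace with your own identifiers.
# Placeholders: replace with the VPC and subnets that you plan to supply to the installation program.
VPC_ID=vpc-0123456789abcdef0
SUBNET_IDS="subnet-1 subnet-2 subnet-3"
# Confirm that DNS support and DNS hostnames are enabled on the VPC.
aws ec2 describe-vpc-attribute --vpc-id "$VPC_ID" --attribute enableDnsSupport --query 'EnableDnsSupport.Value'
aws ec2 describe-vpc-attribute --vpc-id "$VPC_ID" --attribute enableDnsHostnames --query 'EnableDnsHostnames.Value'
# Review the availability zone, CIDR, and existing tags of each subnet, for example to confirm that no
# subnet already carries a kubernetes.io/cluster/.*: owned tag or has exhausted its tag quota.
aws ec2 describe-subnets --subnet-ids $SUBNET_IDS --query 'Subnets[].{Id:SubnetId,AZ:AvailabilityZone,CIDR:CidrBlock,Tags:Tags}'
Both attribute queries should return true for a VPC that meets the requirements described above.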
If you destroy a cluster that uses an existing VPC, the VPC is not deleted. When you remove the OpenShift Container Platform cluster from a VPC, the kubernetes.io/cluster/.*: shared tag is removed from the subnets that it used.
6.3.3. Division of permissions Starting with OpenShift Container Platform 4.3, you do not need all of the permissions that are required for an installation program-provisioned infrastructure cluster to deploy a cluster. This change mimics the division of permissions that you might have at your company: some individuals can create different resources in your clouds than others. For example, you might be able to create application-specific items, like instances, buckets, and load balancers, but not networking-related components such as VPCs, subnets, or ingress rules. The AWS credentials that you use when you create your cluster do not need the networking permissions that are required to make VPCs and core networking components within the VPC, such as subnets, routing tables, internet gateways, NAT, and VPN. You still need permission to make the application resources that the machines within the cluster require, such as ELBs, security groups, S3 buckets, and nodes.
6.3.4. Isolation between clusters If you deploy OpenShift Container Platform to an existing network, the isolation of cluster services is reduced in the following ways: You can install multiple OpenShift Container Platform clusters in the same VPC. ICMP ingress is allowed from the entire network. TCP 22 ingress (SSH) is allowed to the entire network. Control plane TCP 6443 ingress (Kubernetes API) is allowed to the entire network. Control plane TCP 22623 ingress (MCS) is allowed to the entire network.
6.4. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.14, you require access to the internet to obtain the images that are necessary to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates.
6.5. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging are required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs .
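Before you start the procedure that follows, you can optionally check whether your local user already has a key pair and a running ssh-agent. This is a minimal sketch that uses standard OpenSSH commands; it is not part of the documented steps, and the file locations shown are only the usual defaults.
# List any public keys that already exist for your local user.
ls ~/.ssh/*.pub
# List the identities that the running ssh-agent currently holds.
# The command returns an error if no agent is running for this shell.
ssh-add -l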
Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) Next steps When you install OpenShift Container Platform, provide the SSH public key to the installation program.
6.6. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on Amazon Web Services (AWS). Prerequisites You have the OpenShift Container Platform installation program and the pull secret for your cluster. For a restricted network installation, these files are on your mirror host. You have the imageContentSources values that were generated during mirror registry creation. You have obtained the contents of the certificate for your mirror registry. Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. When specifying the directory: Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory. Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version.
Note Always delete the ~/.powervs directory to avoid reusing a stale configuration. Run the following command: USD rm -rf ~/.powervs At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select AWS as the platform to target. If you do not have an Amazon Web Services (AWS) profile stored on your computer, enter the AWS access key ID and secret access key for the user that you configured to run the installation program. Select the AWS region to deploy the cluster to. Select the base domain for the Route 53 service that you configured for your cluster. Enter a descriptive name for your cluster. Edit the install-config.yaml file to give the additional information that is required for an installation in a restricted network. Update the pullSecret value to contain the authentication information for your registry: pullSecret: '{"auths":{"<mirror_host_name>:5000": {"auth": "<credentials>","email": "[email protected]"}}}' For <mirror_host_name> , specify the registry domain name that you specified in the certificate for your mirror registry, and for <credentials> , specify the base64-encoded user name and password for your mirror registry. Add the additionalTrustBundle parameter and value. additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- The value must be the contents of the certificate file that you used for your mirror registry. The certificate file can be an existing, trusted certificate authority, or the self-signed certificate that you generated for the mirror registry. Define the subnets for the VPC to install the cluster in: subnets: - subnet-1 - subnet-2 - subnet-3 Add the image content resources, which resemble the following YAML excerpt: imageContentSources: - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: registry.redhat.io/ocp/release For these values, use the imageContentSources that you recorded during mirror registry creation. Optional: Set the publishing strategy to Internal : publish: Internal By setting this option, you create an internal Ingress Controller and a private load balancer. Make any other modifications to the install-config.yaml file that you require. You can find more information about the available parameters in the Installation configuration parameters section. Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. Additional resources Installation configuration parameters for AWS 6.6.1. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 6.1. Minimum resource requirements Machine Operating System vCPU [1] Virtual RAM Storage Input/Output Per Second (IOPS) [2] Bootstrap RHCOS 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS, RHEL 8.6 and later [3] 2 8 GB 100 GB 300 One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or Hyper-Threading, is not enabled. 
When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later. Note As of OpenShift Container Platform version 4.13, RHCOS is based on RHEL version 9.2, which updates the micro-architecture requirements. The following list contains the minimum instruction set architectures (ISA) that each architecture requires: x86-64 architecture requires x86-64-v2 ISA ARM64 architecture requires ARMv8.0-A ISA IBM Power architecture requires Power 9 ISA s390x architecture requires z14 ISA For more information, see RHEL Architectures . If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OpenShift Container Platform. Additional resources Optimizing storage 6.6.2. Sample customized install-config.yaml file for AWS You can customize the installation configuration file ( install-config.yaml ) to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. Important This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it. apiVersion: v1 baseDomain: example.com 1 credentialsMode: Mint 2 controlPlane: 3 4 hyperthreading: Enabled 5 name: master platform: aws: zones: - us-west-2a - us-west-2b rootVolume: iops: 4000 size: 500 type: io1 6 metadataService: authentication: Optional 7 type: m6i.xlarge replicas: 3 compute: 8 - hyperthreading: Enabled 9 name: worker platform: aws: rootVolume: iops: 2000 size: 500 type: io1 10 metadataService: authentication: Optional 11 type: c5.4xlarge zones: - us-west-2c replicas: 3 metadata: name: test-cluster 12 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 13 serviceNetwork: - 172.30.0.0/16 platform: aws: region: us-west-2 14 propagateUserTags: true 15 userTags: adminContact: jdoe costCenter: 7536 subnets: 16 - subnet-1 - subnet-2 - subnet-3 amiID: ami-0c5d3e03c0ab9b19a 17 serviceEndpoints: 18 - name: ec2 url: https://vpce-id.ec2.us-west-2.vpce.amazonaws.com hostedZone: Z3URY6TWQ91KVV 19 fips: false 20 sshKey: ssh-ed25519 AAAA... 21 pullSecret: '{"auths":{"<local_registry>": {"auth": "<credentials>","email": "[email protected]"}}}' 22 additionalTrustBundle: | 23 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- imageContentSources: 24 - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev 1 12 14 Required. The installation program prompts you for this value. 
2 Optional: Add this parameter to force the Cloud Credential Operator (CCO) to use the specified mode. By default, the CCO uses the root credentials in the kube-system namespace to dynamically try to determine the capabilities of the credentials. For details about CCO modes, see the "About the Cloud Credential Operator" section in the Authentication and authorization guide. 3 8 15 If you do not provide these parameters and values, the installation program provides the default value. 4 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 5 9 Whether to enable or disable simultaneous multithreading, or hyperthreading . By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger instance types, such as m4.2xlarge or m5.2xlarge , for your machines if you disable simultaneous multithreading. 6 10 To configure faster storage for etcd, especially for larger clusters, set the storage type as io1 and set iops to 2000 . 7 11 Whether to require the Amazon EC2 Instance Metadata Service v2 (IMDSv2). To require IMDSv2, set the parameter value to Required . To allow the use of both IMDSv1 and IMDSv2, set the parameter value to Optional . If no value is specified, both IMDSv1 and IMDSv2 are allowed. Note The IMDS configuration for control plane machines that is set during cluster installation can only be changed by using the AWS CLI. The IMDS configuration for compute machines can be changed by using compute machine sets. 13 The cluster network plugin to install. The supported values are OVNKubernetes and OpenShiftSDN . The default value is OVNKubernetes . 16 If you provide your own VPC, specify subnets for each availability zone that your cluster uses. 17 The ID of the AMI used to boot machines for the cluster. If set, the AMI must belong to the same region as the cluster. 18 The AWS service endpoints. Custom endpoints are required when installing to an unknown AWS region. The endpoint URL must use the https protocol and the host must trust the certificate. 19 The ID of your existing Route 53 private hosted zone. Providing an existing hosted zone requires that you supply your own VPC and the hosted zone is already associated with the VPC prior to installing your cluster. If undefined, the installation program creates a new hosted zone. 20 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . 
When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. 21 You can optionally provide the sshKey value that you use to access the machines in your cluster. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 22 For <local_registry> , specify the registry domain name, and optionally the port, that your mirror registry uses to serve content. For example registry.example.com or registry.example.com:5000 . For <credentials> , specify the base64-encoded user name and password for your mirror registry. 23 Provide the contents of the certificate file that you used for your mirror registry. 24 Provide the imageContentSources section from the output of the command to mirror the repository. 6.6.3. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: ec2.<aws_region>.amazonaws.com,elasticloadbalancing.<aws_region>.amazonaws.com,s3.<aws_region>.amazonaws.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. If you have added the Amazon EC2 , Elastic Load Balancing , and S3 VPC endpoints to your VPC, you must add these endpoints to the noProxy field. 
4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 6.7. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.14. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.14 Linux Client entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.14 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . 
To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.14 macOS Client entry and save the file. Note For macOS arm64, choose the OpenShift v4.14 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification Verify your installation by using an oc command: USD oc <command> 6.8. Alternatives to storing administrator-level secrets in the kube-system project By default, administrator secrets are stored in the kube-system project. If you configured the credentialsMode parameter in the install-config.yaml file to Manual , you must use one of the following alternatives: To manage long-term cloud credentials manually, follow the procedure in Manually creating long-term credentials . To implement short-term credentials that are managed outside the cluster for individual components, follow the procedures in Configuring an AWS cluster to use short-term credentials . 6.8.1. Manually creating long-term credentials The Cloud Credential Operator (CCO) can be put into manual mode prior to installation in environments where the cloud identity and access management (IAM) APIs are not reachable, or the administrator prefers not to store an administrator-level credential secret in the cluster kube-system namespace. Procedure If you did not set the credentialsMode parameter in the install-config.yaml configuration file to Manual , modify the value as shown: Sample configuration file snippet apiVersion: v1 baseDomain: example.com credentialsMode: Manual # ... If you have not previously created installation manifest files, do so by running the following command: USD openshift-install create manifests --dir <installation_directory> where <installation_directory> is the directory in which the installation program creates files. Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest custom resources (CRs) from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. This command creates a YAML file for each CredentialsRequest object. Sample CredentialsRequest object apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator ... 
spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - effect: Allow action: - iam:GetUser - iam:GetUserPolicy - iam:ListAccessKeys resource: "*" ... Create YAML files for secrets in the openshift-install manifests directory that you generated previously. The secrets must be stored using the namespace and secret name defined in the spec.secretRef for each CredentialsRequest object. Sample CredentialsRequest object with secrets apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator ... spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - effect: Allow action: - s3:CreateBucket - s3:DeleteBucket resource: "*" ... secretRef: name: <component_secret> namespace: <component_namespace> ... Sample Secret object apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: aws_access_key_id: <base64_encoded_aws_access_key_id> aws_secret_access_key: <base64_encoded_aws_secret_access_key> Important Before upgrading a cluster that uses manually maintained credentials, you must ensure that the CCO is in an upgradeable state. 6.8.2. Configuring an AWS cluster to use short-term credentials To install a cluster that is configured to use the AWS Security Token Service (STS), you must configure the CCO utility and create the required AWS resources for your cluster. 6.8.2.1. Configuring the Cloud Credential Operator utility To create and manage cloud credentials from outside of the cluster when the Cloud Credential Operator (CCO) is operating in manual mode, extract and prepare the CCO utility ( ccoctl ) binary. Note The ccoctl utility is a Linux binary that must run in a Linux environment. Prerequisites You have access to an OpenShift Container Platform account with cluster administrator access. You have installed the OpenShift CLI ( oc ). You have created an AWS account for the ccoctl utility to use with the following permissions: Example 6.1. Required AWS permissions Required iam permissions iam:CreateOpenIDConnectProvider iam:CreateRole iam:DeleteOpenIDConnectProvider iam:DeleteRole iam:DeleteRolePolicy iam:GetOpenIDConnectProvider iam:GetRole iam:GetUser iam:ListOpenIDConnectProviders iam:ListRolePolicies iam:ListRoles iam:PutRolePolicy iam:TagOpenIDConnectProvider iam:TagRole Required s3 permissions s3:CreateBucket s3:DeleteBucket s3:DeleteObject s3:GetBucketAcl s3:GetBucketTagging s3:GetObject s3:GetObjectAcl s3:GetObjectTagging s3:ListBucket s3:PutBucketAcl s3:PutBucketPolicy s3:PutBucketPublicAccessBlock s3:PutBucketTagging s3:PutObject s3:PutObjectAcl s3:PutObjectTagging Required cloudfront permissions cloudfront:ListCloudFrontOriginAccessIdentities cloudfront:ListDistributions cloudfront:ListTagsForResource If you plan to store the OIDC configuration in a private S3 bucket that is accessed by the IAM identity provider through a public CloudFront distribution URL, the AWS account that runs the ccoctl utility requires the following additional permissions: Example 6.2. 
Additional permissions for a private S3 bucket with CloudFront cloudfront:CreateCloudFrontOriginAccessIdentity cloudfront:CreateDistribution cloudfront:DeleteCloudFrontOriginAccessIdentity cloudfront:DeleteDistribution cloudfront:GetCloudFrontOriginAccessIdentity cloudfront:GetCloudFrontOriginAccessIdentityConfig cloudfront:GetDistribution cloudfront:TagResource cloudfront:UpdateDistribution Note These additional permissions support the use of the --create-private-s3-bucket option when processing credentials requests with the ccoctl aws create-all command. Procedure Set a variable for the OpenShift Container Platform release image by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Obtain the CCO container image from the OpenShift Container Platform release image by running the following command: USD CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret) Note Ensure that the architecture of the USDRELEASE_IMAGE matches the architecture of the environment in which you will use the ccoctl tool. Extract the ccoctl binary from the CCO container image within the OpenShift Container Platform release image by running the following command: USD oc image extract USDCCO_IMAGE --file="/usr/bin/ccoctl" -a ~/.pull-secret Change the permissions to make ccoctl executable by running the following command: USD chmod 775 ccoctl Verification To verify that ccoctl is ready to use, display the help file. Use a relative file name when you run the command, for example: USD ./ccoctl.rhel9 Example output OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: alibabacloud Manage credentials objects for alibaba cloud aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use "ccoctl [command] --help" for more information about a command. 6.8.2.2. Creating AWS resources with the Cloud Credential Operator utility You have the following options when creating AWS resources: You can use the ccoctl aws create-all command to create the AWS resources automatically. This is the quickest way to create the resources. See Creating AWS resources with a single command . If you need to review the JSON files that the ccoctl tool creates before modifying AWS resources, or if the process the ccoctl tool uses to create AWS resources automatically does not meet the requirements of your organization, you can create the AWS resources individually. See Creating AWS resources individually . 6.8.2.2.1. Creating AWS resources with a single command If the process the ccoctl tool uses to create AWS resources automatically meets the requirements of your organization, you can use the ccoctl aws create-all command to automate the creation of AWS resources. Otherwise, you can create the AWS resources individually. For more information, see "Creating AWS resources individually". Note By default, ccoctl creates objects in the directory in which the commands are run. To create the objects in a different directory, use the --output-dir flag. This procedure uses <path_to_ccoctl_output_dir> to refer to this directory. Prerequisites You must have: Extracted and prepared the ccoctl binary. 
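As an optional sanity check before the procedure below, you can confirm which AWS identity your current credentials resolve to, assuming that the ccoctl utility picks up the same default credential chain as the AWS CLI. This sketch is not part of the documented steps.
# Show the AWS account and IAM identity behind the currently configured credentials.
aws sts get-caller-identity
# Confirm the default region, which should match the --region value that you plan to pass to ccoctl.
aws configure get region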
Procedure Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest objects from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. Note This command might take a few moments to run. Use the ccoctl tool to process all CredentialsRequest objects by running the following command: USD ccoctl aws create-all \ --name=<name> \ 1 --region=<aws_region> \ 2 --credentials-requests-dir=<path_to_credentials_requests_directory> \ 3 --output-dir=<path_to_ccoctl_output_dir> \ 4 --create-private-s3-bucket 5 1 Specify the name used to tag any cloud resources that are created for tracking. 2 Specify the AWS region in which cloud resources will be created. 3 Specify the directory containing the files for the component CredentialsRequest objects. 4 Optional: Specify the directory in which you want the ccoctl utility to create objects. By default, the utility creates objects in the directory in which the commands are run. 5 Optional: By default, the ccoctl utility stores the OpenID Connect (OIDC) configuration files in a public S3 bucket and uses the S3 URL as the public OIDC endpoint. To store the OIDC configuration in a private S3 bucket that is accessed by the IAM identity provider through a public CloudFront distribution URL instead, use the --create-private-s3-bucket parameter. Note If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter. Verification To verify that the OpenShift Container Platform secrets are created, list the files in the <path_to_ccoctl_output_dir>/manifests directory: USD ls <path_to_ccoctl_output_dir>/manifests Example output cluster-authentication-02-config.yaml openshift-cloud-credential-operator-cloud-credential-operator-iam-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capa-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-ebs-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-aws-cloud-credentials-credentials.yaml You can verify that the IAM roles are created by querying AWS. For more information, refer to AWS documentation on listing IAM roles. 6.8.2.2.2. Creating AWS resources individually You can use the ccoctl tool to create AWS resources individually. This option might be useful for an organization that shares the responsibility for creating these resources among different users or departments. Otherwise, you can use the ccoctl aws create-all command to create the AWS resources automatically. For more information, see "Creating AWS resources with a single command". 
Note By default, ccoctl creates objects in the directory in which the commands are run. To create the objects in a different directory, use the --output-dir flag. This procedure uses <path_to_ccoctl_output_dir> to refer to this directory. Some ccoctl commands make AWS API calls to create or modify AWS resources. You can use the --dry-run flag to avoid making API calls. Using this flag creates JSON files on the local file system instead. You can review and modify the JSON files and then apply them with the AWS CLI tool using the --cli-input-json parameters. Prerequisites Extract and prepare the ccoctl binary. Procedure Generate the public and private RSA key files that are used to set up the OpenID Connect provider for the cluster by running the following command: USD ccoctl aws create-key-pair Example output 2021/04/13 11:01:02 Generating RSA keypair 2021/04/13 11:01:03 Writing private key to /<path_to_ccoctl_output_dir>/serviceaccount-signer.private 2021/04/13 11:01:03 Writing public key to /<path_to_ccoctl_output_dir>/serviceaccount-signer.public 2021/04/13 11:01:03 Copying signing key for use by installer where serviceaccount-signer.private and serviceaccount-signer.public are the generated key files. This command also creates a private key that the cluster requires during installation in /<path_to_ccoctl_output_dir>/tls/bound-service-account-signing-key.key . Create an OpenID Connect identity provider and S3 bucket on AWS by running the following command: USD ccoctl aws create-identity-provider \ --name=<name> \ 1 --region=<aws_region> \ 2 --public-key-file=<path_to_ccoctl_output_dir>/serviceaccount-signer.public 3 1 <name> is the name used to tag any cloud resources that are created for tracking. 2 <aws-region> is the AWS region in which cloud resources will be created. 3 <path_to_ccoctl_output_dir> is the path to the public key file that the ccoctl aws create-key-pair command generated. Example output 2021/04/13 11:16:09 Bucket <name>-oidc created 2021/04/13 11:16:10 OpenID Connect discovery document in the S3 bucket <name>-oidc at .well-known/openid-configuration updated 2021/04/13 11:16:10 Reading public key 2021/04/13 11:16:10 JSON web key set (JWKS) in the S3 bucket <name>-oidc at keys.json updated 2021/04/13 11:16:18 Identity Provider created with ARN: arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com where openid-configuration is a discovery document and keys.json is a JSON web key set file. This command also creates a YAML configuration file in /<path_to_ccoctl_output_dir>/manifests/cluster-authentication-02-config.yaml . This file sets the issuer URL field for the service account tokens that the cluster generates, so that the AWS IAM identity provider trusts the tokens. Create IAM roles for each component in the cluster: Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest objects from the OpenShift Container Platform release image: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 
3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. Use the ccoctl tool to process all CredentialsRequest objects by running the following command: USD ccoctl aws create-iam-roles \ --name=<name> \ --region=<aws_region> \ --credentials-requests-dir=<path_to_credentials_requests_directory> \ --identity-provider-arn=arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com Note For AWS environments that use alternative IAM API endpoints, such as GovCloud, you must also specify your region with the --region parameter. If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter. For each CredentialsRequest object, ccoctl creates an IAM role with a trust policy that is tied to the specified OIDC identity provider, and a permissions policy as defined in each CredentialsRequest object from the OpenShift Container Platform release image. Verification To verify that the OpenShift Container Platform secrets are created, list the files in the <path_to_ccoctl_output_dir>/manifests directory: USD ls <path_to_ccoctl_output_dir>/manifests Example output cluster-authentication-02-config.yaml openshift-cloud-credential-operator-cloud-credential-operator-iam-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capa-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-ebs-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-aws-cloud-credentials-credentials.yaml You can verify that the IAM roles are created by querying AWS. For more information, refer to AWS documentation on listing IAM roles. 6.8.2.3. Incorporating the Cloud Credential Operator utility manifests To implement short-term security credentials managed outside the cluster for individual components, you must move the manifest files that the Cloud Credential Operator utility ( ccoctl ) created to the correct directories for the installation program. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have configured the Cloud Credential Operator utility ( ccoctl ). You have created the cloud provider resources that are required for your cluster with the ccoctl utility. Procedure If you did not set the credentialsMode parameter in the install-config.yaml configuration file to Manual , modify the value as shown: Sample configuration file snippet apiVersion: v1 baseDomain: example.com credentialsMode: Manual # ... If you have not previously created installation manifest files, do so by running the following command: USD openshift-install create manifests --dir <installation_directory> where <installation_directory> is the directory in which the installation program creates files. Copy the manifests that the ccoctl utility generated to the manifests directory that the installation program created by running the following command: USD cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/ Copy the tls directory that contains the private key to the installation directory: USD cp -a /<path_to_ccoctl_output_dir>/tls . 6.9. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. 
Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have the OpenShift Container Platform installation program and the pull secret for your cluster. You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Optional: Remove or disable the AdministratorAccess policy from the IAM account that you used to install the cluster. Note The elevated permissions provided by the AdministratorAccess policy are required only during installation. Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 6.10. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. 
Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin
6.11. Disabling the default OperatorHub catalog sources Operator catalogs that source content provided by Red Hat and community projects are configured for OperatorHub by default during an OpenShift Container Platform installation. In a restricted network environment, you must disable the default catalogs as a cluster administrator. Procedure Disable the sources for the default catalogs by adding disableAllDefaultSources: true to the OperatorHub object: USD oc patch OperatorHub cluster --type json \ -p '[{"op": "add", "path": "/spec/disableAllDefaultSources", "value": true}]' Tip Alternatively, you can use the web console to manage catalog sources. From the Administration → Cluster Settings → Configuration → OperatorHub page, click the Sources tab, where you can create, update, delete, disable, and enable individual sources.
6.12. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.14, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service
6.13. Next steps Validate an installation . Customize your cluster . Configure image streams for the Cluster Samples Operator and the must-gather tool. Learn how to use Operator Lifecycle Manager (OLM) on restricted networks . If the mirror registry that you used to install your cluster has a trusted CA, add it to the cluster by configuring additional trust stores . If necessary, you can opt out of remote health reporting .
[ "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "./openshift-install create install-config --dir <installation_directory> 1", "rm -rf ~/.powervs", "pullSecret: '{\"auths\":{\"<mirror_host_name>:5000\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}'", "additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE-----", "subnets: - subnet-1 - subnet-2 - subnet-3", "imageContentSources: - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: registry.redhat.io/ocp/release", "publish: Internal", "apiVersion: v1 baseDomain: example.com 1 credentialsMode: Mint 2 controlPlane: 3 4 hyperthreading: Enabled 5 name: master platform: aws: zones: - us-west-2a - us-west-2b rootVolume: iops: 4000 size: 500 type: io1 6 metadataService: authentication: Optional 7 type: m6i.xlarge replicas: 3 compute: 8 - hyperthreading: Enabled 9 name: worker platform: aws: rootVolume: iops: 2000 size: 500 type: io1 10 metadataService: authentication: Optional 11 type: c5.4xlarge zones: - us-west-2c replicas: 3 metadata: name: test-cluster 12 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 13 serviceNetwork: - 172.30.0.0/16 platform: aws: region: us-west-2 14 propagateUserTags: true 15 userTags: adminContact: jdoe costCenter: 7536 subnets: 16 - subnet-1 - subnet-2 - subnet-3 amiID: ami-0c5d3e03c0ab9b19a 17 serviceEndpoints: 18 - name: ec2 url: https://vpce-id.ec2.us-west-2.vpce.amazonaws.com hostedZone: Z3URY6TWQ91KVV 19 fips: false 20 sshKey: ssh-ed25519 AAAA... 
21 pullSecret: '{\"auths\":{\"<local_registry>\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}' 22 additionalTrustBundle: | 23 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- imageContentSources: 24 - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: ec2.<aws_region>.amazonaws.com,elasticloadbalancing.<aws_region>.amazonaws.com,s3.<aws_region>.amazonaws.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "apiVersion: v1 baseDomain: example.com credentialsMode: Manual", "openshift-install create manifests --dir <installation_directory>", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - effect: Allow action: - iam:GetUser - iam:GetUserPolicy - iam:ListAccessKeys resource: \"*\"", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - effect: Allow action: - s3:CreateBucket - s3:DeleteBucket resource: \"*\" secretRef: name: <component_secret> namespace: <component_namespace>", "apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: aws_access_key_id: <base64_encoded_aws_access_key_id> aws_secret_access_key: <base64_encoded_aws_secret_access_key>", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret)", "oc image extract USDCCO_IMAGE --file=\"/usr/bin/ccoctl\" -a ~/.pull-secret", "chmod 775 ccoctl", "./ccoctl.rhel9", "OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: alibabacloud Manage credentials objects for alibaba cloud aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use \"ccoctl [command] --help\" for more information about a command.", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests 
--included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3", "ccoctl aws create-all --name=<name> \\ 1 --region=<aws_region> \\ 2 --credentials-requests-dir=<path_to_credentials_requests_directory> \\ 3 --output-dir=<path_to_ccoctl_output_dir> \\ 4 --create-private-s3-bucket 5", "ls <path_to_ccoctl_output_dir>/manifests", "cluster-authentication-02-config.yaml openshift-cloud-credential-operator-cloud-credential-operator-iam-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capa-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-ebs-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-aws-cloud-credentials-credentials.yaml", "ccoctl aws create-key-pair", "2021/04/13 11:01:02 Generating RSA keypair 2021/04/13 11:01:03 Writing private key to /<path_to_ccoctl_output_dir>/serviceaccount-signer.private 2021/04/13 11:01:03 Writing public key to /<path_to_ccoctl_output_dir>/serviceaccount-signer.public 2021/04/13 11:01:03 Copying signing key for use by installer", "ccoctl aws create-identity-provider --name=<name> \\ 1 --region=<aws_region> \\ 2 --public-key-file=<path_to_ccoctl_output_dir>/serviceaccount-signer.public 3", "2021/04/13 11:16:09 Bucket <name>-oidc created 2021/04/13 11:16:10 OpenID Connect discovery document in the S3 bucket <name>-oidc at .well-known/openid-configuration updated 2021/04/13 11:16:10 Reading public key 2021/04/13 11:16:10 JSON web key set (JWKS) in the S3 bucket <name>-oidc at keys.json updated 2021/04/13 11:16:18 Identity Provider created with ARN: arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3", "ccoctl aws create-iam-roles --name=<name> --region=<aws_region> --credentials-requests-dir=<path_to_credentials_requests_directory> --identity-provider-arn=arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com", "ls <path_to_ccoctl_output_dir>/manifests", "cluster-authentication-02-config.yaml openshift-cloud-credential-operator-cloud-credential-operator-iam-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capa-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-ebs-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-aws-cloud-credentials-credentials.yaml", "apiVersion: v1 baseDomain: example.com credentialsMode: Manual", "openshift-install create manifests --dir <installation_directory>", "cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/", "cp -a /<path_to_ccoctl_output_dir>/tls .", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! 
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "oc patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'" ]
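Taken individually, the commands above can be hard to follow, so here is a minimal shell sketch of the manual-credentials flow they outline: generate an install-config.yaml, extract the matching CredentialsRequest objects from the release image, obtain ccoctl, create the AWS IAM resources, copy the generated manifests and TLS material into the installation directory, and run the installer. The cluster name, region, and directory paths used below (my-cluster, us-west-2, ./install-dir, ./credreqs, ./ccoctl-output) are illustrative assumptions rather than values taken from this document, and the sketch is not a substitute for the full procedure.
#!/bin/bash
set -euo pipefail

# Create (and then customize) the install-config.yaml in the installation directory.
./openshift-install create install-config --dir ./install-dir

# Resolve the release image used by this installer binary.
RELEASE_IMAGE=$(./openshift-install version | awk '/release image/ {print $3}')

# Extract the CredentialsRequest manifests that match this install configuration.
oc adm release extract --from="$RELEASE_IMAGE" --credentials-requests --included \
  --install-config=./install-dir/install-config.yaml --to=./credreqs

# Pull the ccoctl binary out of the cloud-credential-operator image.
CCO_IMAGE=$(oc adm release info --image-for='cloud-credential-operator' "$RELEASE_IMAGE" -a ~/.pull-secret)
oc image extract "$CCO_IMAGE" --file="/usr/bin/ccoctl" -a ~/.pull-secret
chmod 775 ccoctl

# Create the AWS IAM resources plus the secrets and manifests that reference them.
./ccoctl aws create-all --name=my-cluster --region=us-west-2 \
  --credentials-requests-dir=./credreqs --output-dir=./ccoctl-output \
  --create-private-s3-bucket

# Generate installer manifests, add the ccoctl output, then create the cluster.
./openshift-install create manifests --dir ./install-dir
cp ./ccoctl-output/manifests/* ./install-dir/manifests/
cp -a ./ccoctl-output/tls ./install-dir/
./openshift-install create cluster --dir ./install-dir --log-level=info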
https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.14/html/installing_on_aws/installing-restricted-networks-aws-installer-provisioned
Providing feedback on JBoss EAP documentation
Providing feedback on JBoss EAP documentation To report an error or to improve our documentation, log in to your Red Hat Jira account and submit an issue. If you do not have a Red Hat Jira account, you will be prompted to create one. Procedure Click the following link to create a ticket. Enter a brief description of the issue in the Summary. Provide a detailed description of the issue or enhancement in the Description. Include a URL to where the issue occurs in the documentation. Clicking Submit creates and routes the issue to the appropriate documentation team.
null
https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/8.0/html/red_hat_jboss_enterprise_application_platform_installation_methods/proc_providing-feedback-on-red-hat-documentation_default
Chapter 8. neutron
Chapter 8. neutron The following chapter contains information about the configuration options in the neutron service. 8.1. dhcp_agent.ini This section contains options for the /etc/neutron/dhcp_agent.ini file. 8.1.1. DEFAULT The following table outlines the options available under the [DEFAULT] group in the /etc/neutron/dhcp_agent.ini file. . Configuration option = Default value Type Description bulk_reload_interval = 0 integer value Time to sleep between reloading the DHCP allocations. This will only be invoked if the value is not 0. If a network has N updates in X seconds then we will reload once with the port changes in the X seconds and not N times. debug = False boolean value If set to true, the logging level will be set to DEBUG instead of the default INFO level. default_log_levels = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO'] list value List of package logging levels in logger=LEVEL pairs. This option is ignored if log_config_append is set. dhcp_broadcast_reply = False boolean value Use broadcast in DHCP replies. dhcp_confs = USDstate_path/dhcp string value Location to store DHCP server config files. dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq string value The driver used to manage the DHCP server. dnsmasq_base_log_dir = None string value Base log dir for dnsmasq logging. The log contains DHCP and DNS log information and is useful for debugging issues with either DHCP or DNS. If this section is null, disable dnsmasq log. `dnsmasq_config_file = ` string value Override the default dnsmasq settings with this file. dnsmasq_dns_servers = [] list value Comma-separated list of the DNS servers which will be used as forwarders. dnsmasq_enable_addr6_list = False boolean value Enable dhcp-host entry with list of addresses when port has multiple IPv6 addresses in the same subnet. dnsmasq_lease_max = 16777216 integer value Limit number of leases to prevent a denial-of-service. dnsmasq_local_resolv = False boolean value Enables the dnsmasq service to provide name resolution for instances via DNS resolvers on the host running the DHCP agent. Effectively removes the --no-resolv option from the dnsmasq process arguments. Adding custom DNS resolvers to the dnsmasq_dns_servers option disables this feature. enable_isolated_metadata = False boolean value The DHCP server can assist with providing metadata support on isolated networks. Setting this value to True will cause the DHCP server to append specific host routes to the DHCP request. The metadata service will only be activated when the subnet does not contain any router port. The guest instance must be configured to request host routes via DHCP (Option 121). This option doesn't have any effect when force_metadata is set to True. enable_metadata_network = False boolean value Allows for serving metadata requests coming from a dedicated metadata access network whose CIDR is 169.254.169.254/16 (or larger prefix), and is connected to a Neutron router from which the VMs send metadata:1 request. 
In this case DHCP Option 121 will not be injected in VMs, as they will be able to reach 169.254.169.254 through a router. This option requires enable_isolated_metadata = True. fatal_deprecations = False boolean value Enables or disables fatal status of deprecations. force_metadata = False boolean value In some cases the Neutron router is not present to provide the metadata IP but the DHCP server can be used to provide this info. Setting this value will force the DHCP server to append specific host routes to the DHCP request. If this option is set, then the metadata service will be activated for all the networks. `instance_format = [instance: %(uuid)s] ` string value The format for an instance that is passed with the log message. `instance_uuid_format = [instance: %(uuid)s] ` string value The format for an instance UUID that is passed with the log message. interface_driver = None string value The driver used to manage the virtual interface. log-config-append = None string value The name of a logging configuration file. This file is appended to any existing logging configuration files. For details about logging configuration files, see the Python logging module documentation. Note that when logging configuration files are used then all logging configuration is set in the configuration file and other logging configuration options are ignored (for example, log-date-format). log-date-format = %Y-%m-%d %H:%M:%S string value Defines the format string for %%(asctime)s in log records. Default: %(default)s . This option is ignored if log_config_append is set. log-dir = None string value (Optional) The base directory used for relative log_file paths. This option is ignored if log_config_append is set. log-file = None string value (Optional) Name of log file to send logging output to. If no default is set, logging will go to stderr as defined by use_stderr. This option is ignored if log_config_append is set. log_rotate_interval = 1 integer value The amount of time before the log files are rotated. This option is ignored unless log_rotation_type is set to "interval". log_rotate_interval_type = days string value Rotation interval type. The time of the last file change (or the time when the service was started) is used when scheduling the rotation. log_rotation_type = none string value Log rotation type. logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s string value Format string to use for log messages with context. Used by oslo_log.formatters.ContextFormatter logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d string value Additional data to append to log message when logging level for the message is DEBUG. Used by oslo_log.formatters.ContextFormatter logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s string value Format string to use for log messages when context is undefined. Used by oslo_log.formatters.ContextFormatter logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s string value Prefix each line of exception output with this format. Used by oslo_log.formatters.ContextFormatter logging_user_identity_format = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s string value Defines the format string for %(user_identity)s that is used in logging_context_format_string. 
Used by oslo_log.formatters.ContextFormatter max_logfile_count = 30 integer value Maximum number of rotated log files. max_logfile_size_mb = 200 integer value Log file maximum size in MB. This option is ignored if "log_rotation_type" is not set to "size". num_sync_threads = 4 integer value Number of threads to use during sync process. Should not exceed connection pool size configured on server. ovs_integration_bridge = br-int string value Name of Open vSwitch bridge to use ovs_use_veth = False boolean value Uses veth for an OVS interface or not. Support kernels with limited namespace support (e.g. RHEL 6.5) and rate limiting on router's gateway port so long as ovs_use_veth is set to True. publish_errors = False boolean value Enables or disables publication of error events. rate_limit_burst = 0 integer value Maximum number of logged messages per rate_limit_interval. rate_limit_except_level = CRITICAL string value Log level name used by rate limiting: CRITICAL, ERROR, INFO, WARNING, DEBUG or empty string. Logs with level greater or equal to rate_limit_except_level are not filtered. An empty string means that all levels are filtered. rate_limit_interval = 0 integer value Interval, number of seconds, of log rate limiting. resync_interval = 5 integer value The DHCP agent will resync its state with Neutron to recover from any transient notification or RPC errors. The interval is maximum number of seconds between attempts. The resync can be done more often based on the events triggered. resync_throttle = 1 integer value Throttle the number of resync state events between the local DHCP state and Neutron to only once per resync_throttle seconds. The value of throttle introduces a minimum interval between resync state events. Otherwise the resync may end up in a busy-loop. The value must be less than resync_interval. rpc_response_max_timeout = 600 integer value Maximum seconds to wait for a response from an RPC call. syslog-log-facility = LOG_USER string value Syslog facility to receive log lines. This option is ignored if log_config_append is set. use-journal = False boolean value Enable journald for logging. If running in a systemd environment you may wish to enable journal support. Doing so will use the journal native protocol which includes structured metadata in addition to log messages.This option is ignored if log_config_append is set. use-json = False boolean value Use JSON formatting for logging. This option is ignored if log_config_append is set. use-syslog = False boolean value Use syslog for logging. Existing syslog format is DEPRECATED and will be changed later to honor RFC5424. This option is ignored if log_config_append is set. use_eventlog = False boolean value Log output to Windows Event Log. use_stderr = False boolean value Log output to standard error. This option is ignored if log_config_append is set. watch-log-file = False boolean value Uses logging handler designed to watch file system. When log file is moved or removed this handler will open a new log file with specified path instantaneously. It makes sense only if log_file option is specified and Linux platform is used. This option is ignored if log_config_append is set. 8.1.2. agent The following table outlines the options available under the [agent] group in the /etc/neutron/dhcp_agent.ini file. Table 8.1. 
agent Configuration option = Default value Type Description availability_zone = nova string value Availability zone of this node log_agent_heartbeats = False boolean value Log agent heartbeats report_interval = 30 floating point value Seconds between nodes reporting state to server; should be less than agent_down_time, best if it is half or less than agent_down_time. 8.1.3. ovs The following table outlines the options available under the [ovs] group in the /etc/neutron/dhcp_agent.ini file. Table 8.2. ovs Configuration option = Default value Type Description bridge_mac_table_size = 50000 integer value The maximum number of MAC addresses to learn on a bridge managed by the Neutron OVS agent. Values outside a reasonable range (10 to 1,000,000) might be overridden by Open vSwitch according to the documentation. igmp_snooping_enable = False boolean value Enable IGMP snooping for integration bridge. If this option is set to True, support for Internet Group Management Protocol (IGMP) is enabled in integration bridge. Setting this option to True will also enable Open vSwitch mcast-snooping-disable-flood-unregistered flag. This option will disable flooding of unregistered multicast packets to all ports. The switch will send unregistered multicast packets only to ports connected to multicast routers. ovsdb_connection = tcp:127.0.0.1:6640 string value The connection string for the OVSDB backend. Will be used for all ovsdb commands and by ovsdb-client when monitoring ovsdb_debug = False boolean value Enable OVSDB debug logs ovsdb_timeout = 10 integer value Timeout in seconds for ovsdb commands. If the timeout expires, ovsdb commands will fail with ALARMCLOCK error. ssl_ca_cert_file = None string value The Certificate Authority (CA) certificate to use when interacting with OVSDB. Required when using an "ssl:" prefixed ovsdb_connection ssl_cert_file = None string value The SSL certificate file to use when interacting with OVSDB. Required when using an "ssl:" prefixed ovsdb_connection ssl_key_file = None string value The SSL private key file to use when interacting with OVSDB. Required when using an "ssl:" prefixed ovsdb_connection 8.2. l3_agent.ini This section contains options for the /etc/neutron/l3_agent.ini file. 8.2.1. DEFAULT The following table outlines the options available under the [DEFAULT] group in the /etc/neutron/l3_agent.ini file. . Configuration option = Default value Type Description agent_mode = legacy string value The working mode for the agent. Allowed modes are: legacy - this preserves the existing behavior where the L3 agent is deployed on a centralized networking node to provide L3 services like DNAT, and SNAT. Use this mode if you do not want to adopt DVR. dvr - this mode enables DVR functionality and must be used for an L3 agent that runs on a compute host. dvr_snat - this enables centralized SNAT support in conjunction with DVR. This mode must be used for an L3 agent running on a centralized node (or in single-host deployments, e.g. devstack). dvr_snat mode is not supported on a compute host. dvr_no_external - this mode enables only East/West DVR routing functionality for a L3 agent that runs on a compute host, the North/South functionality such as DNAT and SNAT will be provided by the centralized network node that is running in dvr_snat mode. This mode should be used when there is no external network connectivity on the compute host. api_workers = None integer value Number of separate API worker processes for service. 
If not specified, the default is equal to the number of CPUs available for best performance, capped by potential RAM usage. cleanup_on_shutdown = False boolean value Delete all routers on L3 agent shutdown. For L3 HA routers it includes a shutdown of keepalived and the state change monitor. NOTE: Setting to True could affect the data plane when stopping or restarting the L3 agent. debug = False boolean value If set to true, the logging level will be set to DEBUG instead of the default INFO level. default_log_levels = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO'] list value List of package logging levels in logger=LEVEL pairs. This option is ignored if log_config_append is set. enable_metadata_proxy = True boolean value Allow running metadata proxy. external_ingress_mark = 0x2 string value Iptables mangle mark used to mark ingress from external network. This mark will be masked with 0xffff so that only the lower 16 bits will be used. fatal_deprecations = False boolean value Enables or disables fatal status of deprecations. ha_confs_path = USDstate_path/ha_confs string value Location to store keepalived config files ha_keepalived_state_change_server_threads = <based on operating system> integer value Number of concurrent threads for keepalived server connection requests. More threads create a higher CPU load on the agent node. ha_vrrp_advert_int = 2 integer value The advertisement interval in seconds ha_vrrp_auth_password = None string value VRRP authentication password ha_vrrp_auth_type = PASS string value VRRP authentication type ha_vrrp_health_check_interval = 0 integer value The VRRP health check interval in seconds. Values > 0 enable VRRP health checks. Setting it to 0 disables VRRP health checks. Recommended value is 5. This will cause pings to be sent to the gateway IP address(es) - requires ICMP_ECHO_REQUEST to be enabled on the gateway(s). If a gateway fails, all routers will be reported as primary, and a primary election will be repeated in a round-robin fashion, until one of the routers restores the gateway connection. handle_internal_only_routers = True boolean value Indicates that this L3 agent should also handle routers that do not have an external network gateway configured. This option should be True only for a single agent in a Neutron deployment, and may be False for all agents if all routers must have an external network gateway. `instance_format = [instance: %(uuid)s] ` string value The format for an instance that is passed with the log message. `instance_uuid_format = [instance: %(uuid)s] ` string value The format for an instance UUID that is passed with the log message. interface_driver = None string value The driver used to manage the virtual interface. `ipv6_gateway = ` string value With IPv6, the network used for the external gateway does not need to have an associated subnet, since the automatically assigned link-local address (LLA) can be used. However, an IPv6 gateway address is needed for use as the -hop for the default route. 
If no IPv6 gateway address is configured here, (and only then) the neutron router will be configured to get its default route from router advertisements (RAs) from the upstream router; in which case the upstream router must also be configured to send these RAs. The ipv6_gateway, when configured, should be the LLA of the interface on the upstream router. If a -hop using a global unique address (GUA) is desired, it needs to be done via a subnet allocated to the network and not through this parameter. log-config-append = None string value The name of a logging configuration file. This file is appended to any existing logging configuration files. For details about logging configuration files, see the Python logging module documentation. Note that when logging configuration files are used then all logging configuration is set in the configuration file and other logging configuration options are ignored (for example, log-date-format). log-date-format = %Y-%m-%d %H:%M:%S string value Defines the format string for %%(asctime)s in log records. Default: %(default)s . This option is ignored if log_config_append is set. log-dir = None string value (Optional) The base directory used for relative log_file paths. This option is ignored if log_config_append is set. log-file = None string value (Optional) Name of log file to send logging output to. If no default is set, logging will go to stderr as defined by use_stderr. This option is ignored if log_config_append is set. log_rotate_interval = 1 integer value The amount of time before the log files are rotated. This option is ignored unless log_rotation_type is set to "interval". log_rotate_interval_type = days string value Rotation interval type. The time of the last file change (or the time when the service was started) is used when scheduling the rotation. log_rotation_type = none string value Log rotation type. logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s string value Format string to use for log messages with context. Used by oslo_log.formatters.ContextFormatter logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d string value Additional data to append to log message when logging level for the message is DEBUG. Used by oslo_log.formatters.ContextFormatter logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s string value Format string to use for log messages when context is undefined. Used by oslo_log.formatters.ContextFormatter logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s string value Prefix each line of exception output with this format. Used by oslo_log.formatters.ContextFormatter logging_user_identity_format = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s string value Defines the format string for %(user_identity)s that is used in logging_context_format_string. Used by oslo_log.formatters.ContextFormatter max_logfile_count = 30 integer value Maximum number of rotated log files. max_logfile_size_mb = 200 integer value Log file maximum size in MB. This option is ignored if "log_rotation_type" is not set to "size". max_rtr_adv_interval = 100 integer value MaxRtrAdvInterval setting for radvd.conf metadata_access_mark = 0x1 string value Iptables mangle mark used to mark metadata valid requests. 
This mark will be masked with 0xffff so that only the lower 16 bits will be used. metadata_port = 9697 port value TCP Port used by Neutron metadata namespace proxy. min_rtr_adv_interval = 30 integer value MinRtrAdvInterval setting for radvd.conf ovs_integration_bridge = br-int string value Name of Open vSwitch bridge to use ovs_use_veth = False boolean value Uses veth for an OVS interface or not. Support kernels with limited namespace support (e.g. RHEL 6.5) and rate limiting on router's gateway port so long as ovs_use_veth is set to True. pd_confs = USDstate_path/pd string value Location to store IPv6 PD files. periodic_fuzzy_delay = 5 integer value Range of seconds to randomly delay when starting the periodic task scheduler to reduce stampeding. (Disable by setting to 0) periodic_interval = 40 integer value Seconds between running periodic tasks. prefix_delegation_driver = dibbler string value Driver used for ipv6 prefix delegation. This needs to be an entry point defined in the neutron.agent.linux.pd_drivers namespace. See setup.cfg for entry points included with the neutron source. publish_errors = False boolean value Enables or disables publication of error events. ra_confs = USDstate_path/ra string value Location to store IPv6 RA config files `radvd_user = ` string value The username passed to radvd, used to drop root privileges and change user ID to username and group ID to the primary group of username. If no user specified (by default), the user executing the L3 agent will be passed. If "root" specified, because radvd is spawned as root, no "username" parameter will be passed. rate_limit_burst = 0 integer value Maximum number of logged messages per rate_limit_interval. rate_limit_except_level = CRITICAL string value Log level name used by rate limiting: CRITICAL, ERROR, INFO, WARNING, DEBUG or empty string. Logs with level greater or equal to rate_limit_except_level are not filtered. An empty string means that all levels are filtered. rate_limit_interval = 0 integer value Interval, number of seconds, of log rate limiting. rpc_response_max_timeout = 600 integer value Maximum seconds to wait for a response from an RPC call. rpc_state_report_workers = 1 integer value Number of RPC worker processes dedicated to state reports queue. rpc_workers = None integer value Number of RPC worker processes for service. If not specified, the default is equal to half the number of API workers. syslog-log-facility = LOG_USER string value Syslog facility to receive log lines. This option is ignored if log_config_append is set. use-journal = False boolean value Enable journald for logging. If running in a systemd environment you may wish to enable journal support. Doing so will use the journal native protocol which includes structured metadata in addition to log messages.This option is ignored if log_config_append is set. use-json = False boolean value Use JSON formatting for logging. This option is ignored if log_config_append is set. use-syslog = False boolean value Use syslog for logging. Existing syslog format is DEPRECATED and will be changed later to honor RFC5424. This option is ignored if log_config_append is set. use_eventlog = False boolean value Log output to Windows Event Log. use_stderr = False boolean value Log output to standard error. This option is ignored if log_config_append is set. vendor_pen = 8888 string value A decimal value as Vendor's Registered Private Enterprise Number as required by RFC3315 DUID-EN. 
watch-log-file = False boolean value Uses logging handler designed to watch file system. When log file is moved or removed this handler will open a new log file with specified path instantaneously. It makes sense only if log_file option is specified and Linux platform is used. This option is ignored if log_config_append is set. 8.2.2. agent The following table outlines the options available under the [agent] group in the /etc/neutron/l3_agent.ini file. Table 8.3. agent Configuration option = Default value Type Description availability_zone = nova string value Availability zone of this node extensions = [] list value Extensions list to use log_agent_heartbeats = False boolean value Log agent heartbeats report_interval = 30 floating point value Seconds between nodes reporting state to server; should be less than agent_down_time, best if it is half or less than agent_down_time. 8.2.3. network_log The following table outlines the options available under the [network_log] group in the /etc/neutron/l3_agent.ini file. Table 8.4. network_log Configuration option = Default value Type Description burst_limit = 25 integer value Maximum number of packets per rate_limit. local_output_log_base = None string value Output logfile path on agent side, default syslog file. rate_limit = 100 integer value Maximum packets logging per second. 8.2.4. ovs The following table outlines the options available under the [ovs] group in the /etc/neutron/l3_agent.ini file. Table 8.5. ovs Configuration option = Default value Type Description bridge_mac_table_size = 50000 integer value The maximum number of MAC addresses to learn on a bridge managed by the Neutron OVS agent. Values outside a reasonable range (10 to 1,000,000) might be overridden by Open vSwitch according to the documentation. igmp_snooping_enable = False boolean value Enable IGMP snooping for integration bridge. If this option is set to True, support for Internet Group Management Protocol (IGMP) is enabled in integration bridge. Setting this option to True will also enable Open vSwitch mcast-snooping-disable-flood-unregistered flag. This option will disable flooding of unregistered multicast packets to all ports. The switch will send unregistered multicast packets only to ports connected to multicast routers. ovsdb_connection = tcp:127.0.0.1:6640 string value The connection string for the OVSDB backend. Will be used for all ovsdb commands and by ovsdb-client when monitoring ovsdb_debug = False boolean value Enable OVSDB debug logs ovsdb_timeout = 10 integer value Timeout in seconds for ovsdb commands. If the timeout expires, ovsdb commands will fail with ALARMCLOCK error. ssl_ca_cert_file = None string value The Certificate Authority (CA) certificate to use when interacting with OVSDB. Required when using an "ssl:" prefixed ovsdb_connection ssl_cert_file = None string value The SSL certificate file to use when interacting with OVSDB. Required when using an "ssl:" prefixed ovsdb_connection ssl_key_file = None string value The SSL private key file to use when interacting with OVSDB. Required when using an "ssl:" prefixed ovsdb_connection 8.3. linuxbridge_agent.ini This section contains options for the /etc/neutron/plugins/ml2/linuxbridge_agent.ini file. 8.3.1. DEFAULT The following table outlines the options available under the [DEFAULT] group in the /etc/neutron/plugins/ml2/linuxbridge_agent.ini file. . 
Configuration option = Default value Type Description debug = False boolean value If set to true, the logging level will be set to DEBUG instead of the default INFO level. default_log_levels = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO'] list value List of package logging levels in logger=LEVEL pairs. This option is ignored if log_config_append is set. fatal_deprecations = False boolean value Enables or disables fatal status of deprecations. `instance_format = [instance: %(uuid)s] ` string value The format for an instance that is passed with the log message. `instance_uuid_format = [instance: %(uuid)s] ` string value The format for an instance UUID that is passed with the log message. log-config-append = None string value The name of a logging configuration file. This file is appended to any existing logging configuration files. For details about logging configuration files, see the Python logging module documentation. Note that when logging configuration files are used then all logging configuration is set in the configuration file and other logging configuration options are ignored (for example, log-date-format). log-date-format = %Y-%m-%d %H:%M:%S string value Defines the format string for %%(asctime)s in log records. Default: %(default)s . This option is ignored if log_config_append is set. log-dir = None string value (Optional) The base directory used for relative log_file paths. This option is ignored if log_config_append is set. log-file = None string value (Optional) Name of log file to send logging output to. If no default is set, logging will go to stderr as defined by use_stderr. This option is ignored if log_config_append is set. log_rotate_interval = 1 integer value The amount of time before the log files are rotated. This option is ignored unless log_rotation_type is set to "interval". log_rotate_interval_type = days string value Rotation interval type. The time of the last file change (or the time when the service was started) is used when scheduling the rotation. log_rotation_type = none string value Log rotation type. logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s string value Format string to use for log messages with context. Used by oslo_log.formatters.ContextFormatter logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d string value Additional data to append to log message when logging level for the message is DEBUG. Used by oslo_log.formatters.ContextFormatter logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s string value Format string to use for log messages when context is undefined. Used by oslo_log.formatters.ContextFormatter logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s string value Prefix each line of exception output with this format. 
Used by oslo_log.formatters.ContextFormatter logging_user_identity_format = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s string value Defines the format string for %(user_identity)s that is used in logging_context_format_string. Used by oslo_log.formatters.ContextFormatter max_logfile_count = 30 integer value Maximum number of rotated log files. max_logfile_size_mb = 200 integer value Log file maximum size in MB. This option is ignored if "log_rotation_type" is not set to "size". publish_errors = False boolean value Enables or disables publication of error events. rate_limit_burst = 0 integer value Maximum number of logged messages per rate_limit_interval. rate_limit_except_level = CRITICAL string value Log level name used by rate limiting: CRITICAL, ERROR, INFO, WARNING, DEBUG or empty string. Logs with level greater or equal to rate_limit_except_level are not filtered. An empty string means that all levels are filtered. rate_limit_interval = 0 integer value Interval, number of seconds, of log rate limiting. rpc_response_max_timeout = 600 integer value Maximum seconds to wait for a response from an RPC call. syslog-log-facility = LOG_USER string value Syslog facility to receive log lines. This option is ignored if log_config_append is set. use-journal = False boolean value Enable journald for logging. If running in a systemd environment you may wish to enable journal support. Doing so will use the journal native protocol which includes structured metadata in addition to log messages.This option is ignored if log_config_append is set. use-json = False boolean value Use JSON formatting for logging. This option is ignored if log_config_append is set. use-syslog = False boolean value Use syslog for logging. Existing syslog format is DEPRECATED and will be changed later to honor RFC5424. This option is ignored if log_config_append is set. use_eventlog = False boolean value Log output to Windows Event Log. use_stderr = False boolean value Log output to standard error. This option is ignored if log_config_append is set. watch-log-file = False boolean value Uses logging handler designed to watch file system. When log file is moved or removed this handler will open a new log file with specified path instantaneously. It makes sense only if log_file option is specified and Linux platform is used. This option is ignored if log_config_append is set. 8.3.2. agent The following table outlines the options available under the [agent] group in the /etc/neutron/plugins/ml2/linuxbridge_agent.ini file. Table 8.6. agent Configuration option = Default value Type Description dscp = None integer value The DSCP value to use for outer headers during tunnel encapsulation. dscp_inherit = False boolean value If set to True, the DSCP value of tunnel interfaces is overwritten and set to inherit. The DSCP value of the inner header is then copied to the outer header. extensions = [] list value Extensions list to use polling_interval = 2 integer value The number of seconds the agent will wait between polling for local device changes. quitting_rpc_timeout = 10 integer value Set new timeout in seconds for new rpc calls after agent receives SIGTERM. If value is set to 0, rpc timeout won't be changed 8.3.3. linux_bridge The following table outlines the options available under the [linux_bridge] group in the /etc/neutron/plugins/ml2/linuxbridge_agent.ini file. Table 8.7. 
linux_bridge Configuration option = Default value Type Description bridge_mappings = [] list value List of <physical_network>:<physical_bridge> physical_interface_mappings = [] list value Comma-separated list of <physical_network>:<physical_interface> tuples mapping physical network names to the agent's node-specific physical network interfaces to be used for flat and VLAN networks. All physical networks listed in network_vlan_ranges on the server should have mappings to appropriate interfaces on each agent. 8.3.4. network_log The following table outlines the options available under the [network_log] group in the /etc/neutron/plugins/ml2/linuxbridge_agent.ini file. Table 8.8. network_log Configuration option = Default value Type Description burst_limit = 25 integer value Maximum number of packets per rate_limit. local_output_log_base = None string value Output logfile path on agent side, default syslog file. rate_limit = 100 integer value Maximum packets logging per second. 8.3.5. securitygroup The following table outlines the options available under the [securitygroup] group in the /etc/neutron/plugins/ml2/linuxbridge_agent.ini file. Table 8.9. securitygroup Configuration option = Default value Type Description enable_ipset = True boolean value Use ipset to speed-up the iptables based security groups. Enabling ipset support requires that ipset is installed on L2 agent node. enable_security_group = True boolean value Controls whether the neutron security group API is enabled in the server. It should be false when using no security groups or using the nova security group API. firewall_driver = None string value Driver for security groups firewall in the L2 agent permitted_ethertypes = [] list value Comma-separated list of ethertypes to be permitted, in hexadecimal (starting with "0x"). For example, "0x4008" to permit InfiniBand. 8.3.6. vxlan The following table outlines the options available under the [vxlan] group in the /etc/neutron/plugins/ml2/linuxbridge_agent.ini file. Table 8.10. vxlan Configuration option = Default value Type Description arp_responder = False boolean value Enable local ARP responder which provides local responses instead of performing ARP broadcast into the overlay. Enabling local ARP responder is not fully compatible with the allowed-address-pairs extension. enable_vxlan = True boolean value Enable VXLAN on the agent. Can be enabled when agent is managed by ml2 plugin using linuxbridge mechanism driver l2_population = False boolean value Extension to use alongside ml2 plugin's l2population mechanism driver. It enables the plugin to populate VXLAN forwarding table. local_ip = None IP address value IP address of local overlay (tunnel) network endpoint. Use either an IPv4 or IPv6 address that resides on one of the host network interfaces. The IP version of this value must match the value of the overlay_ip_version option in the ML2 plug-in configuration file on the neutron server node(s). multicast_ranges = [] list value Optional comma-separated list of <multicast address>:<vni_min>:<vni_max> triples describing how to assign a multicast address to VXLAN according to its VNI ID. tos = None integer value TOS for vxlan interface protocol packets. This option is deprecated in favor of the dscp option in the AGENT section and will be removed in a future release. To convert the TOS value to DSCP, divide by 4. ttl = None integer value TTL for vxlan interface protocol packets. udp_dstport = None port value The UDP port used for VXLAN communication. 
By default, the Linux kernel doesn't use the IANA assigned standard value, so if you want to use it, this option must be set to 4789. It is not set by default because of backward compatibility. udp_srcport_max = 0 port value The maximum of the UDP source port range used for VXLAN communication. udp_srcport_min = 0 port value The minimum of the UDP source port range used for VXLAN communication. vxlan_group = 224.0.0.1 string value Multicast group(s) for vxlan interface. A range of group addresses may be specified by using CIDR notation. Specifying a range allows different VNIs to use different group addresses, reducing or eliminating spurious broadcast traffic to the tunnel endpoints. To reserve a unique group for each possible (24-bit) VNI, use a /8 such as 239.0.0.0/8. This setting must be the same on all the agents. 8.4. metadata_agent.ini This section contains options for the /etc/neutron/metadata_agent.ini file. 8.4.1. DEFAULT The following table outlines the options available under the [DEFAULT] group in the /etc/neutron/metadata_agent.ini file. . Configuration option = Default value Type Description auth_ca_cert = None string value Certificate Authority public key (CA cert) file for ssl debug = False boolean value If set to true, the logging level will be set to DEBUG instead of the default INFO level. default_log_levels = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO'] list value List of package logging levels in logger=LEVEL pairs. This option is ignored if log_config_append is set. fatal_deprecations = False boolean value Enables or disables fatal status of deprecations. `instance_format = [instance: %(uuid)s] ` string value The format for an instance that is passed with the log message. `instance_uuid_format = [instance: %(uuid)s] ` string value The format for an instance UUID that is passed with the log message. log-config-append = None string value The name of a logging configuration file. This file is appended to any existing logging configuration files. For details about logging configuration files, see the Python logging module documentation. Note that when logging configuration files are used then all logging configuration is set in the configuration file and other logging configuration options are ignored (for example, log-date-format). log-date-format = %Y-%m-%d %H:%M:%S string value Defines the format string for %%(asctime)s in log records. Default: %(default)s . This option is ignored if log_config_append is set. log-dir = None string value (Optional) The base directory used for relative log_file paths. This option is ignored if log_config_append is set. log-file = None string value (Optional) Name of log file to send logging output to. If no default is set, logging will go to stderr as defined by use_stderr. This option is ignored if log_config_append is set. log_rotate_interval = 1 integer value The amount of time before the log files are rotated. This option is ignored unless log_rotation_type is set to "interval". log_rotate_interval_type = days string value Rotation interval type. 
The time of the last file change (or the time when the service was started) is used when scheduling the rotation. log_rotation_type = none string value Log rotation type. logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s string value Format string to use for log messages with context. Used by oslo_log.formatters.ContextFormatter logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d string value Additional data to append to log message when logging level for the message is DEBUG. Used by oslo_log.formatters.ContextFormatter logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s string value Format string to use for log messages when context is undefined. Used by oslo_log.formatters.ContextFormatter logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s string value Prefix each line of exception output with this format. Used by oslo_log.formatters.ContextFormatter logging_user_identity_format = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s string value Defines the format string for %(user_identity)s that is used in logging_context_format_string. Used by oslo_log.formatters.ContextFormatter max_logfile_count = 30 integer value Maximum number of rotated log files. max_logfile_size_mb = 200 integer value Log file maximum size in MB. This option is ignored if "log_rotation_type" is not set to "size". metadata_backlog = 4096 integer value Number of backlog requests to configure the metadata server socket with `metadata_proxy_group = ` string value Group (gid or name) running metadata proxy after its initialization (if empty: agent effective group). `metadata_proxy_shared_secret = ` string value When proxying metadata requests, Neutron signs the Instance-ID header with a shared secret to prevent spoofing. You may select any string for a secret, but it must match here and in the configuration used by the Nova Metadata Server. NOTE: Nova uses the same config key, but in [neutron] section. metadata_proxy_socket = USDstate_path/metadata_proxy string value Location for Metadata Proxy UNIX domain socket. metadata_proxy_socket_mode = deduce string value Metadata Proxy UNIX domain socket mode, 4 values allowed: deduce : deduce mode from metadata_proxy_user/group values, user : set metadata proxy socket mode to 0o644, to use when metadata_proxy_user is agent effective user or root, group : set metadata proxy socket mode to 0o664, to use when metadata_proxy_group is agent effective group or root, all : set metadata proxy socket mode to 0o666, to use otherwise. `metadata_proxy_user = ` string value User (uid or name) running metadata proxy after its initialization (if empty: agent effective user). metadata_workers = <based on operating system> integer value Number of separate worker processes for metadata server (defaults to 0 when used with ML2/OVN and half of the number of CPUs with other backend drivers) `nova_client_cert = ` string value Client certificate for nova metadata api server. `nova_client_priv_key = ` string value Private key of client certificate. nova_metadata_host = 127.0.0.1 host address value IP address or DNS name of Nova metadata server. nova_metadata_insecure = False boolean value Allow to perform insecure SSL (https) requests to nova metadata nova_metadata_port = 8775 port value TCP Port used by Nova metadata server. 
nova_metadata_protocol = http string value Protocol to access nova metadata, http or https publish_errors = False boolean value Enables or disables publication of error events. rate_limit_burst = 0 integer value Maximum number of logged messages per rate_limit_interval. rate_limit_except_level = CRITICAL string value Log level name used by rate limiting: CRITICAL, ERROR, INFO, WARNING, DEBUG or empty string. Logs with level greater or equal to rate_limit_except_level are not filtered. An empty string means that all levels are filtered. rate_limit_interval = 0 integer value Interval, number of seconds, of log rate limiting. rpc_response_max_timeout = 600 integer value Maximum seconds to wait for a response from an RPC call. syslog-log-facility = LOG_USER string value Syslog facility to receive log lines. This option is ignored if log_config_append is set. use-journal = False boolean value Enable journald for logging. If running in a systemd environment you may wish to enable journal support. Doing so will use the journal native protocol which includes structured metadata in addition to log messages.This option is ignored if log_config_append is set. use-json = False boolean value Use JSON formatting for logging. This option is ignored if log_config_append is set. use-syslog = False boolean value Use syslog for logging. Existing syslog format is DEPRECATED and will be changed later to honor RFC5424. This option is ignored if log_config_append is set. use_eventlog = False boolean value Log output to Windows Event Log. use_stderr = False boolean value Log output to standard error. This option is ignored if log_config_append is set. watch-log-file = False boolean value Uses logging handler designed to watch file system. When log file is moved or removed this handler will open a new log file with specified path instantaneously. It makes sense only if log_file option is specified and Linux platform is used. This option is ignored if log_config_append is set. 8.4.2. agent The following table outlines the options available under the [agent] group in the /etc/neutron/metadata_agent.ini file. Table 8.11. agent Configuration option = Default value Type Description log_agent_heartbeats = False boolean value Log agent heartbeats report_interval = 30 floating point value Seconds between nodes reporting state to server; should be less than agent_down_time, best if it is half or less than agent_down_time. 8.4.3. cache The following table outlines the options available under the [cache] group in the /etc/neutron/metadata_agent.ini file. Table 8.12. cache Configuration option = Default value Type Description backend = dogpile.cache.null string value Cache backend module. For eventlet-based or environments with hundreds of threaded servers, Memcache with pooling (oslo_cache.memcache_pool) is recommended. For environments with less than 100 threaded servers, Memcached (dogpile.cache.memcached) or Redis (dogpile.cache.redis) is recommended. Test environments with a single instance of the server can use the dogpile.cache.memory backend. backend_argument = [] multi valued Arguments supplied to the backend module. Specify this option once per argument to be passed to the dogpile.cache backend. Example format: "<argname>:<value>". config_prefix = cache.oslo string value Prefix for building the configuration dictionary for the cache region. This should not need to be changed unless there is another dogpile.cache region with the same configuration name. 
dead_timeout = 60 floating point value Time in seconds before attempting to add a node back in the pool in the HashClient's internal mechanisms. debug_cache_backend = False boolean value Extra debugging from the cache backend (cache keys, get/set/delete/etc calls). This is only really useful if you need to see the specific cache-backend get/set/delete calls with the keys/values. Typically this should be left set to false. enable_retry_client = False boolean value Enable retry client mechanisms to handle failure. Those mechanisms can be used to wrap all kind of pymemcache clients. The wrapper allows you to define how many attempts to make and how long to wait between attemots. enable_socket_keepalive = False boolean value Global toggle for the socket keepalive of dogpile's pymemcache backend enabled = False boolean value Global toggle for caching. expiration_time = 600 integer value Default TTL, in seconds, for any cached item in the dogpile.cache region. This applies to any cached method that doesn't have an explicit cache expiration time defined for it. hashclient_retry_attempts = 2 integer value Amount of times a client should be tried before it is marked dead and removed from the pool in the HashClient's internal mechanisms. hashclient_retry_delay = 1 floating point value Time in seconds that should pass between retry attempts in the HashClient's internal mechanisms. memcache_dead_retry = 300 integer value Number of seconds memcached server is considered dead before it is tried again. (dogpile.cache.memcache and oslo_cache.memcache_pool backends only). `memcache_password = ` string value the password for the memcached which SASL enabled memcache_pool_connection_get_timeout = 10 integer value Number of seconds that an operation will wait to get a memcache client connection. memcache_pool_flush_on_reconnect = False boolean value Global toggle if memcache will be flushed on reconnect. (oslo_cache.memcache_pool backend only). memcache_pool_maxsize = 10 integer value Max total number of open connections to every memcached server. (oslo_cache.memcache_pool backend only). memcache_pool_unused_timeout = 60 integer value Number of seconds a connection to memcached is held unused in the pool before it is closed. (oslo_cache.memcache_pool backend only). memcache_sasl_enabled = False boolean value Enable the SASL(Simple Authentication and SecurityLayer) if the SASL_enable is true, else disable. memcache_servers = ['localhost:11211'] list value Memcache servers in the format of "host:port". This is used by backends dependent on Memcached.If dogpile.cache.memcached or oslo_cache.memcache_pool is used and a given host refer to an IPv6 or a given domain refer to IPv6 then you should prefix the given address withthe address family ( inet6 ) (e.g inet6[::1]:11211 , inet6:[fd12:3456:789a:1::1]:11211 , inet6:[controller-0.internalapi]:11211 ). If the address family is not given then these backends will use the default inet address family which corresponds to IPv4 memcache_socket_timeout = 1.0 floating point value Timeout in seconds for every call to a server. (dogpile.cache.memcache and oslo_cache.memcache_pool backends only). `memcache_username = ` string value the user name for the memcached which SASL enabled proxies = [] list value Proxy classes to import that will affect the way the dogpile.cache backend functions. See the dogpile.cache documentation on changing-backend-behavior. retry_attempts = 2 integer value Number of times to attempt an action before failing. 
retry_delay = 0 floating point value Number of seconds to sleep between each attempt. socket_keepalive_count = 1 integer value The maximum number of keepalive probes TCP should send before dropping the connection. Should be a positive integer greater than zero. socket_keepalive_idle = 1 integer value The time (in seconds) the connection needs to remain idle before TCP starts sending keepalive probes. Should be a positive integer greater than zero. socket_keepalive_interval = 1 integer value The time (in seconds) between individual keepalive probes. Should be a positive integer greater than zero. tls_allowed_ciphers = None string value Set the available ciphers for sockets created with the TLS context. It should be a string in the OpenSSL cipher list format. If not specified, all OpenSSL enabled ciphers will be available. tls_cafile = None string value Path to a file of concatenated CA certificates in PEM format necessary to establish the caching servers' authenticity. If tls_enabled is False, this option is ignored. tls_certfile = None string value Path to a single file in PEM format containing the client's certificate as well as any number of CA certificates needed to establish the certificate's authenticity. This file is only required when client side authentication is necessary. If tls_enabled is False, this option is ignored. tls_enabled = False boolean value Global toggle for TLS usage when communicating with the caching servers. tls_keyfile = None string value Path to a single file containing the client's private key. Otherwise the private key will be taken from the file specified in tls_certfile. If tls_enabled is False, this option is ignored. 8.5. metering_agent.ini This section contains options for the /etc/neutron/metering_agent.ini file. 8.5.1. DEFAULT The following table outlines the options available under the [DEFAULT] group in the /etc/neutron/metering_agent.ini file. . Configuration option = Default value Type Description debug = False boolean value If set to true, the logging level will be set to DEBUG instead of the default INFO level. default_log_levels = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO'] list value List of package logging levels in logger=LEVEL pairs. This option is ignored if log_config_append is set. driver = neutron.services.metering.drivers.noop.noop_driver.NoopMeteringDriver string value Metering driver fatal_deprecations = False boolean value Enables or disables fatal status of deprecations. granular_traffic_data = False boolean value Defines if the metering agent driver should present traffic data in a granular fashion, instead of grouping the traffic data for all projects and routers to which the labels are assigned. The default value is False for backward compatibility. `instance_format = [instance: %(uuid)s] ` string value The format for an instance that is passed with the log message. `instance_uuid_format = [instance: %(uuid)s] ` string value The format for an instance UUID that is passed with the log message. 
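To make the driver and granular_traffic_data options above easier to picture, here is a hypothetical [DEFAULT] fragment for /etc/neutron/metering_agent.ini. It keeps the documented no-op driver default and only switches traffic reporting to per-label granularity; it is a sketch, not a recommended configuration.
[DEFAULT]
# Metering driver; the documented default is the no-op driver.
driver = neutron.services.metering.drivers.noop.noop_driver.NoopMeteringDriver
# Report traffic data per label instead of one aggregate for all projects and routers.
granular_traffic_data = True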
interface_driver = None string value The driver used to manage the virtual interface. log-config-append = None string value The name of a logging configuration file. This file is appended to any existing logging configuration files. For details about logging configuration files, see the Python logging module documentation. Note that when logging configuration files are used then all logging configuration is set in the configuration file and other logging configuration options are ignored (for example, log-date-format). log-date-format = %Y-%m-%d %H:%M:%S string value Defines the format string for %%(asctime)s in log records. Default: %(default)s . This option is ignored if log_config_append is set. log-dir = None string value (Optional) The base directory used for relative log_file paths. This option is ignored if log_config_append is set. log-file = None string value (Optional) Name of log file to send logging output to. If no default is set, logging will go to stderr as defined by use_stderr. This option is ignored if log_config_append is set. log_rotate_interval = 1 integer value The amount of time before the log files are rotated. This option is ignored unless log_rotation_type is set to "interval". log_rotate_interval_type = days string value Rotation interval type. The time of the last file change (or the time when the service was started) is used when scheduling the rotation. log_rotation_type = none string value Log rotation type. logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s string value Format string to use for log messages with context. Used by oslo_log.formatters.ContextFormatter logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d string value Additional data to append to log message when logging level for the message is DEBUG. Used by oslo_log.formatters.ContextFormatter logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s string value Format string to use for log messages when context is undefined. Used by oslo_log.formatters.ContextFormatter logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s string value Prefix each line of exception output with this format. Used by oslo_log.formatters.ContextFormatter logging_user_identity_format = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s string value Defines the format string for %(user_identity)s that is used in logging_context_format_string. Used by oslo_log.formatters.ContextFormatter max_logfile_count = 30 integer value Maximum number of rotated log files. max_logfile_size_mb = 200 integer value Log file maximum size in MB. This option is ignored if "log_rotation_type" is not set to "size". measure_interval = 30 integer value Interval between two metering measures ovs_integration_bridge = br-int string value Name of Open vSwitch bridge to use ovs_use_veth = False boolean value Uses veth for an OVS interface or not. Support kernels with limited namespace support (e.g. RHEL 6.5) and rate limiting on router's gateway port so long as ovs_use_veth is set to True. publish_errors = False boolean value Enables or disables publication of error events. rate_limit_burst = 0 integer value Maximum number of logged messages per rate_limit_interval. 
rate_limit_except_level = CRITICAL string value Log level name used by rate limiting: CRITICAL, ERROR, INFO, WARNING, DEBUG or empty string. Logs with level greater or equal to rate_limit_except_level are not filtered. An empty string means that all levels are filtered. rate_limit_interval = 0 integer value Interval, number of seconds, of log rate limiting. report_interval = 300 integer value Interval between two metering reports rpc_response_max_timeout = 600 integer value Maximum seconds to wait for a response from an RPC call. syslog-log-facility = LOG_USER string value Syslog facility to receive log lines. This option is ignored if log_config_append is set. use-journal = False boolean value Enable journald for logging. If running in a systemd environment you may wish to enable journal support. Doing so will use the journal native protocol which includes structured metadata in addition to log messages.This option is ignored if log_config_append is set. use-json = False boolean value Use JSON formatting for logging. This option is ignored if log_config_append is set. use-syslog = False boolean value Use syslog for logging. Existing syslog format is DEPRECATED and will be changed later to honor RFC5424. This option is ignored if log_config_append is set. use_eventlog = False boolean value Log output to Windows Event Log. use_stderr = False boolean value Log output to standard error. This option is ignored if log_config_append is set. watch-log-file = False boolean value Uses logging handler designed to watch file system. When log file is moved or removed this handler will open a new log file with specified path instantaneously. It makes sense only if log_file option is specified and Linux platform is used. This option is ignored if log_config_append is set. 8.5.2. agent The following table outlines the options available under the [agent] group in the /etc/neutron/metering_agent.ini file. Table 8.13. agent Configuration option = Default value Type Description log_agent_heartbeats = False boolean value Log agent heartbeats report_interval = 30 floating point value Seconds between nodes reporting state to server; should be less than agent_down_time, best if it is half or less than agent_down_time. 8.5.3. ovs The following table outlines the options available under the [ovs] group in the /etc/neutron/metering_agent.ini file. Table 8.14. ovs Configuration option = Default value Type Description bridge_mac_table_size = 50000 integer value The maximum number of MAC addresses to learn on a bridge managed by the Neutron OVS agent. Values outside a reasonable range (10 to 1,000,000) might be overridden by Open vSwitch according to the documentation. igmp_snooping_enable = False boolean value Enable IGMP snooping for integration bridge. If this option is set to True, support for Internet Group Management Protocol (IGMP) is enabled in integration bridge. Setting this option to True will also enable Open vSwitch mcast-snooping-disable-flood-unregistered flag. This option will disable flooding of unregistered multicast packets to all ports. The switch will send unregistered multicast packets only to ports connected to multicast routers. ovsdb_connection = tcp:127.0.0.1:6640 string value The connection string for the OVSDB backend. Will be used for all ovsdb commands and by ovsdb-client when monitoring ovsdb_debug = False boolean value Enable OVSDB debug logs ovsdb_timeout = 10 integer value Timeout in seconds for ovsdb commands. If the timeout expires, ovsdb commands will fail with ALARMCLOCK error. 
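As a sketch of the [ovs] options listed so far, the fragment below keeps the documented OVSDB connection default and only raises the command timeout; the values are illustrative, not recommendations. The ssl_* options described next are only needed when ovsdb_connection uses an ssl: prefix.
[ovs]
# Connection string used for all ovsdb commands and by ovsdb-client when monitoring.
ovsdb_connection = tcp:127.0.0.1:6640
# Allow slower ovsdb commands before they fail with an ALARMCLOCK error (documented default is 10).
ovsdb_timeout = 30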
ssl_ca_cert_file = None string value The Certificate Authority (CA) certificate to use when interacting with OVSDB. Required when using an "ssl:" prefixed ovsdb_connection ssl_cert_file = None string value The SSL certificate file to use when interacting with OVSDB. Required when using an "ssl:" prefixed ovsdb_connection ssl_key_file = None string value The SSL private key file to use when interacting with OVSDB. Required when using an "ssl:" prefixed ovsdb_connection 8.6. ml2_conf.ini This section contains options for the /etc/neutron/plugins/ml2/ml2_conf.ini file. 8.6.1. DEFAULT The following table outlines the options available under the [DEFAULT] group in the /etc/neutron/plugins/ml2/ml2_conf.ini file. . Configuration option = Default value Type Description debug = False boolean value If set to true, the logging level will be set to DEBUG instead of the default INFO level. default_log_levels = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO'] list value List of package logging levels in logger=LEVEL pairs. This option is ignored if log_config_append is set. fatal_deprecations = False boolean value Enables or disables fatal status of deprecations. `instance_format = [instance: %(uuid)s] ` string value The format for an instance that is passed with the log message. `instance_uuid_format = [instance: %(uuid)s] ` string value The format for an instance UUID that is passed with the log message. log-config-append = None string value The name of a logging configuration file. This file is appended to any existing logging configuration files. For details about logging configuration files, see the Python logging module documentation. Note that when logging configuration files are used then all logging configuration is set in the configuration file and other logging configuration options are ignored (for example, log-date-format). log-date-format = %Y-%m-%d %H:%M:%S string value Defines the format string for %%(asctime)s in log records. Default: %(default)s . This option is ignored if log_config_append is set. log-dir = None string value (Optional) The base directory used for relative log_file paths. This option is ignored if log_config_append is set. log-file = None string value (Optional) Name of log file to send logging output to. If no default is set, logging will go to stderr as defined by use_stderr. This option is ignored if log_config_append is set. log_rotate_interval = 1 integer value The amount of time before the log files are rotated. This option is ignored unless log_rotation_type is set to "interval". log_rotate_interval_type = days string value Rotation interval type. The time of the last file change (or the time when the service was started) is used when scheduling the rotation. log_rotation_type = none string value Log rotation type. logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s string value Format string to use for log messages with context. 
Used by oslo_log.formatters.ContextFormatter logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d string value Additional data to append to log message when logging level for the message is DEBUG. Used by oslo_log.formatters.ContextFormatter logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s string value Format string to use for log messages when context is undefined. Used by oslo_log.formatters.ContextFormatter logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s string value Prefix each line of exception output with this format. Used by oslo_log.formatters.ContextFormatter logging_user_identity_format = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s string value Defines the format string for %(user_identity)s that is used in logging_context_format_string. Used by oslo_log.formatters.ContextFormatter max_logfile_count = 30 integer value Maximum number of rotated log files. max_logfile_size_mb = 200 integer value Log file maximum size in MB. This option is ignored if "log_rotation_type" is not set to "size". publish_errors = False boolean value Enables or disables publication of error events. rate_limit_burst = 0 integer value Maximum number of logged messages per rate_limit_interval. rate_limit_except_level = CRITICAL string value Log level name used by rate limiting: CRITICAL, ERROR, INFO, WARNING, DEBUG or empty string. Logs with level greater or equal to rate_limit_except_level are not filtered. An empty string means that all levels are filtered. rate_limit_interval = 0 integer value Interval, number of seconds, of log rate limiting. syslog-log-facility = LOG_USER string value Syslog facility to receive log lines. This option is ignored if log_config_append is set. use-journal = False boolean value Enable journald for logging. If running in a systemd environment you may wish to enable journal support. Doing so will use the journal native protocol which includes structured metadata in addition to log messages.This option is ignored if log_config_append is set. use-json = False boolean value Use JSON formatting for logging. This option is ignored if log_config_append is set. use-syslog = False boolean value Use syslog for logging. Existing syslog format is DEPRECATED and will be changed later to honor RFC5424. This option is ignored if log_config_append is set. use_eventlog = False boolean value Log output to Windows Event Log. use_stderr = False boolean value Log output to standard error. This option is ignored if log_config_append is set. watch-log-file = False boolean value Uses logging handler designed to watch file system. When log file is moved or removed this handler will open a new log file with specified path instantaneously. It makes sense only if log_file option is specified and Linux platform is used. This option is ignored if log_config_append is set. 8.6.2. ml2 The following table outlines the options available under the [ml2] group in the /etc/neutron/plugins/ml2/ml2_conf.ini file. Table 8.15. ml2 Configuration option = Default value Type Description extension_drivers = [] list value An ordered list of extension driver entrypoints to be loaded from the neutron.ml2.extension_drivers namespace. For example: extension_drivers = port_security,qos external_network_type = None string value Default network type for external networks when no provider attributes are specified. 
By default it is None, which means that if provider attributes are not specified while creating external networks then they will have the same type as tenant networks. Allowed values for external_network_type config option depend on the network type values configured in type_drivers config option. mechanism_drivers = [] list value An ordered list of networking mechanism driver entrypoints to be loaded from the neutron.ml2.mechanism_drivers namespace. overlay_ip_version = 4 integer value IP version of all overlay (tunnel) network endpoints. Use a value of 4 for IPv4 or 6 for IPv6. path_mtu = 0 integer value Maximum size of an IP packet (MTU) that can traverse the underlying physical network infrastructure without fragmentation when using an overlay/tunnel protocol. This option allows specifying a physical network MTU value that differs from the default global_physnet_mtu value. physical_network_mtus = [] list value A list of mappings of physical networks to MTU values. The format of the mapping is <physnet>:<mtu val>. This mapping allows specifying a physical network MTU value that differs from the default global_physnet_mtu value. tenant_network_types = ['local'] list value Ordered list of network_types to allocate as tenant networks. The default value local is useful for single-box testing but provides no connectivity between hosts. tunnelled_network_rp_name = rp_tunnelled string value Resource provider name for the host with tunnelled networks. This resource provider represents the available bandwidth for all tunnelled networks in a compute node. NOTE: this parameter is used both by the Neutron server and the mechanism driver agents; it is recommended not to change it once any resource provider register has been created. type_drivers = ['local', 'flat', 'vlan', 'gre', 'vxlan', 'geneve'] list value List of network type driver entrypoints to be loaded from the neutron.ml2.type_drivers namespace. 8.6.3. ml2_type_flat The following table outlines the options available under the [ml2_type_flat] group in the /etc/neutron/plugins/ml2/ml2_conf.ini file. Table 8.16. ml2_type_flat Configuration option = Default value Type Description flat_networks = * list value List of physical_network names with which flat networks can be created. Use default * to allow flat networks with arbitrary physical_network names. Use an empty list to disable flat networks. 8.6.4. ml2_type_geneve The following table outlines the options available under the [ml2_type_geneve] group in the /etc/neutron/plugins/ml2/ml2_conf.ini file. Table 8.17. ml2_type_geneve Configuration option = Default value Type Description max_header_size = 30 integer value The maximum allowed Geneve encapsulation header size (in bytes). Geneve header is extensible, this value is used to calculate the maximum MTU for Geneve-based networks. The default is 30, which is the size of the Geneve header without any additional option headers. Note the default is not enough for OVN which requires at least 38. vni_ranges = [] list value Comma-separated list of <vni_min>:<vni_max> tuples enumerating ranges of Geneve VNI IDs that are available for tenant network allocation. Note OVN does not use the actual values. 8.6.5. ml2_type_gre The following table outlines the options available under the [ml2_type_gre] group in the /etc/neutron/plugins/ml2/ml2_conf.ini file. Table 8.18. 
ml2_type_gre Configuration option = Default value Type Description tunnel_id_ranges = [] list value Comma-separated list of <tun_min>:<tun_max> tuples enumerating ranges of GRE tunnel IDs that are available for tenant network allocation 8.6.6. ml2_type_vlan The following table outlines the options available under the [ml2_type_vlan] group in the /etc/neutron/plugins/ml2/ml2_conf.ini file. Table 8.19. ml2_type_vlan Configuration option = Default value Type Description network_vlan_ranges = [] list value List of <physical_network>:<vlan_min>:<vlan_max> or <physical_network> specifying physical_network names usable for VLAN provider and tenant networks, as well as ranges of VLAN tags on each available for allocation to tenant networks. If no range is defined, the whole valid VLAN ID set [1, 4094] will be assigned. 8.6.7. ml2_type_vxlan The following table outlines the options available under the [ml2_type_vxlan] group in the /etc/neutron/plugins/ml2/ml2_conf.ini file. Table 8.20. ml2_type_vxlan Configuration option = Default value Type Description vni_ranges = [] list value Comma-separated list of <vni_min>:<vni_max> tuples enumerating ranges of VXLAN VNI IDs that are available for tenant network allocation vxlan_group = None string value Multicast group for VXLAN. When configured, will enable sending all broadcast traffic to this multicast group. When left unconfigured, will disable multicast VXLAN mode. 8.6.8. ovs_driver The following table outlines the options available under the [ovs_driver] group in the /etc/neutron/plugins/ml2/ml2_conf.ini file. Table 8.21. ovs_driver Configuration option = Default value Type Description vnic_type_prohibit_list = [] list value Comma-separated list of VNIC types for which support is administratively prohibited by the mechanism driver. Please note that the supported vnic_types depend on your network interface card, on the kernel version of your operating system, and on other factors, like OVS version. In case of ovs mechanism driver the valid vnic types are normal and direct. Note that direct is supported only from kernel 4.8, and from ovs 2.8.0. Bind DIRECT (SR-IOV) port allows to offload the OVS flows using tc to the SR-IOV NIC. This allows to support hardware offload via tc and that allows us to manage the VF by OpenFlow control plane using representor net-device. 8.6.9. securitygroup The following table outlines the options available under the [securitygroup] group in the /etc/neutron/plugins/ml2/ml2_conf.ini file. Table 8.22. securitygroup Configuration option = Default value Type Description enable_ipset = True boolean value Use ipset to speed-up the iptables based security groups. Enabling ipset support requires that ipset is installed on L2 agent node. enable_security_group = True boolean value Controls whether the neutron security group API is enabled in the server. It should be false when using no security groups or using the nova security group API. firewall_driver = None string value Driver for security groups firewall in the L2 agent permitted_ethertypes = [] list value Comma-separated list of ethertypes to be permitted, in hexadecimal (starting with "0x"). For example, "0x4008" to permit InfiniBand. 8.6.10. sriov_driver The following table outlines the options available under the [sriov_driver] group in the /etc/neutron/plugins/ml2/ml2_conf.ini file. Table 8.23. 
sriov_driver Configuration option = Default value Type Description vnic_type_prohibit_list = [] list value Comma-separated list of VNIC types for which support is administratively prohibited by the mechanism driver. Please note that the supported vnic_types depend on your network interface card, on the kernel version of your operating system, and on other factors. In case of sriov mechanism driver the valid VNIC types are direct, macvtap and direct-physical. 8.7. neutron.conf This section contains options for the /etc/neutron/neutron.conf file. 8.7.1. DEFAULT The following table outlines the options available under the [DEFAULT] group in the /etc/neutron/neutron.conf file. . Configuration option = Default value Type Description agent_down_time = 75 integer value Seconds after which an agent is regarded as down; should be at least twice report_interval, to be sure the agent is down for good. allow_automatic_dhcp_failover = True boolean value Automatically remove networks from offline DHCP agents. allow_automatic_l3agent_failover = False boolean value Automatically reschedule routers from offline L3 agents to online L3 agents. allow_bulk = True boolean value Allow the usage of the bulk API allowed_conntrack_helpers = [{'amanda': 'tcp'}, {'ftp': 'tcp'}, {'h323': 'udp'}, {'h323': 'tcp'}, {'irc': 'tcp'}, {'netbios-ns': 'udp'}, {'pptp': 'tcp'}, {'sane': 'tcp'}, {'sip': 'udp'}, {'sip': 'tcp'}, {'snmp': 'udp'}, {'tftp': 'udp'}] list value Defines the allowed conntrack helpers, and conntrack helper module protocol constraints. `api_extensions_path = ` string value The path for API extensions. Note that this can be a colon-separated list of paths. For example: api_extensions_path = extensions:/path/to/more/exts:/even/more/exts. The path of neutron.extensions is appended to this, so if your extensions are in there you don't need to specify them here. api_paste_config = api-paste.ini string value File name for the paste.deploy config for api service api_workers = None integer value Number of separate API worker processes for service. If not specified, the default is equal to the number of CPUs available for best performance, capped by potential RAM usage. auth_strategy = keystone string value The type of authentication to use backdoor_port = None string value Enable eventlet backdoor. Acceptable values are 0, <port>, and <start>:<end>, where 0 results in listening on a random tcp port number; <port> results in listening on the specified port number (and not enabling backdoor if that port is in use); and <start>:<end> results in listening on the smallest unused port number within the specified range of port numbers. The chosen port is displayed in the service's log file. backdoor_socket = None string value Enable eventlet backdoor, using the provided path as a unix socket that can receive connections. This option is mutually exclusive with backdoor_port in that only one should be provided. If both are provided then the existence of this option overrides the usage of that option. Inside the path {pid} will be replaced with the PID of the current process. backlog = 4096 integer value Number of backlog requests to configure the socket with base_mac = fa:16:3e:00:00:00 string value The base MAC address Neutron will use for VIFs. The first 3 octets will remain unchanged. If the 4th octet is not 00, it will also be used. The others will be randomly generated. bind_host = 0.0.0.0 host address value The host IP to bind to. 
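As a small illustration of the server-level options covered so far, the sketch below is a hypothetical [DEFAULT] fragment of /etc/neutron/neutron.conf. The core_plugin value is an assumption (ml2 is shown purely as an example); bind_port, described next in this table, is shown with its documented default.
[DEFAULT]
# Keystone-based API authentication (the documented default).
auth_strategy = keystone
# Core plugin is deployment-specific; ml2 is only an illustrative value.
core_plugin = ml2
# Listen on all interfaces on the default API port.
bind_host = 0.0.0.0
bind_port = 9696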
bind_port = 9696 port value The port to bind to client_socket_timeout = 900 integer value Timeout for client connections' socket operations. If an incoming connection is idle for this number of seconds it will be closed. A value of 0 means wait forever. conn_pool_min_size = 2 integer value The pool size limit for connections expiration policy conn_pool_ttl = 1200 integer value The time-to-live in sec of idle connections in the pool control_exchange = openstack string value The default exchange under which topics are scoped. May be overridden by an exchange name specified in the transport_url option. core_plugin = None string value The core plugin Neutron will use debug = False boolean value If set to true, the logging level will be set to DEBUG instead of the default INFO level. default_availability_zones = [] list value Default value of availability zone hints. The availability zone aware schedulers use this when the resources availability_zone_hints is empty. Multiple availability zones can be specified by a comma separated string. This value can be empty. In this case, even if availability_zone_hints for a resource is empty, availability zone is considered for high availability while scheduling the resource. default_log_levels = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO'] list value List of package logging levels in logger=LEVEL pairs. This option is ignored if log_config_append is set. dhcp_agent_notification = True boolean value Allow sending resource operation notification to DHCP agent dhcp_agents_per_network = 1 integer value Number of DHCP agents scheduled to host a tenant network. If this number is greater than 1, the scheduler automatically assigns multiple DHCP agents for a given tenant network, providing high availability for the DHCP service. However this does not provide high availability for the IPv6 metadata service in isolated networks. dhcp_lease_duration = 86400 integer value DHCP lease duration (in seconds). Use -1 to tell dnsmasq to use infinite lease times. dhcp_load_type = networks string value Representing the resource type whose load is being reported by the agent. This can be "networks", "subnets" or "ports". When specified (Default is networks), the server will extract particular load sent as part of its agent configuration object from the agent report state, which is the number of resources being consumed, at every report_interval.dhcp_load_type can be used in combination with network_scheduler_driver = neutron.scheduler.dhcp_agent_scheduler.WeightScheduler When the network_scheduler_driver is WeightScheduler, dhcp_load_type can be configured to represent the choice for the resource being balanced. Example: dhcp_load_type=networks dns_domain = openstacklocal string value Domain to use for building the hostnames dvr_base_mac = fa:16:3f:00:00:00 string value The base mac address used for unique DVR instances by Neutron. The first 3 octets will remain unchanged. If the 4th octet is not 00, it will also be used. The others will be randomly generated. 
The dvr_base_mac must be different from base_mac to avoid mixing them up with MAC's allocated for tenant ports. A 4 octet example would be dvr_base_mac = fa:16:3f:4f:00:00. The default is 3 octet enable_dvr = True boolean value Determine if setup is configured for DVR. If False, DVR API extension will be disabled. enable_new_agents = True boolean value Agent starts with admin_state_up=False when enable_new_agents=False. In the case, user's resources will not be scheduled automatically to the agent until admin changes admin_state_up to True. enable_services_on_agents_with_admin_state_down = False boolean value Enable services on an agent with admin_state_up False. If this option is False, when admin_state_up of an agent is turned False, services on it will be disabled. Agents with admin_state_up False are not selected for automatic scheduling regardless of this option. But manual scheduling to such agents is available if this option is True. enable_snat_by_default = True boolean value Define the default value of enable_snat if not provided in external_gateway_info. enable_traditional_dhcp = True boolean value If False, neutron-server will disable the following DHCP-agent related functions:1. DHCP provisioning block 2. DHCP scheduler API extension 3. Network scheduling mechanism 4. DHCP RPC/notification executor_thread_pool_size = 64 integer value Size of executor thread pool when executor is threading or eventlet. external_dns_driver = None string value Driver for external DNS integration. fatal_deprecations = False boolean value Enables or disables fatal status of deprecations. filter_validation = True boolean value If True, then allow plugins to decide whether to perform validations on filter parameters. Filter validation is enabled if this config is turned on and it is supported by all plugins global_physnet_mtu = 1500 integer value MTU of the underlying physical network. Neutron uses this value to calculate MTU for all virtual network components. For flat and VLAN networks, neutron uses this value without modification. For overlay networks such as VXLAN, neutron automatically subtracts the overlay protocol overhead from this value. Defaults to 1500, the standard value for Ethernet. graceful_shutdown_timeout = 60 integer value Specify a timeout after which a gracefully shutdown server will exit. Zero value means endless wait. host = <based on operating system> host address value Hostname to be used by the Neutron server, agents and services running on this machine. All the agents and services running on this machine must use the same host value. host_dvr_for_dhcp = True boolean value Flag to determine if hosting a DVR local router to the DHCP agent is desired. If False, any L3 function supported by the DHCP agent instance will not be possible, for instance: DNS. http_retries = 3 integer value Number of times client connections (nova, ironic) should be retried on a failed HTTP call. 0 (zero) means connection is attempted only once (not retried). Setting to any positive integer means that on failure the connection is retried that many times. For example, setting to 3 means total attempts to connect will be 4. `instance_format = [instance: %(uuid)s] ` string value The format for an instance that is passed with the log message. `instance_uuid_format = [instance: %(uuid)s] ` string value The format for an instance UUID that is passed with the log message. interface_driver = None string value The driver used to manage the virtual interface. 
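The dhcp_load_type and network_scheduler_driver combination described earlier in this table reads more easily as a configuration fragment; the sketch below uses only values quoted in those descriptions and is not a tuning recommendation.
[DEFAULT]
# Balance DHCP agents by the number of networks they host.
network_scheduler_driver = neutron.scheduler.dhcp_agent_scheduler.WeightScheduler
dhcp_load_type = networks
# Values greater than 1 schedule multiple DHCP agents per tenant network for high availability.
dhcp_agents_per_network = 2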
ipam_driver = internal string value Neutron IPAM (IP address management) driver to use. By default, the reference implementation of the Neutron IPAM driver is used. ipv6_pd_enabled = False boolean value Warning: This feature is experimental with low test coverage, and the Dibbler client which is used for this feature is no longer maintained! Enables IPv6 Prefix Delegation for automatic subnet CIDR allocation. Set to True to enable IPv6 Prefix Delegation for subnet allocation in a PD-capable environment. Users making subnet creation requests for IPv6 subnets without providing a CIDR or subnetpool ID will be given a CIDR via the Prefix Delegation mechanism. Note that enabling PD will override the behavior of the default IPv6 subnetpool. l3_ha = False boolean value Enable HA mode for virtual routers. l3_ha_net_cidr = 169.254.192.0/18 string value Subnet used for the l3 HA admin network. `l3_ha_network_physical_name = ` string value The physical network name with which the HA network can be created. `l3_ha_network_type = ` string value The network type to use when creating the HA network for an HA router. By default or if empty, the first tenant_network_types is used. This is helpful when the VRRP traffic should use a specific network which is not the default one. log-config-append = None string value The name of a logging configuration file. This file is appended to any existing logging configuration files. For details about logging configuration files, see the Python logging module documentation. Note that when logging configuration files are used then all logging configuration is set in the configuration file and other logging configuration options are ignored (for example, log-date-format). log-date-format = %Y-%m-%d %H:%M:%S string value Defines the format string for %%(asctime)s in log records. Default: %(default)s . This option is ignored if log_config_append is set. log-dir = None string value (Optional) The base directory used for relative log_file paths. This option is ignored if log_config_append is set. log-file = None string value (Optional) Name of log file to send logging output to. If no default is set, logging will go to stderr as defined by use_stderr. This option is ignored if log_config_append is set. log_options = True boolean value Enables or disables logging values of all registered options when starting a service (at DEBUG level). log_rotate_interval = 1 integer value The amount of time before the log files are rotated. This option is ignored unless log_rotation_type is set to "interval". log_rotate_interval_type = days string value Rotation interval type. The time of the last file change (or the time when the service was started) is used when scheduling the rotation. log_rotation_type = none string value Log rotation type. logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s string value Format string to use for log messages with context. Used by oslo_log.formatters.ContextFormatter logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d string value Additional data to append to log message when logging level for the message is DEBUG. Used by oslo_log.formatters.ContextFormatter logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s string value Format string to use for log messages when context is undefined. 
Used by oslo_log.formatters.ContextFormatter logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s string value Prefix each line of exception output with this format. Used by oslo_log.formatters.ContextFormatter logging_user_identity_format = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s string value Defines the format string for %(user_identity)s that is used in logging_context_format_string. Used by oslo_log.formatters.ContextFormatter max_allowed_address_pair = 10 integer value Maximum number of allowed address pairs max_dns_nameservers = 5 integer value Maximum number of DNS nameservers per subnet max_header_line = 16384 integer value Maximum line size of message headers to be accepted. max_header_line may need to be increased when using large tokens (typically those generated when keystone is configured to use PKI tokens with big service catalogs). max_l3_agents_per_router = 3 integer value Maximum number of L3 agents on which an HA router will be scheduled. If it is set to 0 then the router will be scheduled on every agent. max_logfile_count = 30 integer value Maximum number of rotated log files. max_logfile_size_mb = 200 integer value Log file maximum size in MB. This option is ignored if "log_rotation_type" is not set to "size". max_routes = 30 integer value Maximum number of routes per router max_subnet_host_routes = 20 integer value Maximum number of host routes per subnet `metadata_proxy_group = ` string value Group (gid or name) running metadata proxy after its initialization (if empty: agent effective group). metadata_proxy_socket = $state_path/metadata_proxy string value Location for Metadata Proxy UNIX domain socket. `metadata_proxy_user = ` string value User (uid or name) running metadata proxy after its initialization (if empty: agent effective user). network_auto_schedule = True boolean value Allow auto scheduling networks to DHCP agent. network_link_prefix = None string value This string is prepended to the normal URL that is returned in links to the OpenStack Network API. If it is empty (the default), the URLs are returned unchanged. network_scheduler_driver = neutron.scheduler.dhcp_agent_scheduler.WeightScheduler string value Driver to use for scheduling network to DHCP agent notify_nova_on_port_data_changes = True boolean value Send notification to nova when port data (fixed_ips/floatingip) changes so nova can update its cache. notify_nova_on_port_status_changes = True boolean value Send notification to nova when port status changes pagination_max_limit = -1 string value The maximum number of items returned in a single response. A value of infinite or a negative integer means no limit. periodic_fuzzy_delay = 5 integer value Range of seconds to randomly delay when starting the periodic task scheduler to reduce stampeding. (Disable by setting to 0) periodic_interval = 40 integer value Seconds between running periodic tasks. publish_errors = False boolean value Enables or disables publication of error events. rate_limit_burst = 0 integer value Maximum number of logged messages per rate_limit_interval. rate_limit_except_level = CRITICAL string value Log level name used by rate limiting: CRITICAL, ERROR, INFO, WARNING, DEBUG or empty string. Logs with level greater or equal to rate_limit_except_level are not filtered. An empty string means that all levels are filtered. rate_limit_interval = 0 integer value Interval, number of seconds, of log rate limiting. 
retry_until_window = 30 integer value Number of seconds to keep retrying to listen router_auto_schedule = True boolean value Allow auto scheduling of routers to L3 agent. router_distributed = False boolean value System-wide flag to determine the type of router that tenants can create. Only admin can override. router_scheduler_driver = neutron.scheduler.l3_agent_scheduler.LeastRoutersScheduler string value Driver to use for scheduling router to a default L3 agent rpc_conn_pool_size = 30 integer value Size of RPC connection pool. rpc_ping_enabled = False boolean value Add an endpoint to answer to ping calls. Endpoint is named oslo_rpc_server_ping rpc_resources_processing_step = 20 integer value Number of resources for neutron to divide the large RPC call data sets. It can be reduced if RPC timeout occurred. The best value can be determined empirically in your environment. rpc_response_max_timeout = 600 integer value Maximum seconds to wait for a response from an RPC call. rpc_response_timeout = 60 integer value Seconds to wait for a response from a call. rpc_state_report_workers = 1 integer value Number of RPC worker processes dedicated to state reports queue. rpc_workers = None integer value Number of RPC worker processes for service. If not specified, the default is equal to half the number of API workers. run_external_periodic_tasks = True boolean value Some periodic tasks can be run in a separate process. Should we run them here? send_events_interval = 2 integer value Number of seconds between sending events to nova if there are any events to send. service_plugins = [] list value The service plugins Neutron will use setproctitle = on string value Set process name to match child worker role. Available options are: off - retains the behavior; on - renames processes to neutron-server: role (original string) ; brief - renames the same as on , but without the original string, such as neutron-server: role . state_path = /var/lib/neutron string value Where to store Neutron state files. This directory must be writable by the agent. syslog-log-facility = LOG_USER string value Syslog facility to receive log lines. This option is ignored if log_config_append is set. tcp_keepidle = 600 integer value Sets the value of TCP_KEEPIDLE in seconds for each server socket. Not supported on OS X. transport_url = rabbit:// string value The network address and optional user credentials for connecting to the messaging backend, in URL format. The expected format is: driver://[user:pass@]host:port[,[userN:passN@]hostN:portN]/virtual_host?query Example: rabbit://rabbitmq:[email protected]:5672// For full details on the fields in the URL see the documentation of oslo_messaging.TransportURL at https://docs.openstack.org/oslo.messaging/latest/reference/transport.html use-journal = False boolean value Enable journald for logging. If running in a systemd environment you may wish to enable journal support. Doing so will use the journal native protocol which includes structured metadata in addition to log messages.This option is ignored if log_config_append is set. use-json = False boolean value Use JSON formatting for logging. This option is ignored if log_config_append is set. use-syslog = False boolean value Use syslog for logging. Existing syslog format is DEPRECATED and will be changed later to honor RFC5424. This option is ignored if log_config_append is set. use_eventlog = False boolean value Log output to Windows Event Log. 
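To illustrate the transport_url format described above, here is a hedged [DEFAULT] example; the RabbitMQ host name and credentials are placeholders rather than values taken from this document.
[DEFAULT]
# Messaging backend in driver://[user:pass@]host:port/virtual_host form.
transport_url = rabbit://openstack:examplepass@controller:5672/
# Wait up to 60 seconds for an RPC reply (the documented default).
rpc_response_timeout = 60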
use_ssl = False boolean value Enable SSL on the API server use_stderr = False boolean value Log output to standard error. This option is ignored if log_config_append is set. vlan_transparent = False boolean value If True, then allow plugins that support it to create VLAN transparent networks. watch-log-file = False boolean value Uses logging handler designed to watch file system. When log file is moved or removed this handler will open a new log file with specified path instantaneously. It makes sense only if log_file option is specified and Linux platform is used. This option is ignored if log_config_append is set. wsgi_default_pool_size = 100 integer value Size of the pool of greenthreads used by wsgi wsgi_keep_alive = True boolean value If False, closes the client socket connection explicitly. wsgi_log_format = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f string value A python format string that is used as the template to generate log lines. The following values can beformatted into it: client_ip, date_time, request_line, status_code, body_length, wall_seconds. wsgi_server_debug = False boolean value True if the server should send exception tracebacks to the clients on 500 errors. If False, the server will respond with empty bodies. 8.7.2. agent The following table outlines the options available under the [agent] group in the /etc/neutron/neutron.conf file. Table 8.24. agent Configuration option = Default value Type Description availability_zone = nova string value Availability zone of this node check_child_processes_action = respawn string value Action to be executed when a child process dies check_child_processes_interval = 60 integer value Interval between checks of child process liveness (seconds), use 0 to disable comment_iptables_rules = True boolean value Add comments to iptables rules. Set to false to disallow the addition of comments to generated iptables rules that describe each rule's purpose. System must support the iptables comments module for addition of comments. debug_iptables_rules = False boolean value Duplicate every iptables difference calculation to ensure the format being generated matches the format of iptables-save. This option should not be turned on for production systems because it imposes a performance penalty. kill_scripts_path = /etc/neutron/kill_scripts/ string value Location of scripts used to kill external processes. Names of scripts here must follow the pattern: "<process-name>-kill" where <process-name> is name of the process which should be killed using this script. For example, kill script for dnsmasq process should be named "dnsmasq-kill". If path is set to None, then default "kill" command will be used to stop processes. log_agent_heartbeats = False boolean value Log agent heartbeats report_interval = 30 floating point value Seconds between nodes reporting state to server; should be less than agent_down_time, best if it is half or less than agent_down_time. root_helper = sudo string value Root helper application. Use sudo neutron-rootwrap /etc/neutron/rootwrap.conf to use the real root filter facility. Change to sudo to skip the filtering and just run the command directly. root_helper_daemon = None string value Root helper daemon application to use when possible. Use sudo neutron-rootwrap-daemon /etc/neutron/rootwrap.conf to run rootwrap in "daemon mode" which has been reported to improve performance at scale. 
For more information on running rootwrap in "daemon mode", see: https://docs.openstack.org/oslo.rootwrap/latest/user/usage.html#daemon-mode use_helper_for_ns_read = True boolean value Use the root helper when listing the namespaces on a system. This may not be required depending on the security configuration. If the root helper is not required, set this to False for a performance improvement. use_random_fully = True boolean value Use random-fully in SNAT masquerade rules. 8.7.3. cache The following table outlines the options available under the [cache] group in the /etc/neutron/neutron.conf file. Table 8.25. cache Configuration option = Default value Type Description backend = dogpile.cache.null string value Cache backend module. For eventlet-based or environments with hundreds of threaded servers, Memcache with pooling (oslo_cache.memcache_pool) is recommended. For environments with less than 100 threaded servers, Memcached (dogpile.cache.memcached) or Redis (dogpile.cache.redis) is recommended. Test environments with a single instance of the server can use the dogpile.cache.memory backend. backend_argument = [] multi valued Arguments supplied to the backend module. Specify this option once per argument to be passed to the dogpile.cache backend. Example format: "<argname>:<value>". config_prefix = cache.oslo string value Prefix for building the configuration dictionary for the cache region. This should not need to be changed unless there is another dogpile.cache region with the same configuration name. dead_timeout = 60 floating point value Time in seconds before attempting to add a node back in the pool in the HashClient's internal mechanisms. debug_cache_backend = False boolean value Extra debugging from the cache backend (cache keys, get/set/delete/etc calls). This is only really useful if you need to see the specific cache-backend get/set/delete calls with the keys/values. Typically this should be left set to false. enable_retry_client = False boolean value Enable retry client mechanisms to handle failure. Those mechanisms can be used to wrap all kinds of pymemcache clients. The wrapper allows you to define how many attempts to make and how long to wait between attempts. enable_socket_keepalive = False boolean value Global toggle for the socket keepalive of dogpile's pymemcache backend. enabled = False boolean value Global toggle for caching. expiration_time = 600 integer value Default TTL, in seconds, for any cached item in the dogpile.cache region. This applies to any cached method that doesn't have an explicit cache expiration time defined for it. hashclient_retry_attempts = 2 integer value Number of times a client should be tried before it is marked dead and removed from the pool in the HashClient's internal mechanisms. hashclient_retry_delay = 1 floating point value Time in seconds that should pass between retry attempts in the HashClient's internal mechanisms. memcache_dead_retry = 300 integer value Number of seconds memcached server is considered dead before it is tried again. (dogpile.cache.memcache and oslo_cache.memcache_pool backends only). `memcache_password = ` string value The password for memcached when SASL is enabled. memcache_pool_connection_get_timeout = 10 integer value Number of seconds that an operation will wait to get a memcache client connection. memcache_pool_flush_on_reconnect = False boolean value Global toggle if memcache will be flushed on reconnect. (oslo_cache.memcache_pool backend only). 
memcache_pool_maxsize = 10 integer value Max total number of open connections to every memcached server. (oslo_cache.memcache_pool backend only). memcache_pool_unused_timeout = 60 integer value Number of seconds a connection to memcached is held unused in the pool before it is closed. (oslo_cache.memcache_pool backend only). memcache_sasl_enabled = False boolean value Enable SASL (Simple Authentication and Security Layer) authentication for memcached when set to True; otherwise it is disabled. memcache_servers = ['localhost:11211'] list value Memcache servers in the format of "host:port". This is used by backends dependent on Memcached. If dogpile.cache.memcached or oslo_cache.memcache_pool is used and a given host or domain refers to an IPv6 address, then you should prefix the given address with the address family ( inet6 ) (e.g. inet6:[::1]:11211 , inet6:[fd12:3456:789a:1::1]:11211 , inet6:[controller-0.internalapi]:11211 ). If the address family is not given then these backends will use the default inet address family, which corresponds to IPv4. memcache_socket_timeout = 1.0 floating point value Timeout in seconds for every call to a server. (dogpile.cache.memcache and oslo_cache.memcache_pool backends only). `memcache_username = ` string value The user name for memcached when SASL is enabled. proxies = [] list value Proxy classes to import that will affect the way the dogpile.cache backend functions. See the dogpile.cache documentation on changing-backend-behavior. retry_attempts = 2 integer value Number of times to attempt an action before failing. retry_delay = 0 floating point value Number of seconds to sleep between each attempt. socket_keepalive_count = 1 integer value The maximum number of keepalive probes TCP should send before dropping the connection. Should be a positive integer greater than zero. socket_keepalive_idle = 1 integer value The time (in seconds) the connection needs to remain idle before TCP starts sending keepalive probes. Should be a positive integer greater than zero. socket_keepalive_interval = 1 integer value The time (in seconds) between individual keepalive probes. Should be a positive integer greater than zero. tls_allowed_ciphers = None string value Set the available ciphers for sockets created with the TLS context. It should be a string in the OpenSSL cipher list format. If not specified, all OpenSSL enabled ciphers will be available. tls_cafile = None string value Path to a file of concatenated CA certificates in PEM format necessary to establish the caching servers' authenticity. If tls_enabled is False, this option is ignored. tls_certfile = None string value Path to a single file in PEM format containing the client's certificate as well as any number of CA certificates needed to establish the certificate's authenticity. This file is only required when client side authentication is necessary. If tls_enabled is False, this option is ignored. tls_enabled = False boolean value Global toggle for TLS usage when communicating with the caching servers. tls_keyfile = None string value Path to a single file containing the client's private key. Otherwise the private key will be taken from the file specified in tls_certfile. If tls_enabled is False, this option is ignored. 8.7.4. cors The following table outlines the options available under the [cors] group in the /etc/neutron/neutron.conf file. Table 8.26. 
cors Configuration option = Default value Type Description allow_credentials = True boolean value Indicate that the actual request can include user credentials allow_headers = ['X-Auth-Token', 'X-Identity-Status', 'X-Roles', 'X-Service-Catalog', 'X-User-Id', 'X-Tenant-Id', 'X-OpenStack-Request-ID'] list value Indicate which header field names may be used during the actual request. allow_methods = ['GET', 'PUT', 'POST', 'DELETE', 'PATCH'] list value Indicate which methods can be used during the actual request. allowed_origin = None list value Indicate whether this resource may be shared with the domain received in the requests "origin" header. Format: "<protocol>://<host>[:<port>]", no trailing slash. Example: https://horizon.example.com expose_headers = ['X-Auth-Token', 'X-Subject-Token', 'X-Service-Token', 'X-OpenStack-Request-ID', 'OpenStack-Volume-microversion'] list value Indicate which headers are safe to expose to the API. Defaults to HTTP Simple Headers. max_age = 3600 integer value Maximum cache age of CORS preflight requests. 8.7.5. database The following table outlines the options available under the [database] group in the /etc/neutron/neutron.conf file. Table 8.27. database Configuration option = Default value Type Description backend = sqlalchemy string value The back end to use for the database. connection = None string value The SQLAlchemy connection string to use to connect to the database. connection_debug = 0 integer value Verbosity of SQL debugging information: 0=None, 100=Everything. `connection_parameters = ` string value Optional URL parameters to append onto the connection URL at connect time; specify as param1=value1&param2=value2&... connection_recycle_time = 3600 integer value Connections which have been present in the connection pool longer than this number of seconds will be replaced with a new one the time they are checked out from the pool. connection_trace = False boolean value Add Python stack traces to SQL as comment strings. db_inc_retry_interval = True boolean value If True, increases the interval between retries of a database operation up to db_max_retry_interval. db_max_retries = 20 integer value Maximum retries in case of connection error or deadlock error before error is raised. Set to -1 to specify an infinite retry count. db_max_retry_interval = 10 integer value If db_inc_retry_interval is set, the maximum seconds between retries of a database operation. db_retry_interval = 1 integer value Seconds between retries of a database transaction. `engine = ` string value Database engine for which script will be generated when using offline migration. max_overflow = 50 integer value If set, use this value for max_overflow with SQLAlchemy. max_pool_size = 5 integer value Maximum number of SQL connections to keep open in a pool. Setting a value of 0 indicates no limit. max_retries = 10 integer value Maximum number of database connection retries during startup. Set to -1 to specify an infinite retry count. mysql_enable_ndb = False boolean value If True, transparently enables support for handling MySQL Cluster (NDB). Deprecated since: 12.1.0 *Reason:*Support for the MySQL NDB Cluster storage engine has been deprecated and will be removed in a future release. mysql_sql_mode = TRADITIONAL string value The SQL mode to be used for MySQL sessions. This option, including the default, overrides any server-set SQL mode. To use whatever SQL mode is set by the server configuration, set this to no value. 
Example: mysql_sql_mode= mysql_wsrep_sync_wait = None integer value For Galera only, configure wsrep_sync_wait causality checks on new connections. Default is None, meaning don't configure any setting. pool_timeout = None integer value If set, use this value for pool_timeout with SQLAlchemy. retry_interval = 10 integer value Interval between retries of opening a SQL connection. slave_connection = None string value The SQLAlchemy connection string to use to connect to the slave database. sqlite_synchronous = True boolean value If True, SQLite uses synchronous mode. use_db_reconnect = False boolean value Enable the experimental use of database reconnect on connection loss. 8.7.6. designate The following table outlines the options available under the [designate] group in the /etc/neutron/neutron.conf file. Table 8.28. designate Configuration option = Default value Type Description admin_auth_url = None string value Authorization URL for connecting to designate in admin context Deprecated since: Xena *Reason:*This option will be completely replaced by keystoneauth parameters. admin_password = None string value Password for connecting to designate in admin context Deprecated since: Xena *Reason:*This option will be completely replaced by keystoneauth parameters. admin_tenant_id = None string value Tenant id for connecting to designate in admin context Deprecated since: Xena *Reason:*This option will be completely replaced by keystoneauth parameters. admin_tenant_name = None string value Tenant name for connecting to designate in admin context Deprecated since: Xena *Reason:*This option will be completely replaced by keystoneauth parameters. admin_username = None string value Username for connecting to designate in admin context Deprecated since: Xena *Reason:*This option will be completely replaced by keystoneauth parameters. allow_reverse_dns_lookup = True boolean value Allow the creation of PTR records auth-url = None string value Authentication URL auth_type = None string value Authentication type to load cafile = None string value PEM encoded Certificate Authority to use when verifying HTTPs connections. certfile = None string value PEM encoded client certificate cert file collect-timing = False boolean value Collect per-API call timing information. default-domain-id = None string value Optional domain ID to use with v3 and v2 parameters. It will be used for both the user and project domain in v3 and ignored in v2 authentication. default-domain-name = None string value Optional domain name to use with v3 API and v2 parameters. It will be used for both the user and project domain in v3 and ignored in v2 authentication. domain-id = None string value Domain ID to scope to domain-name = None string value Domain name to scope to insecure = False boolean value Verify HTTPS connections. ipv4_ptr_zone_prefix_size = 24 integer value Number of bits in an ipv4 PTR zone that will be considered network prefix. It has to align to byte boundary. Minimum value is 8. Maximum value is 24. As a consequence, range of values is 8, 16 and 24. ipv6_ptr_zone_prefix_size = 120 integer value Number of bits in an ipv6 PTR zone that will be considered network prefix. It has to align to nibble boundary. Minimum value is 4. Maximum value is 124. As a consequence, range of values is 4, 8, 12, 16,...
, 124 keyfile = None string value PEM encoded client certificate key file password = None string value User's password project-domain-id = None string value Domain ID containing project project-domain-name = None string value Domain name containing project project-id = None string value Project ID to scope to project-name = None string value Project name to scope to `ptr_zone_email = ` string value The email address to be used when creating PTR zones. If not specified, the email address will be admin@<dns_domain> split-loggers = False boolean value Log requests to multiple loggers. system-scope = None string value Scope for system operations tenant-id = None string value Tenant ID tenant-name = None string value Tenant Name timeout = None integer value Timeout value for http requests trust-id = None string value ID of the trust to use as a trustee use url = None string value URL for connecting to designate user-domain-id = None string value User's domain id user-domain-name = None string value User's domain name user-id = None string value User id username = None string value Username 8.7.7. experimental The following table outlines the options available under the [experimental] group in the /etc/neutron/neutron.conf file. Table 8.29. experimental Configuration option = Default value Type Description linuxbridge = False boolean value Enable execution of the experimental Linuxbridge agent. 8.7.8. healthcheck The following table outlines the options available under the [healthcheck] group in the /etc/neutron/neutron.conf file. Table 8.30. healthcheck Configuration option = Default value Type Description backends = [] list value Additional backends that can perform health checks and report that information back as part of a request. detailed = False boolean value Show more detailed information as part of the response. Security note: Enabling this option may expose sensitive details about the service being monitored. Be sure to verify that it will not violate your security policies. disable_by_file_path = None string value Check the presence of a file to determine if an application is running on a port. Used by DisableByFileHealthcheck plugin. disable_by_file_paths = [] list value Check the presence of a file based on a port to determine if an application is running on a port. Expects a "port:path" list of strings. Used by DisableByFilesPortsHealthcheck plugin. path = /healthcheck string value The path to respond to healthcheck requests on. 8.7.9. ironic The following table outlines the options available under the [ironic] group in the /etc/neutron/neutron.conf file. Table 8.31. ironic Configuration option = Default value Type Description auth-url = None string value Authentication URL auth_type = None string value Authentication type to load cafile = None string value PEM encoded Certificate Authority to use when verifying HTTPs connections. certfile = None string value PEM encoded client certificate cert file collect-timing = False boolean value Collect per-API call timing information. default-domain-id = None string value Optional domain ID to use with v3 and v2 parameters. It will be used for both the user and project domain in v3 and ignored in v2 authentication. default-domain-name = None string value Optional domain name to use with v3 API and v2 parameters. It will be used for both the user and project domain in v3 and ignored in v2 authentication.
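The [ironic] options in this table follow the standard keystoneauth pattern. A minimal illustrative section is sketched below; the URL, credential and domain values are placeholders, not defaults, and enable_notifications is documented later in this table:

[ironic]
auth_type = password
auth_url = http://keystone.example.com:5000
username = neutron
password = <service password>
project_name = service
user_domain_name = Default
project_domain_name = Default
enable_notifications = true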
domain-id = None string value Domain ID to scope to domain-name = None string value Domain name to scope to enable_notifications = False boolean value Send notification events to ironic. (For example on relevant port status changes.) insecure = False boolean value Verify HTTPS connections. keyfile = None string value PEM encoded client certificate key file password = None string value User's password project-domain-id = None string value Domain ID containing project project-domain-name = None string value Domain name containing project project-id = None string value Project ID to scope to project-name = None string value Project name to scope to split-loggers = False boolean value Log requests to multiple loggers. system-scope = None string value Scope for system operations tenant-id = None string value Tenant ID tenant-name = None string value Tenant Name timeout = None integer value Timeout value for http requests trust-id = None string value ID of the trust to use as a trustee use user-domain-id = None string value User's domain id user-domain-name = None string value User's domain name user-id = None string value User id username = None string value Username 8.7.10. keystone_authtoken The following table outlines the options available under the [keystone_authtoken] group in the /etc/neutron/neutron.conf file. Table 8.32. keystone_authtoken Configuration option = Default value Type Description auth_section = None string value Config Section from which to load plugin specific options auth_type = None string value Authentication type to load auth_uri = None string value Complete "public" Identity API endpoint. This endpoint should not be an "admin" endpoint, as it should be accessible by all end users. Unauthenticated clients are redirected to this endpoint to authenticate. Although this endpoint should ideally be unversioned, client support in the wild varies. If you're using a versioned v2 endpoint here, then this should not be the same endpoint the service user utilizes for validating tokens, because normal end users may not be able to reach that endpoint. This option is deprecated in favor of www_authenticate_uri and will be removed in the S release. Deprecated since: Queens *Reason:*The auth_uri option is deprecated in favor of www_authenticate_uri and will be removed in the S release. auth_version = None string value API version of the Identity API endpoint. cache = None string value Request environment key where the Swift cache object is stored. When auth_token middleware is deployed with a Swift cache, use this option to have the middleware share a caching backend with swift. Otherwise, use the memcached_servers option instead. cafile = None string value A PEM encoded Certificate Authority to use when verifying HTTPs connections. Defaults to system CAs. certfile = None string value Required if identity server requires client certificate delay_auth_decision = False boolean value Do not handle authorization requests within the middleware, but delegate the authorization decision to downstream WSGI components. enforce_token_bind = permissive string value Used to control the use and type of token binding. Can be set to: "disabled" to not check token binding. "permissive" (default) to validate binding information if the bind type is of a form known to the server and ignore it if not. "strict" like "permissive" but if the bind type is unknown the token will be rejected. "required" any form of token binding is needed to be allowed. 
Finally the name of a binding method that must be present in tokens. http_connect_timeout = None integer value Request timeout value for communicating with Identity API server. http_request_max_retries = 3 integer value How many times are we trying to reconnect when communicating with Identity API Server. include_service_catalog = True boolean value (Optional) Indicate whether to set the X-Service-Catalog header. If False, middleware will not ask for service catalog on token validation and will not set the X-Service-Catalog header. insecure = False boolean value Verify HTTPS connections. interface = internal string value Interface to use for the Identity API endpoint. Valid values are "public", "internal" (default) or "admin". keyfile = None string value Required if identity server requires client certificate memcache_pool_conn_get_timeout = 10 integer value (Optional) Number of seconds that an operation will wait to get a memcached client connection from the pool. memcache_pool_dead_retry = 300 integer value (Optional) Number of seconds memcached server is considered dead before it is tried again. memcache_pool_maxsize = 10 integer value (Optional) Maximum total number of open connections to every memcached server. memcache_pool_socket_timeout = 3 integer value (Optional) Socket timeout in seconds for communicating with a memcached server. memcache_pool_unused_timeout = 60 integer value (Optional) Number of seconds a connection to memcached is held unused in the pool before it is closed. memcache_secret_key = None string value (Optional, mandatory if memcache_security_strategy is defined) This string is used for key derivation. memcache_security_strategy = None string value (Optional) If defined, indicate whether token data should be authenticated or authenticated and encrypted. If MAC, token data is authenticated (with HMAC) in the cache. If ENCRYPT, token data is encrypted and authenticated in the cache. If the value is not one of these options or empty, auth_token will raise an exception on initialization. memcache_use_advanced_pool = True boolean value (Optional) Use the advanced (eventlet safe) memcached client pool. memcached_servers = None list value Optionally specify a list of memcached server(s) to use for caching. If left undefined, tokens will instead be cached in-process. region_name = None string value The region in which the identity server can be found. service_token_roles = ['service'] list value A choice of roles that must be present in a service token. Service tokens are allowed to request that an expired token can be used and so this check should tightly control that only actual services should be sending this token. Roles here are applied as an ANY check so any role in this list must be present. For backwards compatibility reasons this currently only affects the allow_expired check. service_token_roles_required = False boolean value For backwards compatibility reasons we must let valid service tokens pass that don't pass the service_token_roles check as valid. Setting this true will become the default in a future release and should be enabled if possible. service_type = None string value The name or type of the service as it appears in the service catalog. This is used to validate tokens that have restricted access rules. token_cache_time = 300 integer value In order to prevent excessive effort spent validating tokens, the middleware caches previously-seen tokens for a configurable duration (in seconds). Set to -1 to disable caching completely. 
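Tying several of the [keystone_authtoken] options in this table together, a typical illustrative configuration might look like the following. Endpoints and credentials are placeholders; the username, password, project and domain settings come from the password auth plugin selected by auth_type and are not part of the table itself:

[keystone_authtoken]
www_authenticate_uri = http://keystone.example.com:5000
auth_url = http://keystone.example.com:5000
auth_type = password
username = neutron
password = <service password>
project_name = service
user_domain_name = Default
project_domain_name = Default
memcached_servers = 192.0.2.10:11211
service_token_roles_required = true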
www_authenticate_uri = None string value Complete "public" Identity API endpoint. This endpoint should not be an "admin" endpoint, as it should be accessible by all end users. Unauthenticated clients are redirected to this endpoint to authenticate. Although this endpoint should ideally be unversioned, client support in the wild varies. If you're using a versioned v2 endpoint here, then this should not be the same endpoint the service user utilizes for validating tokens, because normal end users may not be able to reach that endpoint. 8.7.11. nova The following table outlines the options available under the [nova] group in the /etc/neutron/neutron.conf file. Table 8.33. nova Configuration option = Default value Type Description auth-url = None string value Authentication URL auth_type = None string value Authentication type to load cafile = None string value PEM encoded Certificate Authority to use when verifying HTTPs connections. certfile = None string value PEM encoded client certificate cert file collect-timing = False boolean value Collect per-API call timing information. default-domain-id = None string value Optional domain ID to use with v3 and v2 parameters. It will be used for both the user and project domain in v3 and ignored in v2 authentication. default-domain-name = None string value Optional domain name to use with v3 API and v2 parameters. It will be used for both the user and project domain in v3 and ignored in v2 authentication. domain-id = None string value Domain ID to scope to domain-name = None string value Domain name to scope to endpoint_type = public string value Type of the nova endpoint to use. This endpoint will be looked up in the keystone catalog and should be one of public, internal or admin. insecure = False boolean value Verify HTTPS connections. keyfile = None string value PEM encoded client certificate key file password = None string value User's password project-domain-id = None string value Domain ID containing project project-domain-name = None string value Domain name containing project project-id = None string value Project ID to scope to project-name = None string value Project name to scope to region_name = None string value Name of nova region to use. Useful if keystone manages more than one region. split-loggers = False boolean value Log requests to multiple loggers. system-scope = None string value Scope for system operations tenant-id = None string value Tenant ID tenant-name = None string value Tenant Name timeout = None integer value Timeout value for http requests trust-id = None string value ID of the trust to use as a trustee use user-domain-id = None string value User's domain id user-domain-name = None string value User's domain name user-id = None string value User id username = None string value Username 8.7.12. oslo_concurrency The following table outlines the options available under the [oslo_concurrency] group in the /etc/neutron/neutron.conf file. Table 8.34. oslo_concurrency Configuration option = Default value Type Description disable_process_locking = False boolean value Enables or disables inter-process locks. lock_path = None string value Directory to use for lock files. For security, the specified directory should only be writable by the user running the processes that need locking. Defaults to environment variable OSLO_LOCK_PATH. If external locks are used, a lock path must be set. 8.7.13. 
oslo_messaging_amqp The following table outlines the options available under the [oslo_messaging_amqp] group in the /etc/neutron/neutron.conf file. Table 8.35. oslo_messaging_amqp Configuration option = Default value Type Description addressing_mode = dynamic string value Indicates the addressing mode used by the driver. Permitted values: legacy - use legacy non-routable addressing routable - use routable addresses dynamic - use legacy addresses if the message bus does not support routing otherwise use routable addressing anycast_address = anycast string value Appended to the address prefix when sending to a group of consumers. Used by the message bus to identify messages that should be delivered in a round-robin fashion across consumers. broadcast_prefix = broadcast string value address prefix used when broadcasting to all servers connection_retry_backoff = 2 integer value Increase the connection_retry_interval by this many seconds after each unsuccessful failover attempt. connection_retry_interval = 1 integer value Seconds to pause before attempting to re-connect. connection_retry_interval_max = 30 integer value Maximum limit for connection_retry_interval + connection_retry_backoff container_name = None string value Name for the AMQP container. must be globally unique. Defaults to a generated UUID default_notification_exchange = None string value Exchange name used in notification addresses. Exchange name resolution precedence: Target.exchange if set else default_notification_exchange if set else control_exchange if set else notify default_notify_timeout = 30 integer value The deadline for a sent notification message delivery. Only used when caller does not provide a timeout expiry. default_reply_retry = 0 integer value The maximum number of attempts to re-send a reply message which failed due to a recoverable error. default_reply_timeout = 30 integer value The deadline for an rpc reply message delivery. default_rpc_exchange = None string value Exchange name used in RPC addresses. Exchange name resolution precedence: Target.exchange if set else default_rpc_exchange if set else control_exchange if set else rpc default_send_timeout = 30 integer value The deadline for an rpc cast or call message delivery. Only used when caller does not provide a timeout expiry. default_sender_link_timeout = 600 integer value The duration to schedule a purge of idle sender links. Detach link after expiry. group_request_prefix = unicast string value address prefix when sending to any server in group idle_timeout = 0 integer value Timeout for inactive connections (in seconds) link_retry_delay = 10 integer value Time to pause between re-connecting an AMQP 1.0 link that failed due to a recoverable error. multicast_address = multicast string value Appended to the address prefix when sending a fanout message. Used by the message bus to identify fanout messages. notify_address_prefix = openstack.org/om/notify string value Address prefix for all generated Notification addresses notify_server_credit = 100 integer value Window size for incoming Notification messages pre_settled = ['rpc-cast', 'rpc-reply'] multi valued Send messages of this type pre-settled. Pre-settled messages will not receive acknowledgement from the peer. Note well: pre-settled messages may be silently discarded if the delivery fails. 
Permitted values: rpc-call - send RPC Calls pre-settled rpc-reply - send RPC Replies pre-settled rpc-cast - Send RPC Casts pre-settled notify - Send Notifications pre-settled pseudo_vhost = True boolean value Enable virtual host support for those message buses that do not natively support virtual hosting (such as qpidd). When set to true the virtual host name will be added to all message bus addresses, effectively creating a private subnet per virtual host. Set to False if the message bus supports virtual hosting using the hostname field in the AMQP 1.0 Open performative as the name of the virtual host. reply_link_credit = 200 integer value Window size for incoming RPC Reply messages. rpc_address_prefix = openstack.org/om/rpc string value Address prefix for all generated RPC addresses rpc_server_credit = 100 integer value Window size for incoming RPC Request messages `sasl_config_dir = ` string value Path to directory that contains the SASL configuration `sasl_config_name = ` string value Name of configuration file (without .conf suffix) `sasl_default_realm = ` string value SASL realm to use if no realm present in username `sasl_mechanisms = ` string value Space separated list of acceptable SASL mechanisms server_request_prefix = exclusive string value address prefix used when sending to a specific server ssl = False boolean value Attempt to connect via SSL. If no other ssl-related parameters are given, it will use the system's CA-bundle to verify the server's certificate. `ssl_ca_file = ` string value CA certificate PEM file used to verify the server's certificate `ssl_cert_file = ` string value Self-identifying certificate PEM file for client authentication `ssl_key_file = ` string value Private key PEM file used to sign ssl_cert_file certificate (optional) ssl_key_password = None string value Password for decrypting ssl_key_file (if encrypted) ssl_verify_vhost = False boolean value By default SSL checks that the name in the server's certificate matches the hostname in the transport_url. In some configurations it may be preferable to use the virtual hostname instead, for example if the server uses the Server Name Indication TLS extension (rfc6066) to provide a certificate per virtual host. Set ssl_verify_vhost to True if the server's SSL certificate uses the virtual host name instead of the DNS name. trace = False boolean value Debug: dump AMQP frames to stdout unicast_address = unicast string value Appended to the address prefix when sending to a particular RPC/Notification server. Used by the message bus to identify messages sent to a single destination. 8.7.14. oslo_messaging_kafka The following table outlines the options available under the [oslo_messaging_kafka] group in the /etc/neutron/neutron.conf file. Table 8.36. oslo_messaging_kafka Configuration option = Default value Type Description compression_codec = none string value The compression codec for all data generated by the producer. If not set, compression will not be used. Note that the allowed values of this depend on the kafka version conn_pool_min_size = 2 integer value The pool size limit for connections expiration policy conn_pool_ttl = 1200 integer value The time-to-live in sec of idle connections in the pool consumer_group = oslo_messaging_consumer string value Group id for Kafka consumer. 
Consumers in one group will coordinate message consumption. enable_auto_commit = False boolean value Enable asynchronous consumer commits kafka_consumer_timeout = 1.0 floating point value Default timeout(s) for Kafka consumers kafka_max_fetch_bytes = 1048576 integer value Max fetch bytes of Kafka consumer max_poll_records = 500 integer value The maximum number of records returned in a poll call pool_size = 10 integer value Pool Size for Kafka Consumers producer_batch_size = 16384 integer value Size of batch for the producer async send producer_batch_timeout = 0.0 floating point value Upper bound on the delay for KafkaProducer batching in seconds sasl_mechanism = PLAIN string value Mechanism when security protocol is SASL security_protocol = PLAINTEXT string value Protocol used to communicate with brokers `ssl_cafile = ` string value CA certificate PEM file used to verify the server certificate `ssl_client_cert_file = ` string value Client certificate PEM file used for authentication. `ssl_client_key_file = ` string value Client key PEM file used for authentication. `ssl_client_key_password = ` string value Client key password file used for authentication. 8.7.15. oslo_messaging_notifications The following table outlines the options available under the [oslo_messaging_notifications] group in the /etc/neutron/neutron.conf file. Table 8.37. oslo_messaging_notifications Configuration option = Default value Type Description driver = [] multi valued The driver(s) to handle sending notifications. Possible values are messaging, messagingv2, routing, log, test, noop retry = -1 integer value The maximum number of attempts to re-send a notification message which failed to be delivered due to a recoverable error. 0 - No retry, -1 - indefinite topics = ['notifications'] list value AMQP topic used for OpenStack notifications. transport_url = None string value A URL representing the messaging driver to use for notifications. If not set, we fall back to the same configuration used for RPC. 8.7.16. oslo_messaging_rabbit The following table outlines the options available under the [oslo_messaging_rabbit] group in the /etc/neutron/neutron.conf file. Table 8.38. oslo_messaging_rabbit Configuration option = Default value Type Description amqp_auto_delete = False boolean value Auto-delete queues in AMQP. amqp_durable_queues = False boolean value Use durable queues in AMQP. If rabbit_quorum_queue is enabled, queues will be durable and this value will be ignored. direct_mandatory_flag = True boolean value (DEPRECATED) Enable/Disable the RabbitMQ mandatory flag for direct send. The direct send is used as reply, so the MessageUndeliverable exception is raised in case the client queue does not exist. The MessageUndeliverable exception will be used to loop for a timeout to give the sender a chance to recover. This flag is deprecated and it will not be possible to deactivate this functionality anymore enable_cancel_on_failover = False boolean value Enable x-cancel-on-ha-failover flag so that rabbitmq server will cancel and notify consumers when the queue is down heartbeat_in_pthread = False boolean value Run the health check heartbeat thread through a native python thread by default. If this option is equal to False then the health check heartbeat will inherit the execution model from the parent process. For example if the parent process has monkey patched the stdlib by using eventlet/greenlet then the heartbeat will be run through a green thread. This option should be set to True only for the wsgi services.
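For the [oslo_messaging_notifications] options listed above, a small illustrative configuration could be the following; the transport URL is a placeholder, not a default:

[oslo_messaging_notifications]
driver = messagingv2
topics = notifications
transport_url = rabbit://neutron:<password>@rabbit.example.com:5672/

If transport_url is left unset, notifications reuse the RPC messaging configuration, as noted in the table above.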
heartbeat_rate = 2 integer value How many times during the heartbeat_timeout_threshold the heartbeat is checked. heartbeat_timeout_threshold = 60 integer value Number of seconds after which the Rabbit broker is considered down if heartbeat's keep-alive fails (0 disables heartbeat). kombu_compression = None string value EXPERIMENTAL: Possible values are: gzip, bz2. If not set compression will not be used. This option may not be available in future versions. kombu_failover_strategy = round-robin string value Determines how the RabbitMQ node is chosen in case the one we are currently connected to becomes unavailable. Takes effect only if more than one RabbitMQ node is provided in config. kombu_missing_consumer_retry_timeout = 60 integer value How long to wait for a missing client before abandoning the attempt to send it its replies. This value should not be longer than rpc_response_timeout. kombu_reconnect_delay = 1.0 floating point value How long to wait (in seconds) before reconnecting in response to an AMQP consumer cancel notification. rabbit_ha_queues = False boolean value Try to use HA queues in RabbitMQ (x-ha-policy: all). If you change this option, you must wipe the RabbitMQ database. In RabbitMQ 3.0, queue mirroring is no longer controlled by the x-ha-policy argument when declaring a queue. If you just want to make sure that all queues (except those with auto-generated names) are mirrored across all nodes, run: "rabbitmqctl set_policy HA ^(?!amq\.).* {"ha-mode": "all"} " rabbit_interval_max = 30 integer value Maximum interval of RabbitMQ connection retries. Default is 30 seconds. rabbit_login_method = AMQPLAIN string value The RabbitMQ login method. rabbit_qos_prefetch_count = 0 integer value Specifies the number of messages to prefetch. Setting to zero allows unlimited messages. rabbit_quorum_delivery_limit = 0 integer value Each time a message is redelivered to a consumer, a counter is incremented. Once the redelivery count exceeds the delivery limit the message gets dropped or dead-lettered (if a DLX exchange has been configured). Used only when rabbit_quorum_queue is enabled. The default of 0 means no limit is set. rabbit_quorum_max_memory_bytes = 0 integer value By default all messages are maintained in memory; if a quorum queue grows in length it can put memory pressure on a cluster. This option can limit the number of memory bytes used by the quorum queue. Used only when rabbit_quorum_queue is enabled. The default of 0 means no limit is set. rabbit_quorum_max_memory_length = 0 integer value By default all messages are maintained in memory; if a quorum queue grows in length it can put memory pressure on a cluster. This option can limit the number of messages in the quorum queue. Used only when rabbit_quorum_queue is enabled. The default of 0 means no limit is set. rabbit_quorum_queue = False boolean value Use quorum queues in RabbitMQ (x-queue-type: quorum). The quorum queue is a modern queue type for RabbitMQ implementing a durable, replicated FIFO queue based on the Raft consensus algorithm. It is available as of RabbitMQ 3.8.0. If set, this option conflicts with the HA queues ( rabbit_ha_queues ), also known as mirrored queues; in other words, the HA queues should be disabled. Quorum queues are durable by default, so the amqp_durable_queues option is ignored when this option is enabled. rabbit_retry_backoff = 2 integer value How long to back off between retries when connecting to RabbitMQ. rabbit_retry_interval = 1 integer value How frequently to retry connecting with RabbitMQ.
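To illustrate how the quorum-queue options above fit together, one possible non-default [oslo_messaging_rabbit] configuration is sketched below; the limits shown are arbitrary examples, not recommendations:

[oslo_messaging_rabbit]
rabbit_quorum_queue = true
rabbit_quorum_delivery_limit = 5
rabbit_quorum_max_memory_length = 100000
rabbit_ha_queues = false

Because quorum queues are durable by default, amqp_durable_queues does not need to be set in this case, and rabbit_ha_queues must remain disabled since mirrored and quorum queues conflict.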
rabbit_transient_queues_ttl = 1800 integer value Positive integer representing duration in seconds for queue TTL (x-expires). Queues which are unused for the duration of the TTL are automatically deleted. The parameter affects only reply and fanout queues. ssl = False boolean value Connect over SSL. `ssl_ca_file = ` string value SSL certification authority file (valid only if SSL enabled). `ssl_cert_file = ` string value SSL cert file (valid only if SSL enabled). ssl_enforce_fips_mode = False boolean value Global toggle for enforcing the OpenSSL FIPS mode. This feature requires Python support. This is available in Python 3.9 in all environments and may have been backported to older Python versions on select environments. If the Python executable used does not support OpenSSL FIPS mode, an exception will be raised. `ssl_key_file = ` string value SSL key file (valid only if SSL enabled). `ssl_version = ` string value SSL version to use (valid only if SSL enabled). Valid values are TLSv1 and SSLv23. SSLv2, SSLv3, TLSv1_1, and TLSv1_2 may be available on some distributions. 8.7.17. oslo_middleware The following table outlines the options available under the [oslo_middleware] group in the /etc/neutron/neutron.conf file. Table 8.39. oslo_middleware Configuration option = Default value Type Description enable_proxy_headers_parsing = False boolean value Whether the application is behind a proxy or not. This determines if the middleware should parse the headers or not. 8.7.18. oslo_policy The following table outlines the options available under the [oslo_policy] group in the /etc/neutron/neutron.conf file. Table 8.40. oslo_policy Configuration option = Default value Type Description enforce_new_defaults = False boolean value This option controls whether or not to use old deprecated defaults when evaluating policies. If True , the old deprecated defaults are not going to be evaluated. This means if any existing token is allowed for old defaults but is disallowed for new defaults, it will be disallowed. It is encouraged to enable this flag along with the enforce_scope flag so that you can get the benefits of new defaults and scope_type together. If False , the deprecated policy check string is logically OR'd with the new policy check string, allowing for a graceful upgrade experience between releases with new policies, which is the default behavior. enforce_scope = False boolean value This option controls whether or not to enforce scope when evaluating policies. If True , the scope of the token used in the request is compared to the scope_types of the policy being enforced. If the scopes do not match, an InvalidScope exception will be raised. If False , a message will be logged informing operators that policies are being invoked with mismatching scope. policy_default_rule = default string value Default rule. Enforced when a requested rule is not found. policy_dirs = ['policy.d'] multi valued Directories where policy configuration files are stored. They can be relative to any directory in the search path defined by the config_dir option, or absolute paths. The file defined by policy_file must exist for these directories to be searched. Missing or empty directories are ignored. policy_file = policy.yaml string value The relative or absolute path of a file that maps roles to permissions for a given service. Relative paths must be specified in relation to the configuration file setting this option. 
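As an illustration of the [oslo_policy] options above, the following snippet uses the documented default policy file name together with deliberately non-default enforcement flags:

[oslo_policy]
policy_file = policy.yaml
enforce_new_defaults = true
enforce_scope = true

Enabling both flags together gives the new default policies and scope checking at the same time, as recommended in the enforce_new_defaults description.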
remote_content_type = application/x-www-form-urlencoded string value Content Type to send and receive data for REST based policy check remote_ssl_ca_crt_file = None string value Absolute path to ca cert file for REST based policy check remote_ssl_client_crt_file = None string value Absolute path to client cert for REST based policy check remote_ssl_client_key_file = None string value Absolute path client key file REST based policy check remote_ssl_verify_server_crt = False boolean value server identity verification for REST based policy check 8.7.19. oslo_reports The following table outlines the options available under the [oslo_reports] group in the /etc/neutron/neutron.conf file. Table 8.41. oslo_reports Configuration option = Default value Type Description file_event_handler = None string value The path to a file to watch for changes to trigger the reports, instead of signals. Setting this option disables the signal trigger for the reports. If application is running as a WSGI application it is recommended to use this instead of signals. file_event_handler_interval = 1 integer value How many seconds to wait between polls when file_event_handler is set log_dir = None string value Path to a log directory where to create a file 8.7.20. placement The following table outlines the options available under the [placement] group in the /etc/neutron/neutron.conf file. Table 8.42. placement Configuration option = Default value Type Description auth-url = None string value Authentication URL auth_type = None string value Authentication type to load cafile = None string value PEM encoded Certificate Authority to use when verifying HTTPs connections. certfile = None string value PEM encoded client certificate cert file collect-timing = False boolean value Collect per-API call timing information. default-domain-id = None string value Optional domain ID to use with v3 and v2 parameters. It will be used for both the user and project domain in v3 and ignored in v2 authentication. default-domain-name = None string value Optional domain name to use with v3 API and v2 parameters. It will be used for both the user and project domain in v3 and ignored in v2 authentication. domain-id = None string value Domain ID to scope to domain-name = None string value Domain name to scope to endpoint_type = public string value Type of the placement endpoint to use. This endpoint will be looked up in the keystone catalog and should be one of public, internal or admin. insecure = False boolean value Verify HTTPS connections. keyfile = None string value PEM encoded client certificate key file password = None string value User's password project-domain-id = None string value Domain ID containing project project-domain-name = None string value Domain name containing project project-id = None string value Project ID to scope to project-name = None string value Project name to scope to region_name = None string value Name of placement region to use. Useful if keystone manages more than one region. split-loggers = False boolean value Log requests to multiple loggers. system-scope = None string value Scope for system operations tenant-id = None string value Tenant ID tenant-name = None string value Tenant Name timeout = None integer value Timeout value for http requests trust-id = None string value ID of the trust to use as a trustee use user-domain-id = None string value User's domain id user-domain-name = None string value User's domain name user-id = None string value User id username = None string value Username 8.7.21. 
privsep The following table outlines the options available under the [privsep] group in the /etc/neutron/neutron.conf file. Table 8.43. privsep Configuration option = Default value Type Description capabilities = [] list value List of Linux capabilities retained by the privsep daemon. group = None string value Group that the privsep daemon should run as. helper_command = None string value Command to invoke to start the privsep daemon if not using the "fork" method. If not specified, a default is generated using "sudo privsep-helper" and arguments designed to recreate the current configuration. This command must accept suitable --privsep_context and --privsep_sock_path arguments. logger_name = oslo_privsep.daemon string value Logger name to use for this privsep context. By default all contexts log with oslo_privsep.daemon. thread_pool_size = <based on operating system> integer value The number of threads available for privsep to concurrently run processes. Defaults to the number of CPU cores in the system. user = None string value User that the privsep daemon should run as. 8.7.22. profiler The following table outlines the options available under the [profiler] group in the /etc/neutron/neutron.conf file. Table 8.44. profiler Configuration option = Default value Type Description connection_string = messaging:// string value Connection string for a notifier backend. Default value is messaging:// which sets the notifier to oslo_messaging. Examples of possible values: messaging:// - use oslo_messaging driver for sending spans. redis://127.0.0.1:6379 - use redis driver for sending spans. mongodb://127.0.0.1:27017 - use mongodb driver for sending spans. elasticsearch://127.0.0.1:9200 - use elasticsearch driver for sending spans. jaeger://127.0.0.1:6831 - use jaeger tracing as driver for sending spans. enabled = False boolean value Enable the profiling for all services on this node. Default value is False (fully disable the profiling feature). Possible values: True: Enables the feature False: Disables the feature. The profiling cannot be started via this project operations. If the profiling is triggered by another project, this project part will be empty. es_doc_type = notification string value Document type for notification indexing in elasticsearch. es_scroll_size = 10000 integer value Elasticsearch splits large requests in batches. This parameter defines maximum size of each batch (for example: es_scroll_size=10000). es_scroll_time = 2m string value This parameter is a time value parameter (for example: es_scroll_time=2m), indicating for how long the nodes that participate in the search will maintain relevant resources in order to continue and support it. filter_error_trace = False boolean value Enable filter traces that contain error/exception to a separated place. Default value is set to False. Possible values: True: Enable filter traces that contain error/exception. False: Disable the filter. hmac_keys = SECRET_KEY string value Secret key(s) to use for encrypting context data for performance profiling. This string value should have the following format: <key1>[,<key2>,... <keyn>], where each key is some random string. A user who triggers the profiling via the REST API has to set one of these keys in the headers of the REST API call to include profiling results of this node for this particular project. Both "enabled" flag and "hmac_keys" config options should be set to enable profiling. 
Also, to generate correct profiling information across all services at least one key needs to be consistent between OpenStack projects. This ensures it can be used from the client side to generate the trace, containing information from all possible resources. sentinel_service_name = mymaster string value Redis sentinel uses a service name to identify a master redis service. This parameter defines the name (for example: sentinel_service_name=mymaster ). socket_timeout = 0.1 floating point value Redis sentinel provides a timeout option on the connections. This parameter defines that timeout (for example: socket_timeout=0.1). trace_sqlalchemy = False boolean value Enable SQL requests profiling in services. Default value is False (SQL requests won't be traced). Possible values: True: Enables SQL requests profiling. Each SQL query will be part of the trace and can then be analyzed by how much time was spent for that. False: Disables SQL requests profiling. The spent time is only shown on a higher level of operations. Single SQL queries cannot be analyzed this way. 8.7.23. quotas The following table outlines the options available under the [quotas] group in the /etc/neutron/neutron.conf file. Table 8.45. quotas Configuration option = Default value Type Description default_quota = -1 integer value Default number of resources allowed per tenant. A negative value means unlimited. quota_driver = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver string value Default driver to use for quota checks. quota_floatingip = 50 integer value Number of floating IPs allowed per tenant. A negative value means unlimited. quota_network = 100 integer value Number of networks allowed per tenant. A negative value means unlimited. quota_port = 500 integer value Number of ports allowed per tenant. A negative value means unlimited. quota_router = 10 integer value Number of routers allowed per tenant. A negative value means unlimited. quota_security_group = 10 integer value Number of security groups allowed per tenant. A negative value means unlimited. quota_security_group_rule = 100 integer value Number of security rules allowed per tenant. A negative value means unlimited. quota_subnet = 100 integer value Number of subnets allowed per tenant. A negative value means unlimited. track_quota_usage = True boolean value Keep track in the database of current resource quota usage. Plugins which do not leverage the neutron database should set this flag to False. 8.8. openvswitch_agent.ini This section contains options for the /etc/neutron/plugins/ml2/openvswitch_agent.ini file. 8.8.1. DEFAULT The following table outlines the options available under the [DEFAULT] group in the /etc/neutron/plugins/ml2/openvswitch_agent.ini file. .
Configuration option = Default value Type Description debug = False boolean value If set to true, the logging level will be set to DEBUG instead of the default INFO level. default_log_levels = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO'] list value List of package logging levels in logger=LEVEL pairs. This option is ignored if log_config_append is set. fatal_deprecations = False boolean value Enables or disables fatal status of deprecations. `instance_format = [instance: %(uuid)s] ` string value The format for an instance that is passed with the log message. `instance_uuid_format = [instance: %(uuid)s] ` string value The format for an instance UUID that is passed with the log message. log-config-append = None string value The name of a logging configuration file. This file is appended to any existing logging configuration files. For details about logging configuration files, see the Python logging module documentation. Note that when logging configuration files are used then all logging configuration is set in the configuration file and other logging configuration options are ignored (for example, log-date-format). log-date-format = %Y-%m-%d %H:%M:%S string value Defines the format string for %%(asctime)s in log records. Default: %(default)s . This option is ignored if log_config_append is set. log-dir = None string value (Optional) The base directory used for relative log_file paths. This option is ignored if log_config_append is set. log-file = None string value (Optional) Name of log file to send logging output to. If no default is set, logging will go to stderr as defined by use_stderr. This option is ignored if log_config_append is set. log_rotate_interval = 1 integer value The amount of time before the log files are rotated. This option is ignored unless log_rotation_type is set to "interval". log_rotate_interval_type = days string value Rotation interval type. The time of the last file change (or the time when the service was started) is used when scheduling the rotation. log_rotation_type = none string value Log rotation type. logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s string value Format string to use for log messages with context. Used by oslo_log.formatters.ContextFormatter logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d string value Additional data to append to log message when logging level for the message is DEBUG. Used by oslo_log.formatters.ContextFormatter logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s string value Format string to use for log messages when context is undefined. Used by oslo_log.formatters.ContextFormatter logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s string value Prefix each line of exception output with this format. 
Used by oslo_log.formatters.ContextFormatter logging_user_identity_format = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s string value Defines the format string for %(user_identity)s that is used in logging_context_format_string. Used by oslo_log.formatters.ContextFormatter max_logfile_count = 30 integer value Maximum number of rotated log files. max_logfile_size_mb = 200 integer value Log file maximum size in MB. This option is ignored if "log_rotation_type" is not set to "size". publish_errors = False boolean value Enables or disables publication of error events. rate_limit_burst = 0 integer value Maximum number of logged messages per rate_limit_interval. rate_limit_except_level = CRITICAL string value Log level name used by rate limiting: CRITICAL, ERROR, INFO, WARNING, DEBUG or empty string. Logs with level greater or equal to rate_limit_except_level are not filtered. An empty string means that all levels are filtered. rate_limit_interval = 0 integer value Interval, number of seconds, of log rate limiting. rpc_response_max_timeout = 600 integer value Maximum seconds to wait for a response from an RPC call. syslog-log-facility = LOG_USER string value Syslog facility to receive log lines. This option is ignored if log_config_append is set. use-journal = False boolean value Enable journald for logging. If running in a systemd environment you may wish to enable journal support. Doing so will use the journal native protocol which includes structured metadata in addition to log messages.This option is ignored if log_config_append is set. use-json = False boolean value Use JSON formatting for logging. This option is ignored if log_config_append is set. use-syslog = False boolean value Use syslog for logging. Existing syslog format is DEPRECATED and will be changed later to honor RFC5424. This option is ignored if log_config_append is set. use_eventlog = False boolean value Log output to Windows Event Log. use_stderr = False boolean value Log output to standard error. This option is ignored if log_config_append is set. watch-log-file = False boolean value Uses logging handler designed to watch file system. When log file is moved or removed this handler will open a new log file with specified path instantaneously. It makes sense only if log_file option is specified and Linux platform is used. This option is ignored if log_config_append is set. 8.8.2. agent The following table outlines the options available under the [agent] group in the /etc/neutron/plugins/ml2/openvswitch_agent.ini file. Table 8.47. agent Configuration option = Default value Type Description arp_responder = False boolean value Enable local ARP responder if it is supported. Requires OVS 2.1 and ML2 l2population driver. Allows the switch (when supporting an overlay) to respond to an ARP request locally without performing a costly ARP broadcast into the overlay. NOTE: If enable_distributed_routing is set to True then arp_responder will automatically be set to True in the agent, regardless of the setting in the config file. baremetal_smartnic = False boolean value Enable the agent to process Smart NIC ports. dont_fragment = True boolean value Set or un-set the don't fragment (DF) bit on outgoing IP packet carrying GRE/VXLAN tunnel. drop_flows_on_start = False boolean value Reset flow table on start. Setting this to True will cause brief traffic interruption. enable_distributed_routing = False boolean value Make the l2 agent run in DVR mode. 
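For the tunnelling-related [agent] options in this table, a typical illustrative configuration for a VXLAN deployment with L2 population might be the following; it only changes options documented in this group, and non-default values such as l2_population are only meaningful when the matching ML2 mechanism drivers are enabled on the server side:

[agent]
tunnel_types = vxlan
vxlan_udp_port = 4789
l2_population = true
arp_responder = true
enable_distributed_routing = false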
explicitly_egress_direct = False boolean value When set to True, the accepted egress unicast traffic will not use action NORMAL. The accepted egress packets will instead be handled by direct output flows for unicast traffic in the final egress tables. This will also change the pipeline for ingress traffic to ports without security; the final output action will be hit in table 94. extensions = [] list value Extensions list to use l2_population = False boolean value Use ML2 l2population mechanism driver to learn remote MAC addresses and IPs and improve tunnel scalability. minimize_polling = True boolean value Minimize polling by monitoring ovsdb for interface changes. ovsdb_monitor_respawn_interval = 30 integer value The number of seconds to wait before respawning the ovsdb monitor after losing communication with it. tunnel_csum = False boolean value Set or un-set the tunnel header checksum on outgoing IP packet carrying GRE/VXLAN tunnel. tunnel_types = [] list value Network types supported by the agent (gre, vxlan and/or geneve). veth_mtu = 9000 integer value MTU size of veth interfaces. Deprecated since: Yoga *Reason:*This parameter has had no effect since the Wallaby release. vxlan_udp_port = 4789 port value The UDP port to use for VXLAN tunnels. 8.8.3. dhcp The following table outlines the options available under the [dhcp] group in the /etc/neutron/plugins/ml2/openvswitch_agent.ini file. Table 8.48. dhcp Configuration option = Default value Type Description dhcp_rebinding_time = 0 integer value DHCP rebinding time T2 (in seconds). If set to 0, it will default to 7/8 of the lease time. dhcp_renewal_time = 0 integer value DHCP renewal time T1 (in seconds). If set to 0, it will default to half of the lease time. enable_ipv6 = True boolean value When set to True, the OVS agent DHCP extension will add related flows for DHCPv6 packets. 8.8.4. network_log The following table outlines the options available under the [network_log] group in the /etc/neutron/plugins/ml2/openvswitch_agent.ini file. Table 8.49. network_log Configuration option = Default value Type Description burst_limit = 25 integer value Maximum number of packets per rate_limit. local_output_log_base = None string value Output logfile path on agent side, default syslog file. rate_limit = 100 integer value Maximum packets logged per second. 8.8.5. ovs The following table outlines the options available under the [ovs] group in the /etc/neutron/plugins/ml2/openvswitch_agent.ini file. Table 8.50. ovs Configuration option = Default value Type Description bridge_mappings = [] list value Comma-separated list of <physical_network>:<bridge> tuples mapping physical network names to the agent's node-specific Open vSwitch bridge names to be used for flat and VLAN networks. The length of bridge names should be no more than 11. Each bridge must exist, and should have a physical network interface configured as a port. All physical networks configured on the server should have mappings to appropriate bridges on each agent. Note: If you remove a bridge from this mapping, make sure to disconnect it from the integration bridge as it won't be managed by the agent anymore. datapath_type = system string value OVS datapath to use. system is the default value and corresponds to the kernel datapath. To enable the userspace datapath set this value to netdev . int_peer_patch_port = patch-tun string value Peer patch port in integration bridge for tunnel bridge. integration_bridge = br-int string value Integration bridge to use.
Do not change this parameter unless you have a good reason to. This is the name of the OVS integration bridge. There is one per hypervisor. The integration bridge acts as a virtual patch bay. All VM VIFs are attached to this bridge and then patched according to their network connectivity. local_ip = None IP address value IP address of local overlay (tunnel) network endpoint. Use either an IPv4 or IPv6 address that resides on one of the host network interfaces. The IP version of this value must match the value of the overlay_ip_version option in the ML2 plug-in configuration file on the neutron server node(s). of_connect_timeout = 300 integer value Timeout in seconds to wait for the local switch to connect to the controller. of_inactivity_probe = 10 integer value The inactivity_probe interval in seconds for the local switch connection to the controller. A value of 0 disables inactivity probes. of_listen_address = 127.0.0.1 IP address value Address to listen on for OpenFlow connections. of_listen_port = 6633 port value Port to listen on for OpenFlow connections. of_request_timeout = 300 integer value Timeout in seconds to wait for a single OpenFlow request. openflow_processed_per_port = False boolean value If enabled, all OpenFlow rules associated to a port are processed at once, in one single transaction. That avoids possible inconsistencies during OVS agent restart and port updates. If disabled, the flows will be processed in batches of _constants.AGENT_RES_PROCESSING_STEP number of OpenFlow rules. ovsdb_connection = tcp:127.0.0.1:6640 string value The connection string for the OVSDB backend. Will be used for all ovsdb commands and by ovsdb-client when monitoring. ovsdb_debug = False boolean value Enable OVSDB debug logs. resource_provider_bandwidths = [] list value Comma-separated list of <bridge>:<egress_bw>:<ingress_bw> tuples, showing the available bandwidth for the given bridge in the given direction. The direction is meant from the VM perspective. Bandwidth is measured in kilobits per second (kbps). The bridge must appear in bridge_mappings as the value. But not all bridges in bridge_mappings must be listed here. For a bridge not listed here, no resource provider is created in placement and no inventories are reported against it. An omitted direction means we do not report an inventory for the corresponding class. resource_provider_default_hypervisor = None string value The default hypervisor name used to locate the parent of the resource provider. If this option is not set, the canonical name is used. resource_provider_hypervisors = {} dict value Mapping of bridges to hypervisors: <bridge>:<hypervisor>,... The hypervisor name is used to locate the parent of the resource provider tree. Only needs to be set in the rare case when the hypervisor name is different from the resource_provider_default_hypervisor config option value as known by the nova-compute managing that hypervisor. resource_provider_inventory_defaults = {'allocation_ratio': 1.0, 'min_unit': 1, 'reserved': 0, 'step_size': 1} dict value Key:value pairs to specify defaults used while reporting resource provider inventories. Possible keys with their types: allocation_ratio:float, max_unit:int, min_unit:int, reserved:int, step_size:int. See also: https://docs.openstack.org/api-ref/placement/#update-resource-provider-inventories resource_provider_packet_processing_inventory_defaults = {'allocation_ratio': 1.0, 'min_unit': 1, 'reserved': 0, 'step_size': 1} dict value Key:value pairs to specify defaults used while reporting packet rate inventories.
Possible keys with their types: allocation_ratio:float, max_unit:int, min_unit:int, reserved:int, step_size:int, See also: https://docs.openstack.org/api-ref/placement/#update-resource-provider-inventories resource_provider_packet_processing_with_direction = [] list value Similar to the resource_provider_packet_processing_without_direction but used in case the OVS backend has hardware offload capabilities. In this case the format is <hypervisor>:<egress_pkt_rate>:<ingress_pkt_rate> which allows defining packet processing capacity per traffic direction. The direction is meant from the VM perspective. Note that the resource_provider_packet_processing_without_direction and the resource_provider_packet_processing_with_direction are mutually exclusive options. resource_provider_packet_processing_without_direction = [] list value Comma-separated list of <hypervisor>:<packet_rate> tuples, defining the minimum packet rate the OVS backend can guarantee in kilo (1000) packet per second. The hypervisor name is used to locate the parent of the resource provider tree. Only needs to be set in the rare case when the hypervisor name is different from the DEFAULT.host config option value as known by the nova-compute managing that hypervisor or if multiple hypervisors are served by the same OVS backend. The default is :0 which means no packet processing capacity is guaranteed on the hypervisor named according to DEFAULT.host. ssl_ca_cert_file = None string value The Certificate Authority (CA) certificate to use when interacting with OVSDB. Required when using an "ssl:" prefixed ovsdb_connection ssl_cert_file = None string value The SSL certificate file to use when interacting with OVSDB. Required when using an "ssl:" prefixed ovsdb_connection ssl_key_file = None string value The SSL private key file to use when interacting with OVSDB. Required when using an "ssl:" prefixed ovsdb_connection tun_peer_patch_port = patch-int string value Peer patch port in tunnel bridge for integration bridge. tunnel_bridge = br-tun string value Tunnel bridge to use. vhostuser_socket_dir = /var/run/openvswitch string value OVS vhost-user socket directory. 8.8.6. securitygroup The following table outlines the options available under the [securitygroup] group in the /etc/neutron/plugins/ml2/openvswitch_agent.ini file. Table 8.51. securitygroup Configuration option = Default value Type Description enable_ipset = True boolean value Use ipset to speed-up the iptables based security groups. Enabling ipset support requires that ipset is installed on L2 agent node. enable_security_group = True boolean value Controls whether the neutron security group API is enabled in the server. It should be false when using no security groups or using the nova security group API. firewall_driver = None string value Driver for security groups firewall in the L2 agent permitted_ethertypes = [] list value Comma-separated list of ethertypes to be permitted, in hexadecimal (starting with "0x"). For example, "0x4008" to permit InfiniBand. 8.9. sriov_agent.ini This section contains options for the /etc/neutron/plugins/ml2/sriov_agent.ini file. 8.9.1. DEFAULT The following table outlines the options available under the [DEFAULT] group in the /etc/neutron/plugins/ml2/sriov_agent.ini file. . Configuration option = Default value Type Description debug = False boolean value If set to true, the logging level will be set to DEBUG instead of the default INFO level. 
default_log_levels = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO'] list value List of package logging levels in logger=LEVEL pairs. This option is ignored if log_config_append is set. fatal_deprecations = False boolean value Enables or disables fatal status of deprecations. `instance_format = [instance: %(uuid)s] ` string value The format for an instance that is passed with the log message. `instance_uuid_format = [instance: %(uuid)s] ` string value The format for an instance UUID that is passed with the log message. log-config-append = None string value The name of a logging configuration file. This file is appended to any existing logging configuration files. For details about logging configuration files, see the Python logging module documentation. Note that when logging configuration files are used then all logging configuration is set in the configuration file and other logging configuration options are ignored (for example, log-date-format). log-date-format = %Y-%m-%d %H:%M:%S string value Defines the format string for %%(asctime)s in log records. Default: %(default)s . This option is ignored if log_config_append is set. log-dir = None string value (Optional) The base directory used for relative log_file paths. This option is ignored if log_config_append is set. log-file = None string value (Optional) Name of log file to send logging output to. If no default is set, logging will go to stderr as defined by use_stderr. This option is ignored if log_config_append is set. log_rotate_interval = 1 integer value The amount of time before the log files are rotated. This option is ignored unless log_rotation_type is set to "interval". log_rotate_interval_type = days string value Rotation interval type. The time of the last file change (or the time when the service was started) is used when scheduling the rotation. log_rotation_type = none string value Log rotation type. logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s string value Format string to use for log messages with context. Used by oslo_log.formatters.ContextFormatter logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d string value Additional data to append to log message when logging level for the message is DEBUG. Used by oslo_log.formatters.ContextFormatter logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s string value Format string to use for log messages when context is undefined. Used by oslo_log.formatters.ContextFormatter logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s string value Prefix each line of exception output with this format. Used by oslo_log.formatters.ContextFormatter logging_user_identity_format = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s string value Defines the format string for %(user_identity)s that is used in logging_context_format_string. 
Used by oslo_log.formatters.ContextFormatter max_logfile_count = 30 integer value Maximum number of rotated log files. max_logfile_size_mb = 200 integer value Log file maximum size in MB. This option is ignored if "log_rotation_type" is not set to "size". publish_errors = False boolean value Enables or disables publication of error events. rate_limit_burst = 0 integer value Maximum number of logged messages per rate_limit_interval. rate_limit_except_level = CRITICAL string value Log level name used by rate limiting: CRITICAL, ERROR, INFO, WARNING, DEBUG or empty string. Logs with level greater or equal to rate_limit_except_level are not filtered. An empty string means that all levels are filtered. rate_limit_interval = 0 integer value Interval, number of seconds, of log rate limiting. rpc_response_max_timeout = 600 integer value Maximum seconds to wait for a response from an RPC call. syslog-log-facility = LOG_USER string value Syslog facility to receive log lines. This option is ignored if log_config_append is set. use-journal = False boolean value Enable journald for logging. If running in a systemd environment you may wish to enable journal support. Doing so will use the journal native protocol which includes structured metadata in addition to log messages.This option is ignored if log_config_append is set. use-json = False boolean value Use JSON formatting for logging. This option is ignored if log_config_append is set. use-syslog = False boolean value Use syslog for logging. Existing syslog format is DEPRECATED and will be changed later to honor RFC5424. This option is ignored if log_config_append is set. use_eventlog = False boolean value Log output to Windows Event Log. use_stderr = False boolean value Log output to standard error. This option is ignored if log_config_append is set. watch-log-file = False boolean value Uses logging handler designed to watch file system. When log file is moved or removed this handler will open a new log file with specified path instantaneously. It makes sense only if log_file option is specified and Linux platform is used. This option is ignored if log_config_append is set. 8.9.2. agent The following table outlines the options available under the [agent] group in the /etc/neutron/plugins/ml2/sriov_agent.ini file. Table 8.52. agent Configuration option = Default value Type Description extensions = [] list value Extensions list to use 8.9.3. sriov_nic The following table outlines the options available under the [sriov_nic] group in the /etc/neutron/plugins/ml2/sriov_agent.ini file. Table 8.53. sriov_nic Configuration option = Default value Type Description exclude_devices = [] list value Comma-separated list of <network_device>:<vfs_to_exclude> tuples, mapping network_device to the agent's node-specific list of virtual functions that should not be used for virtual networking. vfs_to_exclude is a semicolon-separated list of virtual functions to exclude from network_device. The network_device in the mapping should appear in the physical_device_mappings list. physical_device_mappings = [] list value Comma-separated list of <physical_network>:<network_device> tuples mapping physical network names to the agent's node-specific physical network device interfaces of SR-IOV physical function to be used for VLAN networks. All physical networks listed in network_vlan_ranges on the server should have mappings to appropriate interfaces on each agent. 
resource_provider_bandwidths = [] list value Comma-separated list of <network_device>:<egress_bw>:<ingress_bw> tuples, showing the available bandwidth for the given device in the given direction. The direction is meant from VM perspective. Bandwidth is measured in kilobits per second (kbps). The device must appear in physical_device_mappings as the value. But not all devices in physical_device_mappings must be listed here. For a device not listed here we neither create a resource provider in placement nor report inventories against. An omitted direction means we do not report an inventory for the corresponding class. resource_provider_default_hypervisor = None string value The default hypervisor name used to locate the parent of the resource provider. If this option is not set, canonical name is used resource_provider_hypervisors = {} dict value Mapping of network devices to hypervisors: <network_device>:<hypervisor>,... hypervisor name is used to locate the parent of the resource provider tree. Only needs to be set in the rare case when the hypervisor name is different from the resource_provider_default_hypervisor config option value as known by the nova-compute managing that hypervisor. resource_provider_inventory_defaults = {'allocation_ratio': 1.0, 'min_unit': 1, 'reserved': 0, 'step_size': 1} dict value Key:value pairs to specify defaults used while reporting resource provider inventories. Possible keys with their types: allocation_ratio:float, max_unit:int, min_unit:int, reserved:int, step_size:int, See also: https://docs.openstack.org/api-ref/placement/#update-resource-provider-inventories
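As an illustration of how these [sriov_nic] options fit together, the following is a minimal sketch of an /etc/neutron/plugins/ml2/sriov_agent.ini fragment. The physical network name physnet2, the interface ens785f0, the PCI addresses of the excluded virtual functions, the bandwidth figures, and the qos extension are placeholder assumptions chosen for illustration, not values taken from this reference:

[sriov_nic]
physical_device_mappings = physnet2:ens785f0
exclude_devices = ens785f0:0000:19:0a.0;0000:19:0a.1
resource_provider_bandwidths = ens785f0:10000000:10000000

[agent]
extensions = qos

After changing these options, the SR-IOV NIC agent on the affected compute node typically needs to be restarted for the new mappings to take effect.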
null
https://docs.redhat.com/en/documentation/red_hat_openstack_services_on_openshift/18.0/html/configuration_reference/neutron_2
Chapter 16. Configuring logging
Chapter 16. Configuring logging Red Hat build of Keycloak uses the JBoss Logging framework. The following is a high-level overview for the available log handlers with the common parent log handler root : console file syslog 16.1. Logging configuration Logging is done on a per-category basis in Red Hat build of Keycloak. You can configure logging for the root log level or for more specific categories such as org.hibernate or org.keycloak . It is also possible to tailor log levels for each particular log handler. This chapter describes how to configure logging. 16.1.1. Log levels The following table defines the available log levels. Level Description FATAL Critical failures with complete inability to serve any kind of request. ERROR A significant error or problem leading to the inability to process requests. WARN A non-critical error or problem that might not require immediate correction. INFO Red Hat build of Keycloak lifecycle events or important information. Low frequency. DEBUG More detailed information for debugging purposes, such as database logs. Higher frequency. TRACE Most detailed debugging information. Very high frequency. ALL Special level for all log messages. OFF Special level to turn logging off entirely (not recommended). 16.1.2. Configuring the root log level When no log level configuration exists for a more specific category logger, the enclosing category is used instead. When there is no enclosing category, the root logger level is used. To set the root log level, enter the following command: bin/kc.[sh|bat] start --log-level=<root-level> Use these guidelines for this command: For <root-level> , supply a level defined in the preceding table. The log level is case-insensitive. For example, you could either use DEBUG or debug . If you were to accidentally set the log level twice, the last occurrence in the list becomes the log level. For example, if you included the syntax --log-level="info,... ,DEBUG,... " , the root logger would be DEBUG . 16.1.3. Configuring category-specific log levels You can set different log levels for specific areas in Red Hat build of Keycloak. Use this command to provide a comma-separated list of categories for which you want a different log level: bin/kc.[sh|bat] start --log-level="<root-level>,<org.category1>:<org.category1-level>" A configuration that applies to a category also applies to its sub-categories unless you include a more specific matching sub-category. Example bin/kc.[sh|bat] start --log-level="INFO,org.hibernate:debug,org.hibernate.hql.internal.ast:info" This example sets the following log levels: Root log level for all loggers is set to INFO. The hibernate log level in general is set to debug. To keep SQL abstract syntax trees from creating verbose log output, the specific subcategory org.hibernate.hql.internal.ast is set to info. As a result, the SQL abstract syntax trees are omitted instead of appearing at the debug level. 16.2. Enabling log handlers To enable log handlers, enter the following command: bin/kc.[sh|bat] start --log="<handler1>,<handler2>" The available handlers are: console file syslog The more specific handler configuration mentioned below will only take effect when the handler is added to this comma-separated list. 16.3. Specify log level for each handler The log-level property specifies the global root log level and levels for selected categories. However, a more fine-grained approach for log levels is necessary to comply with the modern application requirements. 
To set log levels for particular handlers, properties in format log-<handler>-level (where <handler> is available log handler) were introduced. It means properties for log level settings look like this: log-console-level - Console log handler log-file-level - File log handler log-syslog-level - Syslog log handler Note The log-<handler>-level properties are available only when the particular log handlers are enabled. More information in log handlers settings below. Only log levels specified in Section 16.1.1, "Log levels" section are accepted, and must be in lowercase . There is no support for specifying particular categories for log handlers yet. 16.3.1. General principle It is necessary to understand that setting the log levels for each particular handler does not override the root level specified in the log-level property. Log handlers respect the root log level, which represents the maximal verbosity for the whole logging system. It means individual log handlers can be configured to be less verbose than the root logger, but not more. Specifically, when an arbitrary log level is defined for the handler, it does not mean the log records with the log level will be present in the output. In that case, the root log-level must also be assessed. Log handler levels provide the restriction for the root log level , and the default log level for log handlers is all - without any restriction. 16.3.2. Examples Example: debug for file handler, but info for console handler: bin/kc.[sh|bat] start --log=console,file --log-level=debug --log-console-level=info The root log level is set to debug , so every log handler inherits the value - so does the file log handler. To hide debug records in the console, we need to set the minimal (least severe) level to info for the console handler. Example: warn for all handlers, but debug for file handler: bin/kc.[sh|bat] start --log=console,file,syslog --log-level=debug --log-console-level=warn --log-syslog-level=warn The root level must be set to the most verbose required level ( debug in this case), and other log handlers must be amended accordingly. Example: info for all handlers, but debug + org.keycloak.events:trace for Syslog handler: bin/kc.[sh|bat] start --log=console,file,syslog --log-level=debug,org.keycloak.events:trace, --log-syslog-level=trace --log-console-level=info --log-file-level=info In order to see the org.keycloak.events:trace , the trace level must be set for the Syslog handler. 16.4. Console log handler The console log handler is enabled by default, providing unstructured log messages for the console. 16.4.1. Configuring the console log format Red Hat build of Keycloak uses a pattern-based logging formatter that generates human-readable text logs by default. The logging format template for these lines can be applied at the root level. The default format template is: %d{yyyy-MM-dd HH:mm:ss,SSS} %-5p [%c] (%t) %s%e%n The format string supports the symbols in the following table: Symbol Summary Description %% % Renders a simple % character. %c Category Renders the log category name. %d{xxx} Date Renders a date with the given date format string.String syntax defined by java.text.SimpleDateFormat %e Exception Renders a thrown exception. %h Hostname Renders the simple host name. %H Qualified host name Renders the fully qualified hostname, which may be the same as the simple host name, depending on the OS configuration. %i Process ID Renders the current process PID. %m Full Message Renders the log message and an exception, if thrown. 
%n Newline Renders the platform-specific line separator string. %N Process name Renders the name of the current process. %p Level Renders the log level of the message. %r Relative time Render the time in milliseconds since the start of the application log. %s Simple message Renders only the log message without exception trace. %t Thread name Renders the thread name. %t{id} Thread ID Render the thread ID. %z{<zone name>} Timezone Set the time zone of log output to <zone name>. %L Line number Render the line number of the log message. 16.4.2. Setting the logging format To set the logging format for a logged line, perform these steps: Build your desired format template using the preceding table. Enter the following command: bin/kc.[sh|bat] start --log-console-format="'<format>'" Note that you need to escape characters when invoking commands containing special shell characters such as ; using the CLI. Therefore, consider setting it in the configuration file instead. Example: Abbreviate the fully qualified category name bin/kc.[sh|bat] start --log-console-format="'%d{yyyy-MM-dd HH:mm:ss,SSS} %-5p [%c{3.}] (%t) %s%e%n'" This example abbreviates the category name to three characters by setting [%c{3.}] in the template instead of the default [%c] . 16.4.3. Configuring JSON or plain console logging By default, the console log handler logs plain unstructured data to the console. To use structured JSON log output instead, enter the following command: bin/kc.[sh|bat] start --log-console-output=json Example Log Message {"timestamp":"2022-02-25T10:31:32.452+01:00","sequence":8442,"loggerClassName":"org.jboss.logging.Logger","loggerName":"io.quarkus","level":"INFO","message":"Keycloak 18.0.0-SNAPSHOT on JVM (powered by Quarkus 2.7.2.Final) started in 3.253s. Listening on: http://0.0.0.0:8080","threadName":"main","threadId":1,"mdc":{},"ndc":"","hostName":"host-name","processName":"QuarkusEntryPoint","processId":36946} When using JSON output, colors are disabled and the format settings set by --log-console-format will not apply. To use unstructured logging, enter the following command: bin/kc.[sh|bat] start --log-console-output=default Example Log Message 16.4.4. Colors Colored console log output for unstructured logs is disabled by default. Colors may improve readability, but they can cause problems when shipping logs to external log aggregation systems. To enable or disable color-coded console log output, enter following command: bin/kc.[sh|bat] start --log-console-color=<false|true> 16.4.5. Configuring the console log level Log level for console log handler can be specified by --log-console-level property as follows: bin/kc.[sh|bat] start --log-console-level=warn For more information, see the section Section 16.3, "Specify log level for each handler" above. 16.5. File logging As an alternative to logging to the console, you can use unstructured logging to a file. 16.5.1. Enable file logging Logging to a file is disabled by default. To enable it, enter the following command: bin/kc.[sh|bat] start --log="console,file" A log file named keycloak.log is created inside the data/log directory of your Red Hat build of Keycloak installation. 16.5.2. Configuring the location and name of the log file To change where the log file is created and the file name, perform these steps: Create a writable directory to store the log file. If the directory is not writable, Red Hat build of Keycloak will start correctly, but it will issue an error and no log file will be created. 
Enter this command: bin/kc.[sh|bat] start --log="console,file" --log-file=<path-to>/<your-file.log> 16.5.3. Configuring the file handler format To configure a different logging format for the file log handler, enter the following command: bin/kc.[sh|bat] start --log-file-format="<pattern>" See Section 16.4.1, "Configuring the console log format" for more information and a table of the available pattern configuration. 16.5.4. Configuring the file log level Log level for file log handler can be specified by --log-file-level property as follows: bin/kc.[sh|bat] start --log-file-level=warn For more information, see the section Section 16.3, "Specify log level for each handler" above. 16.6. Centralized logging using Syslog Red Hat build of Keycloak provides the ability to send logs to a remote Syslog server. It utilizes the protocol defined in RFC 5424 . 16.6.1. Enable the Syslog handler To enable logging using Syslog, add it to the list of activated log handlers as follows: bin/kc.[sh|bat] start --log="console,syslog" 16.6.2. Configuring the Syslog Application Name To set a different application name, add the --log-syslog-app-name option as follows: bin/kc.[sh|bat] start --log="console,syslog" --log-syslog-app-name=kc-p-itadmins If not set, the application name defaults to keycloak . 16.6.3. Configuring the Syslog endpoint To configure the endpoint( host:port ) of your centralized logging system, enter the following command and substitute the values with your specific values: bin/kc.[sh|bat] start --log="console,syslog" --log-syslog-endpoint=myhost:12345 When the Syslog handler is enabled, the host is using localhost as host value. The Default port is 514 . 16.6.4. Configuring the Syslog log level Log level for Syslog log handler can be specified by --log-syslog-level property as follows: bin/kc.[sh|bat] start --log-syslog-level=warn For more information, see the section Section 16.3, "Specify log level for each handler" above. 16.6.5. Configuring the Syslog protocol Syslog uses TCP as the default protocol for communication. To use UDP instead of TCP, add the --log-syslog-protocol option as follows: bin/kc.[sh|bat] start --log="console,syslog" --log-syslog-protocol=udp The available protocols are: tcp , udp , and ssl-tcp . 16.6.6. Configuring the Syslog log format To set the logging format for a logged line, perform these steps: Build your desired format template using the preceding table. Enter the following command: bin/kc.[sh|bat] start --log-syslog-format="'<format>'" Note that you need to escape characters when invoking commands containing special shell characters such as ; using the CLI. Therefore, consider setting it in the configuration file instead. Example: Abbreviate the fully qualified category name bin/kc.[sh|bat] start --log-syslog-format="'%d{yyyy-MM-dd HH:mm:ss,SSS} %-5p [%c{3.}] (%t) %s%e%n'" This example abbreviates the category name to three characters by setting [%c{3.}] in the template instead of the default [%c] . 16.6.7. Configuring the Syslog type Syslog uses different message formats based on particular RFC specifications. To change the Syslog type with a different message format, use the --log-syslog-type option as follows: bin/kc.[sh|bat] start --log-syslog-type=rfc3164 Possible values for the --log-syslog-type option are: rfc5424 (default) rfc3164 The preferred Syslog type is RFC 5424 , which obsoletes RFC 3164 , known as BSD Syslog protocol. 16.6.8. 
Configuring the Syslog maximum message length To set the maximum length of the message allowed to be sent (in bytes), use the --log-syslog-max-length option as follows: bin/kc.[sh|bat] start --log-syslog-max-length=1536 The length can be specified in memory size format with the appropriate suffix, like 1k or 1K . The length includes the header and the message. If the length is not explicitly set, the default values are set based on the --log-syslog-type option as follows: 2048B - for RFC 5424 1024B - for RFC 3164 16.6.9. Configuring the Syslog structured output By default, the Syslog log handler sends plain unstructured data to the Syslog server. To use structured JSON log output instead, enter the following command: bin/kc.[sh|bat] start --log-syslog-output=json Example Log Message 2024-04-05T12:32:20.616+02:00 host keycloak 2788276 io.quarkus - {"timestamp":"2024-04-05T12:32:20.616208533+02:00","sequence":9948,"loggerClassName":"org.jboss.logging.Logger","loggerName":"io.quarkus","level":"INFO","message":"Profile prod activated. ","threadName":"main","threadId":1,"mdc":{},"ndc":"","hostName":"host","processName":"QuarkusEntryPoint","processId":2788276} When using JSON output, colors are disabled and the format settings set by --log-syslog-format will not apply. To use unstructured logging, enter the following command: bin/kc.[sh|bat] start --log-syslog-output=default Example Log Message 2024-04-05T12:31:38.473+02:00 host keycloak 2787568 io.quarkus - 2024-04-05 12:31:38,473 INFO [io.quarkus] (main) Profile prod activated. As you can see, the timestamp is present twice, so you can amend it correspondingly via the --log-syslog-format property. 16.7. Relevant options Value log Enable one or more log handlers in a comma-separated list. CLI: --log Env: KC_LOG console , file , syslog log-console-color Enable or disable colors when logging to console. CLI: --log-console-color Env: KC_LOG_CONSOLE_COLOR Available only when Console log handler is activated true , false (default) log-console-format The format of unstructured console log entries. If the format has spaces in it, escape the value using "<format>". CLI: --log-console-format Env: KC_LOG_CONSOLE_FORMAT Available only when Console log handler is activated %d{yyyy-MM-dd HH:mm:ss,SSS} %-5p [%c] (%t) %s%e%n (default) log-console-include-trace Include tracing information in the console log. If the log-console-format option is specified, this option has no effect. CLI: --log-console-include-trace Env: KC_LOG_CONSOLE_INCLUDE_TRACE Available only when Console log handler and Tracing is activated true (default), false log-console-level Set the log level for the console handler. It specifies the most verbose log level for logs shown in the output. It respects levels specified in the log-level option, which represents the maximal verbosity for the whole logging system. For more information, check the Logging guide. CLI: --log-console-level Env: KC_LOG_CONSOLE_LEVEL Available only when Console log handler is activated off , fatal , error , warn , info , debug , trace , all (default) log-console-output Set the log output to JSON or default (plain) unstructured logging. CLI: --log-console-output Env: KC_LOG_CONSOLE_OUTPUT Available only when Console log handler is activated default (default), json log-file Set the log file path and filename. CLI: --log-file Env: KC_LOG_FILE Available only when File log handler is activated data/log/keycloak.log (default) log-file-format Set a format specific to file log entries. 
CLI: --log-file-format Env: KC_LOG_FILE_FORMAT Available only when File log handler is activated %d{yyyy-MM-dd HH:mm:ss,SSS} %-5p [%c] (%t) %s%e%n (default) log-file-include-trace Include tracing information in the file log. If the log-file-format option is specified, this option has no effect. CLI: --log-file-include-trace Env: KC_LOG_FILE_INCLUDE_TRACE Available only when File log handler and Tracing is activated true (default), false log-file-level Set the log level for the file handler. It specifies the most verbose log level for logs shown in the output. It respects levels specified in the log-level option, which represents the maximal verbosity for the whole logging system. For more information, check the Logging guide. CLI: --log-file-level Env: KC_LOG_FILE_LEVEL Available only when File log handler is activated off , fatal , error , warn , info , debug , trace , all (default) log-file-output Set the log output to JSON or default (plain) unstructured logging. CLI: --log-file-output Env: KC_LOG_FILE_OUTPUT Available only when File log handler is activated default (default), json log-level The log level of the root category or a comma-separated list of individual categories and their levels. For the root category, you don't need to specify a category. CLI: --log-level Env: KC_LOG_LEVEL [info] (default) log-syslog-app-name Set the app name used when formatting the message in RFC5424 format. CLI: --log-syslog-app-name Env: KC_LOG_SYSLOG_APP_NAME Available only when Syslog is activated keycloak (default) log-syslog-endpoint Set the IP address and port of the Syslog server. CLI: --log-syslog-endpoint Env: KC_LOG_SYSLOG_ENDPOINT Available only when Syslog is activated localhost:514 (default) log-syslog-format Set a format specific to Syslog entries. CLI: --log-syslog-format Env: KC_LOG_SYSLOG_FORMAT Available only when Syslog is activated %d{yyyy-MM-dd HH:mm:ss,SSS} %-5p [%c] (%t) %s%e%n (default) log-syslog-include-trace Include tracing information in the Syslog. If the log-syslog-format option is specified, this option has no effect. CLI: --log-syslog-include-trace Env: KC_LOG_SYSLOG_INCLUDE_TRACE Available only when Syslog handler and Tracing is activated true (default), false log-syslog-level Set the log level for the Syslog handler. It specifies the most verbose log level for logs shown in the output. It respects levels specified in the log-level option, which represents the maximal verbosity for the whole logging system. For more information, check the Logging guide. CLI: --log-syslog-level Env: KC_LOG_SYSLOG_LEVEL Available only when Syslog is activated off , fatal , error , warn , info , debug , trace , all (default) log-syslog-max-length Set the maximum length, in bytes, of the message allowed to be sent. The length includes the header and the message. If not set, the default value is 2048 when log-syslog-type is rfc5424 (default) and 1024 when log-syslog-type is rfc3164. CLI: --log-syslog-max-length Env: KC_LOG_SYSLOG_MAX_LENGTH Available only when Syslog is activated log-syslog-output Set the Syslog output to JSON or default (plain) unstructured logging. CLI: --log-syslog-output Env: KC_LOG_SYSLOG_OUTPUT Available only when Syslog is activated default (default), json log-syslog-protocol Set the protocol used to connect to the Syslog server. CLI: --log-syslog-protocol Env: KC_LOG_SYSLOG_PROTOCOL Available only when Syslog is activated tcp (default), udp , ssl-tcp log-syslog-type Set the Syslog type used to format the sent message. 
CLI: --log-syslog-type Env: KC_LOG_SYSLOG_TYPE Available only when Syslog is activated rfc5424 (default), rfc3164
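Because format strings and other values often contain characters that the shell treats specially, it can be easier to keep logging settings in the server configuration file instead of passing them on the command line, as recommended earlier in this chapter. The following is only a sketch: the conf/keycloak.conf location and the log file path are assumptions based on a default installation layout, and the levels simply mirror the earlier example of a debug file log with an info console log:

# conf/keycloak.conf (sketch, placeholder values)
log=console,file
log-level=debug
log-console-level=info
log-console-output=json
log-file=/var/log/keycloak/keycloak.log
log-file-format=%d{yyyy-MM-dd HH:mm:ss,SSS} %-5p [%c{3.}] (%t) %s%e%n

With values like these in place, bin/kc.[sh|bat] start needs no additional --log-* arguments; each key corresponds to the CLI option of the same name without the leading --.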
[ "bin/kc.[sh|bat] start --log-level=<root-level>", "bin/kc.[sh|bat] start --log-level=\"<root-level>,<org.category1>:<org.category1-level>\"", "bin/kc.[sh|bat] start --log-level=\"INFO,org.hibernate:debug,org.hibernate.hql.internal.ast:info\"", "bin/kc.[sh|bat] start --log=\"<handler1>,<handler2>\"", "bin/kc.[sh|bat] start --log=console,file --log-level=debug --log-console-level=info", "bin/kc.[sh|bat] start --log=console,file,syslog --log-level=debug --log-console-level=warn --log-syslog-level=warn", "bin/kc.[sh|bat] start --log=console,file,syslog --log-level=debug,org.keycloak.events:trace, --log-syslog-level=trace --log-console-level=info --log-file-level=info", "bin/kc.[sh|bat] start --log-console-format=\"'<format>'\"", "bin/kc.[sh|bat] start --log-console-format=\"'%d{yyyy-MM-dd HH:mm:ss,SSS} %-5p [%c{3.}] (%t) %s%e%n'\"", "bin/kc.[sh|bat] start --log-console-output=json", "{\"timestamp\":\"2022-02-25T10:31:32.452+01:00\",\"sequence\":8442,\"loggerClassName\":\"org.jboss.logging.Logger\",\"loggerName\":\"io.quarkus\",\"level\":\"INFO\",\"message\":\"Keycloak 18.0.0-SNAPSHOT on JVM (powered by Quarkus 2.7.2.Final) started in 3.253s. Listening on: http://0.0.0.0:8080\",\"threadName\":\"main\",\"threadId\":1,\"mdc\":{},\"ndc\":\"\",\"hostName\":\"host-name\",\"processName\":\"QuarkusEntryPoint\",\"processId\":36946}", "bin/kc.[sh|bat] start --log-console-output=default", "2022-03-02 10:36:50,603 INFO [io.quarkus] (main) Keycloak 18.0.0-SNAPSHOT on JVM (powered by Quarkus 2.7.2.Final) started in 3.615s. Listening on: http://0.0.0.0:8080", "bin/kc.[sh|bat] start --log-console-color=<false|true>", "bin/kc.[sh|bat] start --log-console-level=warn", "bin/kc.[sh|bat] start --log=\"console,file\"", "bin/kc.[sh|bat] start --log=\"console,file\" --log-file=<path-to>/<your-file.log>", "bin/kc.[sh|bat] start --log-file-format=\"<pattern>\"", "bin/kc.[sh|bat] start --log-file-level=warn", "bin/kc.[sh|bat] start --log=\"console,syslog\"", "bin/kc.[sh|bat] start --log=\"console,syslog\" --log-syslog-app-name=kc-p-itadmins", "bin/kc.[sh|bat] start --log=\"console,syslog\" --log-syslog-endpoint=myhost:12345", "bin/kc.[sh|bat] start --log-syslog-level=warn", "bin/kc.[sh|bat] start --log=\"console,syslog\" --log-syslog-protocol=udp", "bin/kc.[sh|bat] start --log-syslog-format=\"'<format>'\"", "bin/kc.[sh|bat] start --log-syslog-format=\"'%d{yyyy-MM-dd HH:mm:ss,SSS} %-5p [%c{3.}] (%t) %s%e%n'\"", "bin/kc.[sh|bat] start --log-syslog-type=rfc3164", "bin/kc.[sh|bat] start --log-syslog-max-length=1536", "bin/kc.[sh|bat] start --log-syslog-output=json", "2024-04-05T12:32:20.616+02:00 host keycloak 2788276 io.quarkus - {\"timestamp\":\"2024-04-05T12:32:20.616208533+02:00\",\"sequence\":9948,\"loggerClassName\":\"org.jboss.logging.Logger\",\"loggerName\":\"io.quarkus\",\"level\":\"INFO\",\"message\":\"Profile prod activated. \",\"threadName\":\"main\",\"threadId\":1,\"mdc\":{},\"ndc\":\"\",\"hostName\":\"host\",\"processName\":\"QuarkusEntryPoint\",\"processId\":2788276}", "bin/kc.[sh|bat] start --log-syslog-output=default", "2024-04-05T12:31:38.473+02:00 host keycloak 2787568 io.quarkus - 2024-04-05 12:31:38,473 INFO [io.quarkus] (main) Profile prod activated." ]
https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/26.0/html/server_configuration_guide/logging-
Chapter 29. Ruby (DEPRECATED)
Chapter 29. Ruby (DEPRECATED) Overview Ruby is a dynamic, open source programming language with a focus on simplicity and productivity. It has an elegant syntax that is natural to read and easy to write. The Ruby support is part of the camel-script module. Important Ruby in Apache Camel is deprecated and will be removed in a future release. Adding the script module To use Ruby in your routes you need to add a dependency on camel-script to your project as shown in Example 29.1, "Adding the camel-script dependency" . Example 29.1. Adding the camel-script dependency Static import To use the ruby() static method in your application code, include the following import statement in your Java source files: Built-in attributes Table 29.1, "Ruby attributes" lists the built-in attributes that are accessible when using Ruby. Table 29.1. Ruby attributes Attribute Type Value context org.apache.camel.CamelContext The Camel Context exchange org.apache.camel.Exchange The current Exchange request org.apache.camel.Message The IN message response org.apache.camel.Message The OUT message properties org.apache.camel.builder.script.PropertiesFunction Function with a resolve method to make it easier to use the properties component inside scripts. The attributes are all set at ENGINE_SCOPE . Example Example 29.2, "Route using Ruby" shows a route that uses Ruby. Example 29.2. Route using Ruby Using the properties component To access a property value from the properties component, invoke the resolve method on the built-in properties attribute, as follows: Where PropKey is the key of the property you want to resolve; the key value is of String type. For more details about the properties component, see Properties in the Apache Camel Component Reference Guide .
[ "<!-- Maven POM File --> <dependencies> <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-script</artifactId> <version>USD{camel-version}</version> </dependency> </dependencies>", "import static org.apache.camel.builder.script.ScriptBuilder.*;", "<camelContext> <route> <from uri=\"direct:start\"/> <choice> <when> <langauge langauge=\"ruby\">USDrequest.headers['user'] == 'admin'</langauge> <to uri=\"seda:adminQueue\"/> </when> <otherwise> <to uri=\"seda:regularQueue\"/> </otherwise> </choice> </route> </camelContext>", ".setHeader(\"myHeader\").ruby(\"properties.resolve( PropKey )\")" ]
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_development_guide/ruby
5.3. Moving root File Systems from a Single Path Device to a Multipath Device
5.3. Moving root File Systems from a Single Path Device to a Multipath Device If you have installed your system on a single-path device and later add another path to the root file system, you will need to move your root file system to a multipathed device. This section documents the procedure for moving from a single-path to a multipathed device. After ensuring that you have installed the device-mapper-multipath package, perform the following procedure: Execute the following command to create the /etc/multipath.conf configuration file, load the multipath module, and set chkconfig for the multipathd to on : For further information on using the mpathconf command to set up multipathing, see Section 3.1, "Setting Up DM Multipath" . If the find_multipaths configuration parameter is not set to yes , edit the blacklist and blacklist_exceptions sections of the /etc/multipath.conf file, as described in Section 4.2, "Configuration File Blacklist" . In order for multipath to build a multipath device on top of the root device as soon as it is discovered, enter the following command. This command also ensures that find_multipaths will allow the device, even if it only has one path. For example, if the root device is /dev/sdb , enter the following command. To confirm that your configuration file is set up correctly, you can enter the multipath command and search the output for a line of the following format. This indicates that the command failed to create the multipath device. For example, if the WWID of the device is 3600d02300069c9ce09d41c4ac9c53200, you would see a line in the output such as the following: To rebuild the initramfs file system with multipath , execute the dracut command with the following options: Shut the machine down. Configure the FC switch so that other paths are visible to the machine. Boot the machine. Check whether the root file system ('/') is on the multipathed device.
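For the final verification step, one possible way to check that the root file system is now on the multipathed device is to query the source of the / mount and the block device tree, for example:

findmnt -n -o SOURCE /
lsblk

If the move succeeded, findmnt reports a device-mapper path such as /dev/mapper/mpatha1 (the exact name depends on your multipath naming or aliases) rather than a single-path device such as /dev/sdb1, and lsblk shows the root partition stacked under the multipath device. These commands are offered only as one possible check; the procedure above does not mandate a specific verification command.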
[ "mpathconf --enable", "multipath -a root_devname", "multipath -a /dev/sdb wwid '3600d02300069c9ce09d41c4ac9c53200' added", "date wwid : ignoring map", "multipath Oct 21 09:37:19 | 3600d02300069c9ce09d41c4ac9c53200: ignoring map", "dracut --force -H --add multipath" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/dm_multipath/move_root_to_multipath
Chapter 11. Disabling Windows container workloads
Chapter 11. Disabling Windows container workloads You can disable the capability to run Windows container workloads by uninstalling the Windows Machine Config Operator (WMCO) and deleting the namespace that was added by default when you installed the WMCO. 11.1. Uninstalling the Windows Machine Config Operator You can uninstall the Windows Machine Config Operator (WMCO) from your cluster. Prerequisites Delete the Windows Machine objects hosting your Windows workloads. Procedure From the Operators → OperatorHub page, use the Filter by keyword box to search for Red Hat Windows Machine Config Operator . Click the Red Hat Windows Machine Config Operator tile. The Operator tile indicates it is installed. In the Windows Machine Config Operator descriptor page, click Uninstall . 11.2. Deleting the Windows Machine Config Operator namespace You can delete the namespace that was generated for the Windows Machine Config Operator (WMCO) by default. Prerequisites The WMCO is removed from your cluster. Procedure Remove all Windows workloads that were created in the openshift-windows-machine-config-operator namespace: USD oc delete --all pods --namespace=openshift-windows-machine-config-operator Verify that all pods in the openshift-windows-machine-config-operator namespace are deleted or are reporting a terminating state: USD oc get pods --namespace openshift-windows-machine-config-operator Delete the openshift-windows-machine-config-operator namespace: USD oc delete namespace openshift-windows-machine-config-operator Additional resources Deleting Operators from a cluster Removing Windows nodes
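As an optional final check, not part of the documented procedure, you can confirm that the namespace has been removed; the following command is one possible way to do this:

USD oc get namespace openshift-windows-machine-config-operator

Once the namespace and its resources are gone, the command is expected to return a NotFound error rather than listing the namespace.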
[ "oc delete --all pods --namespace=openshift-windows-machine-config-operator", "oc get pods --namespace openshift-windows-machine-config-operator", "oc delete namespace openshift-windows-machine-config-operator" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/windows_container_support_for_openshift/disabling-windows-container-workloads
Chapter 14. Access Control Lists
Chapter 14. Access Control Lists Files and directories have permission sets for the owner of the file, the group associated with the file, and all other users for the system. However, these permission sets have limitations. For example, different permissions cannot be configured for different users. Thus, Access Control Lists (ACLs) were implemented. The Red Hat Enterprise Linux 4 kernel provides ACL support for the ext3 file system and NFS-exported file systems. ACLs are also recognized on ext3 file systems accessed via Samba. Along with support in the kernel, the acl package is required to implement ACLs. It contains the utilities used to add, modify, remove, and retrieve ACL information. The cp and mv commands copy or move any ACLs associated with files and directories. 14.1. Mounting File Systems Before using ACLs for a file or directory, the partition for the file or directory must be mounted with ACL support. If it is a local ext3 file system, it can be mounted with the following command: For example: Alternatively, if the partition is listed in the /etc/fstab file, the entry for the partition can include the acl option: If an ext3 file system is accessed via Samba and ACLs have been enabled for it, the ACLs are recognized because Samba has been compiled with the --with-acl-support option. No special flags are required when accessing or mounting a Samba share. 14.1.1. NFS By default, if the file system being exported by an NFS server supports ACLs and the NFS client can read ACLs, ACLs are utilized by the client system. To disable ACLs on NFS shares when configuring the server, include the no_acl option in the /etc/exports file. To disable ACLs on an NFS share when mounting it on a client, mount it with the no_acl option via the command line or the /etc/fstab file.
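For illustration, a client-side NFS mount that disables ACLs with the no_acl option could look like the following sketch; the server name, export path, and mount point are placeholders and are not taken from this chapter:

mount -t nfs -o no_acl server.example.com:/export /mnt/export

or, as an equivalent /etc/fstab entry:

server.example.com:/export /mnt/export nfs no_acl 0 0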
[ "mount -t ext3 -o acl <device-name> <partition>", "mount -t ext3 -o acl /dev/VolGroup00/LogVol02 /work", "LABEL=/work /work ext3 acl 1 2" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/Access_Control_Lists
Virtualization
Virtualization OpenShift Container Platform 4.17 OpenShift Virtualization installation, usage, and release notes Red Hat OpenShift Documentation Team
[ "oc get scc kubevirt-controller -o yaml", "oc get clusterrole kubevirt-controller -o yaml", "tar -xvf <virtctl-version-distribution.arch>.tar.gz", "chmod +x <path/virtctl-file-name>", "echo USDPATH", "export KUBECONFIG=/home/<user>/clusters/current/auth/kubeconfig", "C:\\> path", "echo USDPATH", "subscription-manager repos --enable cnv-4.17-for-rhel-8-x86_64-rpms", "yum install kubevirt-virtctl", "oc patch hyperconverged kubevirt-hyperconverged -n openshift-cnv --type json -p '[{\"op\": \"add\", \"path\": \"/spec/featureGates\", \"value\": \"HotplugVolumes\"}]'", "virtctl vmexport download <vmexport_name> --vm|pvc=<object_name> --volume=<volume_name> --output=<output_file>", "virtctl guestfs -n <namespace> <pvc_name> 1", "Product of (Maximum number of nodes that can drain in parallel) and (Highest total VM memory request allocations across nodes)", "Memory overhead per infrastructure node ~ 150 MiB", "Memory overhead per worker node ~ 360 MiB", "Memory overhead per virtual machine ~ (1.002 x requested memory) + 218 MiB \\ 1 + 8 MiB x (number of vCPUs) \\ 2 + 16 MiB x (number of graphics devices) \\ 3 + (additional memory overhead) 4", "CPU overhead for infrastructure nodes ~ 4 cores", "CPU overhead for worker nodes ~ 2 cores + CPU overhead per virtual machine", "Aggregated storage overhead per node ~ 10 GiB", "apiVersion: v1 kind: Namespace metadata: name: openshift-cnv --- apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: kubevirt-hyperconverged-group namespace: openshift-cnv spec: targetNamespaces: - openshift-cnv --- apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: hco-operatorhub namespace: openshift-cnv spec: source: redhat-operators sourceNamespace: openshift-marketplace name: kubevirt-hyperconverged startingCSV: kubevirt-hyperconverged-operator.v4.17.5 channel: \"stable\" 1", "oc apply -f <file name>.yaml", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec:", "oc apply -f <file_name>.yaml", "watch oc get csv -n openshift-cnv", "NAME DISPLAY VERSION REPLACES PHASE kubevirt-hyperconverged-operator.v4.17.5 OpenShift Virtualization 4.17.5 Succeeded", "oc delete HyperConverged kubevirt-hyperconverged -n openshift-cnv", "oc delete subscription kubevirt-hyperconverged -n openshift-cnv", "oc delete csv -n openshift-cnv -l operators.coreos.com/kubevirt-hyperconverged.openshift-cnv", "oc delete namespace openshift-cnv", "oc delete crd --dry-run=client -l operators.coreos.com/kubevirt-hyperconverged.openshift-cnv", "customresourcedefinition.apiextensions.k8s.io \"cdis.cdi.kubevirt.io\" deleted (dry run) customresourcedefinition.apiextensions.k8s.io \"hostpathprovisioners.hostpathprovisioner.kubevirt.io\" deleted (dry run) customresourcedefinition.apiextensions.k8s.io \"hyperconvergeds.hco.kubevirt.io\" deleted (dry run) customresourcedefinition.apiextensions.k8s.io \"kubevirts.kubevirt.io\" deleted (dry run) customresourcedefinition.apiextensions.k8s.io \"networkaddonsconfigs.networkaddonsoperator.network.kubevirt.io\" deleted (dry run) customresourcedefinition.apiextensions.k8s.io \"ssps.ssp.kubevirt.io\" deleted (dry run) customresourcedefinition.apiextensions.k8s.io \"tektontasks.tektontasks.kubevirt.io\" deleted (dry run)", "oc delete crd -l operators.coreos.com/kubevirt-hyperconverged.openshift-cnv", "oc edit <resource_type> <resource_name> -n {CNVNamespace}", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: hco-operatorhub 
namespace: openshift-cnv spec: source: redhat-operators sourceNamespace: openshift-marketplace name: kubevirt-hyperconverged startingCSV: kubevirt-hyperconverged-operator.v4.17.5 channel: \"stable\" config: nodeSelector: example.io/example-infra-key: example-infra-value 1", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: hco-operatorhub namespace: openshift-cnv spec: source: redhat-operators sourceNamespace: openshift-marketplace name: kubevirt-hyperconverged startingCSV: kubevirt-hyperconverged-operator.v4.17.5 channel: \"stable\" config: tolerations: - key: \"key\" operator: \"Equal\" value: \"virtualization\" 1 effect: \"NoSchedule\"", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: infra: nodePlacement: nodeSelector: example.io/example-infra-key: example-infra-value 1 workloads: nodePlacement: nodeSelector: example.io/example-workloads-key: example-workloads-value 2", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: infra: nodePlacement: affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: example.io/example-infra-key operator: In values: - example-infra-value 1 workloads: nodePlacement: affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: example.io/example-workloads-key 2 operator: In values: - example-workloads-value preferredDuringSchedulingIgnoredDuringExecution: - weight: 1 preference: matchExpressions: - key: example.io/num-cpus operator: Gt values: - 8 3", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: workloads: nodePlacement: tolerations: 1 - key: \"key\" operator: \"Equal\" value: \"virtualization\" effect: \"NoSchedule\"", "apiVersion: hostpathprovisioner.kubevirt.io/v1beta1 kind: HostPathProvisioner metadata: name: hostpath-provisioner spec: imagePullPolicy: IfNotPresent pathConfig: path: \"</path/to/backing/directory>\" useNamingPrefix: false workload: nodeSelector: example.io/example-workloads-key: example-workloads-value 1", "apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: br1-eth1-policy 1 spec: desiredState: interfaces: - name: br1 2 description: Linux bridge with eth1 as a port 3 type: linux-bridge 4 state: up 5 ipv4: enabled: false 6 bridge: options: stp: enabled: false 7 port: - name: eth1 8", "apiVersion: \"k8s.cni.cncf.io/v1\" kind: NetworkAttachmentDefinition metadata: name: my-secondary-network 1 namespace: openshift-cnv spec: config: '{ \"cniVersion\": \"0.3.1\", \"name\": \"migration-bridge\", \"type\": \"macvlan\", \"master\": \"eth1\", 2 \"mode\": \"bridge\", \"ipam\": { \"type\": \"whereabouts\", 3 \"range\": \"10.200.5.0/24\" 4 } }'", "oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: liveMigrationConfig: completionTimeoutPerGiB: 800 network: <network> 1 parallelMigrationsPerCluster: 5 parallelOutboundMigrationsPerNode: 2 progressTimeout: 150", "oc get vmi <vmi_name> -o jsonpath='{.status.migrationState.targetNodeAddress}'", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: <name> 1 namespace: openshift-sriov-network-operator 2 spec: resourceName: 
<sriov_resource_name> 3 nodeSelector: feature.node.kubernetes.io/network-sriov.capable: \"true\" 4 priority: <priority> 5 mtu: <mtu> 6 numVfs: <num> 7 nicSelector: 8 vendor: \"<vendor_code>\" 9 deviceID: \"<device_id>\" 10 pfNames: [\"<pf_name>\", ...] 11 rootDevices: [\"<pci_bus_id>\", \"...\"] 12 deviceType: vfio-pci 13 isRdma: false 14", "oc create -f <name>-sriov-node-network.yaml", "oc get sriovnetworknodestates -n openshift-sriov-network-operator <node_name> -o jsonpath='{.status.syncStatus}'", "apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: hostpath-csi provisioner: kubevirt.io.hostpath-provisioner reclaimPolicy: Delete 1 volumeBindingMode: WaitForFirstConsumer 2 parameters: storagePool: my-storage-pool 3", "oc create -f storageclass_csi.yaml", "apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: custom-config spec: machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: '' # MCP #machine.openshift.io/cluster-api-machine-role: worker # machine #node-role.kubernetes.io/worker: '' # node kubeletConfig: failSwapOn: false", "oc wait mcp worker --for condition=Updated=True --timeout=-1s", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 90-worker-swap spec: config: ignition: version: 3.4.0 systemd: units: - contents: | [Unit] Description=Provision and enable swap ConditionFirstBoot=no ConditionPathExists=!/var/tmp/swapfile [Service] Type=oneshot Environment=SWAP_SIZE_MB=5000 ExecStart=/bin/sh -c \"sudo dd if=/dev/zero of=/var/tmp/swapfile count=USD{SWAP_SIZE_MB} bs=1M && sudo chmod 600 /var/tmp/swapfile && sudo mkswap /var/tmp/swapfile && sudo swapon /var/tmp/swapfile && free -h\" [Install] RequiredBy=kubelet-dependencies.target enabled: true name: swap-provision.service - contents: | [Unit] Description=Restrict swap for system slice ConditionFirstBoot=no [Service] Type=oneshot ExecStart=/bin/sh -c \"sudo systemctl set-property --runtime system.slice MemorySwapMax=0 IODeviceLatencyTargetSec=\\\"/ 50ms\\\"\" [Install] RequiredBy=kubelet-dependencies.target enabled: true name: cgroup-system-slice-config.service", "NODE_SWAP_SPACE = NODE_RAM * (MEMORY_OVER_COMMIT_PERCENT / 100% - 1)", "NODE_SWAP_SPACE = 16 GB * (150% / 100% - 1) = 16 GB * (1.5 - 1) = 16 GB * (0.5) = 8 GB", "oc adm new-project wasp", "oc create sa -n wasp wasp", "oc create clusterrolebinding wasp --clusterrole=cluster-admin --serviceaccount=wasp:wasp", "oc adm policy add-scc-to-user -n wasp privileged -z wasp", "oc wait mcp worker --for condition=Updated=True --timeout=-1s", "oc get csv -n openshift-cnv -l=operators.coreos.com/kubevirt-hyperconverged.openshift-cnv -ojson | jq '.items[0].spec.relatedImages[] | select(.name|test(\".*wasp-agent.*\")) | .image'", "kind: DaemonSet apiVersion: apps/v1 metadata: name: wasp-agent namespace: wasp labels: app: wasp tier: node spec: selector: matchLabels: name: wasp template: metadata: annotations: description: >- Configures swap for workloads labels: name: wasp spec: containers: - env: - name: SWAP_UTILIZATION_THRESHOLD_FACTOR value: \"0.8\" - name: MAX_AVERAGE_SWAP_IN_PAGES_PER_SECOND value: \"1000000000\" - name: MAX_AVERAGE_SWAP_OUT_PAGES_PER_SECOND value: \"1000000000\" - name: AVERAGE_WINDOW_SIZE_SECONDS value: \"30\" - name: VERBOSITY value: \"1\" - name: FSROOT value: /host - name: NODE_NAME valueFrom: fieldRef: fieldPath: spec.nodeName image: >- quay.io/openshift-virtualization/wasp-agent:v4.17 1 
imagePullPolicy: Always name: wasp-agent resources: requests: cpu: 100m memory: 50M securityContext: privileged: true volumeMounts: - mountPath: /host name: host - mountPath: /rootfs name: rootfs hostPID: true hostUsers: true priorityClassName: system-node-critical serviceAccountName: wasp terminationGracePeriodSeconds: 5 volumes: - hostPath: path: / name: host - hostPath: path: / name: rootfs updateStrategy: type: RollingUpdate rollingUpdate: maxUnavailable: 10% maxSurge: 0", "apiVersion: monitoring.coreos.com/v1 kind: PrometheusRule metadata: labels: tier: node wasp.io: \"\" name: wasp-rules namespace: wasp spec: groups: - name: alerts.rules rules: - alert: NodeHighSwapActivity annotations: description: High swap activity detected at {{ USDlabels.instance }}. The rate of swap out and swap in exceeds 200 in both operations in the last minute. This could indicate memory pressure and may affect system performance. runbook_url: https://github.com/openshift-virtualization/wasp-agent/tree/main/docs/runbooks/NodeHighSwapActivity.md summary: High swap activity detected at {{ USDlabels.instance }}. expr: rate(node_vmstat_pswpout[1m]) > 200 and rate(node_vmstat_pswpin[1m]) > 200 for: 1m labels: kubernetes_operator_component: kubevirt kubernetes_operator_part_of: kubevirt operator_health_impact: warning severity: warning", "oc label namespace wasp openshift.io/cluster-monitoring=\"true\"", "oc -n openshift-cnv patch HyperConverged/kubevirt-hyperconverged --type='json' -p='[ { \"op\": \"replace\", \"path\": \"/spec/higherWorkloadDensity/memoryOvercommitPercentage\", \"value\": 150 } ]'", "hyperconverged.hco.kubevirt.io/kubevirt-hyperconverged patched", "oc rollout status ds wasp-agent -n wasp", "daemon set \"wasp-agent\" successfully rolled out", "oc get nodes -l node-role.kubernetes.io/worker", "oc debug node/<selected_node> -- free -m 1", "oc -n openshift-cnv get HyperConverged/kubevirt-hyperconverged -o jsonpath='{.spec.higherWorkloadDensity}{\"\\n\"}'", "{\"memoryOvercommitPercentage\":150}", "averageSwapInPerSecond > maxAverageSwapInPagesPerSecond && averageSwapOutPerSecond > maxAverageSwapOutPagesPerSecond", "nodeWorkingSet + nodeSwapUsage < totalNodeMemory + totalSwapMemory x thresholdFactor", "oc get csv -n openshift-cnv", "VERSION REPLACES PHASE 4.9.0 kubevirt-hyperconverged-operator.v4.8.2 Installing 4.9.0 kubevirt-hyperconverged-operator.v4.9.0 Replacing", "oc get hyperconverged kubevirt-hyperconverged -n openshift-cnv -o=jsonpath='{range .status.conditions[*]}{.type}{\"\\t\"}{.status}{\"\\t\"}{.message}{\"\\n\"}{end}'", "ReconcileComplete True Reconcile completed successfully Available True Reconcile completed successfully Progressing False Reconcile completed successfully Degraded False Reconcile completed successfully Upgradeable True Reconcile completed successfully", "oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged spec: workloadUpdateStrategy: workloadUpdateMethods: 1 - LiveMigrate 2 - Evict 3 batchEvictionSize: 10 4 batchEvictionInterval: \"1m0s\" 5", "oc get vmi -l kubevirt.io/outdatedLauncherImage --all-namespaces", "oc get kv kubevirt-kubevirt-hyperconverged -o json -n openshift-cnv | jq .status.outdatedVirtualMachineInstanceWorkloads", "oc get vmi -l kubevirt.io/outdatedLauncherImage --all-namespaces", "oc get kv kubevirt-kubevirt-hyperconverged -n openshift-cnv -o jsonpath='{.spec.workloadUpdateStrategy.workloadUpdateMethods}'", "oc patch hyperconverged 
kubevirt-hyperconverged -n openshift-cnv --type json -p '[{\"op\":\"replace\",\"path\":\"/spec/workloadUpdateStrategy/workloadUpdateMethods\", \"value\":[]}]'", "hyperconverged.hco.kubevirt.io/kubevirt-hyperconverged patched", "oc get hyperconverged kubevirt-hyperconverged -n openshift-cnv -o json | jq \".status.conditions\"", "[ { \"lastTransitionTime\": \"2022-12-09T16:29:11Z\", \"message\": \"Reconcile completed successfully\", \"observedGeneration\": 3, \"reason\": \"ReconcileCompleted\", \"status\": \"True\", \"type\": \"ReconcileComplete\" }, { \"lastTransitionTime\": \"2022-12-09T20:30:10Z\", \"message\": \"Reconcile completed successfully\", \"observedGeneration\": 3, \"reason\": \"ReconcileCompleted\", \"status\": \"True\", \"type\": \"Available\" }, { \"lastTransitionTime\": \"2022-12-09T20:30:10Z\", \"message\": \"Reconcile completed successfully\", \"observedGeneration\": 3, \"reason\": \"ReconcileCompleted\", \"status\": \"False\", \"type\": \"Progressing\" }, { \"lastTransitionTime\": \"2022-12-09T16:39:11Z\", \"message\": \"Reconcile completed successfully\", \"observedGeneration\": 3, \"reason\": \"ReconcileCompleted\", \"status\": \"False\", \"type\": \"Degraded\" }, { \"lastTransitionTime\": \"2022-12-09T20:30:10Z\", \"message\": \"Reconcile completed successfully\", \"observedGeneration\": 3, \"reason\": \"ReconcileCompleted\", \"status\": \"True\", \"type\": \"Upgradeable\" 1 } ]", "oc adm upgrade", "oc get clusterversion", "oc get csv -n openshift-cnv", "oc get hyperconverged kubevirt-hyperconverged -n openshift-cnv -o json | jq \".status.versions\"", "[ { \"name\": \"operator\", \"version\": \"4.17.5\" } ]", "oc get hyperconverged kubevirt-hyperconverged -n openshift-cnv -o json | jq \".status.conditions\"", "oc get clusterversion", "oc get csv -n openshift-cnv", "oc patch hyperconverged kubevirt-hyperconverged -n openshift-cnv --type json -p \"[{\\\"op\\\":\\\"add\\\",\\\"path\\\":\\\"/spec/workloadUpdateStrategy/workloadUpdateMethods\\\", \\\"value\\\":{WorkloadUpdateMethodConfig}}]\"", "hyperconverged.hco.kubevirt.io/kubevirt-hyperconverged patched", "oc get vmim -A", "oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv", "apiVersion: hco.kubevirt.io/v1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: commonBootImageNamespace: <custom_namespace> 1", "apiVersion: instancetype.kubevirt.io/v1beta1 kind: VirtualMachineInstancetype metadata: name: example-instancetype spec: cpu: guest: 1 1 memory: guest: 128Mi 2", "virtctl create instancetype --cpu 2 --memory 256Mi", "virtctl create instancetype --cpu 2 --memory 256Mi | oc apply -f -", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: rhel-9-minimal spec: dataVolumeTemplates: - metadata: name: rhel-9-minimal-volume spec: sourceRef: kind: DataSource name: rhel9 1 namespace: openshift-virtualization-os-images 2 storage: {} instancetype: name: u1.medium 3 preference: name: rhel.9 4 running: true template: spec: domain: devices: {} volumes: - dataVolume: name: rhel-9-minimal-volume name: rootdisk", "oc create -f <vm_manifest_file>.yaml", "virtctl start <vm_name> -n <namespace>", "cat > Dockerfile << EOF FROM registry.access.redhat.com/ubi8/ubi:latest AS builder ADD --chown=107:107 <vm_image>.qcow2 /disk/ 1 RUN chmod 0440 /disk/* FROM scratch COPY --from=builder /disk/* /disk/ EOF", "podman build -t <registry>/<container_disk_name>:latest .", "podman push <registry>/<container_disk_name>:latest", "oc edit hyperconverged kubevirt-hyperconverged -n 
openshift-cnv", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: storageImport: insecureRegistries: 1 - \"private-registry-example-1:5000\" - \"private-registry-example-2:5000\"", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: creationTimestamp: null name: vm-rhel-datavolume 1 labels: kubevirt.io/vm: vm-rhel-datavolume spec: dataVolumeTemplates: - metadata: creationTimestamp: null name: rhel-dv 2 spec: sourceRef: kind: DataSource name: rhel9 namespace: openshift-virtualization-os-images storage: resources: requests: storage: 10Gi 3 instancetype: name: u1.small 4 preference: inferFromVolume: datavolumedisk1 runStrategy: Always template: metadata: creationTimestamp: null labels: kubevirt.io/vm: vm-rhel-datavolume spec: domain: devices: {} resources: {} terminationGracePeriodSeconds: 180 volumes: - dataVolume: name: rhel-dv name: datavolumedisk1 status: {}", "oc create -f vm-rhel-datavolume.yaml", "oc get pods", "oc describe dv rhel-dv 1", "virtctl console vm-rhel-datavolume", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: creationTimestamp: null name: vm-rhel-datavolume 1 labels: kubevirt.io/vm: vm-rhel-datavolume spec: dataVolumeTemplates: - metadata: creationTimestamp: null name: rhel-dv 2 spec: sourceRef: kind: DataSource name: rhel9 namespace: openshift-virtualization-os-images storage: resources: requests: storage: 10Gi 3 instancetype: name: u1.small 4 preference: inferFromVolume: datavolumedisk1 runStrategy: Always template: metadata: creationTimestamp: null labels: kubevirt.io/vm: vm-rhel-datavolume spec: domain: devices: {} resources: {} terminationGracePeriodSeconds: 180 volumes: - dataVolume: name: rhel-dv name: datavolumedisk1 status: {}", "oc create -f vm-rhel-datavolume.yaml", "oc get pods", "oc describe dv rhel-dv 1", "virtctl console vm-rhel-datavolume", "virtctl stop <my_vm_name>", "oc get vm <my_vm_name> -o jsonpath=\"{.spec.template.spec.volumes}{'\\n'}\"", "[{\"dataVolume\":{\"name\":\"<my_vm_volume>\"},\"name\":\"rootdisk\"},{\"cloudInitNoCloud\":{...}]", "oc get pvc", "NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE <my_vm_volume> Bound ...", "virtctl guestfs <my-vm-volume> --uid 107", "virt-sysprep -a disk.img", "%WINDIR%\\System32\\Sysprep\\sysprep.exe /generalize /shutdown /oobe /mode:vm", "virtctl image-upload dv <datavolume_name> \\ 1 --size=<datavolume_size> \\ 2 --image-path=</path/to/image> \\ 3", "oc get dvs", "yum install -y qemu-guest-agent", "systemctl enable --now qemu-guest-agent", "oc get vm <vm_name>", "net start", "spec: domain: devices: disks: - name: virtiocontainerdisk bootOrder: 2 1 cdrom: bus: sata volumes: - containerDisk: image: container-native-virtualization/virtio-win name: virtiocontainerdisk", "virtctl start <vm> -n <namespace>", "oc apply -f <vm.yaml>", "apiVersion: v1 kind: PersistentVolumeClaim metadata: annotations: cdi.kubevirt.io/cloneFallbackReason: The volume modes of source and target are incompatible cdi.kubevirt.io/clonePhase: Succeeded cdi.kubevirt.io/cloneType: copy", "NAMESPACE LAST SEEN TYPE REASON OBJECT MESSAGE test-ns 0s Warning IncompatibleVolumeModes persistentvolumeclaim/test-target The volume modes of source and target are incompatible", "kind: VolumeSnapshotClass apiVersion: snapshot.storage.k8s.io/v1 driver: openshift-storage.rbd.csi.ceph.com", "kind: StorageClass apiVersion: storage.k8s.io/v1 provisioner: openshift-storage.rbd.csi.ceph.com", "apiVersion: cdi.kubevirt.io/v1beta1 kind: DataVolume metadata: 
name: <datavolume> 1 spec: source: pvc: namespace: \"<source_namespace>\" 2 name: \"<my_vm_disk>\" 3 storage: {}", "oc create -f <datavolume>.yaml", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: labels: kubevirt.io/vm: vm-dv-clone name: vm-dv-clone 1 spec: running: false template: metadata: labels: kubevirt.io/vm: vm-dv-clone spec: domain: devices: disks: - disk: bus: virtio name: root-disk resources: requests: memory: 64M volumes: - dataVolume: name: favorite-clone name: root-disk dataVolumeTemplates: - metadata: name: favorite-clone spec: storage: accessModes: - ReadWriteOnce resources: requests: storage: 2Gi source: pvc: namespace: <source_namespace> 2 name: \"<source_pvc>\" 3", "oc create -f <vm-clone-datavolumetemplate>.yaml", "virtctl vnc <vm_name>", "virtctl vnc <vm_name> -v 4", "oc patch hyperconverged kubevirt-hyperconverged -n openshift-cnv --type json -p '[{\"op\": \"replace\", \"path\": \"/spec/featureGates/deployVmConsoleProxy\", \"value\": true}]'", "curl --header \"Authorization: Bearer USD{TOKEN}\" \"https://api.<cluster_fqdn>/apis/token.kubevirt.io/v1alpha1/namespaces/<namespace>/virtualmachines/<vm_name>/vnc?duration=<duration>\"", "{ \"token\": \"eyJhb...\" }", "export VNC_TOKEN=\"<token>\"", "oc login --token USD{VNC_TOKEN}", "virtctl vnc <vm_name> -n <namespace>", "virtctl delete serviceaccount --namespace \"<namespace>\" \"<vm_name>-vnc-access\"", "kubectl create rolebinding \"USD{ROLE_BINDING_NAME}\" --clusterrole=\"token.kubevirt.io:generate\" --user=\"USD{USER_NAME}\"", "kubectl create rolebinding \"USD{ROLE_BINDING_NAME}\" --clusterrole=\"token.kubevirt.io:generate\" --serviceaccount=\"USD{SERVICE_ACCOUNT_NAME}\"", "virtctl console <vm_name>", "virtctl create vm --instancetype <my_instancetype> --preference <my_preference>", "virtctl create vm --instancetype virtualmachineinstancetype/<my_instancetype> --preference virtualmachinepreference/<my_preference>", "virtctl create vm --volume-import type:pvc,src:my-ns/my-pvc --infer-instancetype --infer-preference", "oc label DataSource foo instancetype.kubevirt.io/default-instancetype=<my_instancetype>", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: example-vm namespace: example-namespace spec: dataVolumeTemplates: - metadata: name: example-vm-volume spec: sourceRef: kind: DataSource name: rhel9 namespace: openshift-virtualization-os-images storage: resources: {} instancetype: name: u1.medium preference: name: rhel.9 running: true template: spec: domain: devices: {} volumes: - dataVolume: name: example-vm-volume name: rootdisk - cloudInitNoCloud: 1 userData: |- #cloud-config user: cloud-user name: cloudinitdisk accessCredentials: - sshPublicKey: propagationMethod: noCloud: {} source: secret: secretName: authorized-keys 2 --- apiVersion: v1 kind: Secret metadata: name: authorized-keys data: key: c3NoLXJzYSB... 
3", "oc create -f <manifest_file>.yaml", "virtctl start vm example-vm -n example-namespace", "oc describe vm example-vm -n example-namespace", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: example-vm namespace: example-namespace spec: template: spec: accessCredentials: - sshPublicKey: propagationMethod: noCloud: {} source: secret: secretName: authorized-keys", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: example-vm namespace: example-namespace spec: dataVolumeTemplates: - metadata: name: example-vm-volume spec: sourceRef: kind: DataSource name: rhel9 namespace: openshift-virtualization-os-images storage: resources: {} instancetype: name: u1.medium preference: name: rhel.9 running: true template: spec: domain: devices: {} volumes: - dataVolume: name: example-vm-volume name: rootdisk - cloudInitNoCloud: 1 userData: |- #cloud-config runcmd: - [ setsebool, -P, virt_qemu_ga_manage_ssh, on ] name: cloudinitdisk accessCredentials: - sshPublicKey: propagationMethod: qemuGuestAgent: users: [\"cloud-user\"] source: secret: secretName: authorized-keys 2 --- apiVersion: v1 kind: Secret metadata: name: authorized-keys data: key: c3NoLXJzYSB... 3", "oc create -f <manifest_file>.yaml", "virtctl start vm example-vm -n example-namespace", "oc describe vm example-vm -n example-namespace", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: example-vm namespace: example-namespace spec: template: spec: accessCredentials: - sshPublicKey: propagationMethod: qemuGuestAgent: users: [\"cloud-user\"] source: secret: secretName: authorized-keys", "virtctl -n <namespace> ssh <username>@example-vm -i <ssh_key> 1", "virtctl -n my-namespace ssh cloud-user@example-vm -i my-key", "Host vm/* ProxyCommand virtctl port-forward --stdio=true %h %p", "ssh <user>@vm/<vm_name>.<namespace>", "virtctl expose vm <vm_name> --name <service_name> --type <service_type> --port <port> 1", "virtctl expose vm example-vm --name example-service --type NodePort --port 22", "oc get service", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: example-vm namespace: example-namespace spec: running: false template: metadata: labels: special: key 1", "apiVersion: v1 kind: Service metadata: name: example-service namespace: example-namespace spec: selector: special: key 1 type: NodePort 2 ports: 3 protocol: TCP port: 80 targetPort: 9376 nodePort: 30000", "oc create -f example-service.yaml", "oc get service -n example-namespace", "ssh <user_name>@<ip_address> -p <port> 1", "oc describe vm <vm_name> -n <namespace>", "Interfaces: Interface Name: eth0 Ip Address: 10.244.0.37/24 Ip Addresses: 10.244.0.37/24 fe80::858:aff:fef4:25/64 Mac: 0a:58:0a:f4:00:25 Name: default", "ssh <user_name>@<ip_address> -i <ssh_key>", "ssh [email protected] -i ~/.ssh/id_rsa_cloud-user", "oc edit vm <vm_name>", "oc apply vm <vm_name> -n <namespace>", "oc edit vm <vm_name> -n <namespace>", "disks: - bootOrder: 1 1 disk: bus: virtio name: containerdisk - disk: bus: virtio name: cloudinitdisk - cdrom: bus: virtio name: cd-drive-1 interfaces: - boot Order: 2 2 macAddress: '02:96:c4:00:00' masquerade: {} name: default", "oc delete vm <vm_name>", "apiVersion: export.kubevirt.io/v1beta1 kind: VirtualMachineExport metadata: name: example-export spec: source: apiGroup: \"kubevirt.io\" 1 kind: VirtualMachine 2 name: example-vm ttlDuration: 1h 3", "oc create -f example-export.yaml", "oc get vmexport example-export -o yaml", "apiVersion: export.kubevirt.io/v1beta1 kind: VirtualMachineExport metadata: name: example-export 
namespace: example spec: source: apiGroup: \"\" kind: PersistentVolumeClaim name: example-pvc tokenSecretRef: example-token status: conditions: - lastProbeTime: null lastTransitionTime: \"2022-06-21T14:10:09Z\" reason: podReady status: \"True\" type: Ready - lastProbeTime: null lastTransitionTime: \"2022-06-21T14:09:02Z\" reason: pvcBound status: \"True\" type: PVCReady links: external: 1 cert: |- -----BEGIN CERTIFICATE----- -----END CERTIFICATE----- volumes: - formats: - format: raw url: https://vmexport-proxy.test.net/api/export.kubevirt.io/v1beta1/namespaces/example/virtualmachineexports/example-export/volumes/example-disk/disk.img - format: gzip url: https://vmexport-proxy.test.net/api/export.kubevirt.io/v1beta1/namespaces/example/virtualmachineexports/example-export/volumes/example-disk/disk.img.gz name: example-disk internal: 2 cert: |- -----BEGIN CERTIFICATE----- -----END CERTIFICATE----- volumes: - formats: - format: raw url: https://virt-export-example-export.example.svc/volumes/example-disk/disk.img - format: gzip url: https://virt-export-example-export.example.svc/volumes/example-disk/disk.img.gz name: example-disk phase: Ready serviceName: virt-export-example-export", "oc get vmexport <export_name> -o jsonpath={.status.links.external.cert} > cacert.crt 1", "oc get secret export-token-<export_name> -o jsonpath={.data.token} | base64 --decode > token_decode 1", "oc get vmexport <export_name> -o yaml", "apiVersion: export.kubevirt.io/v1beta1 kind: VirtualMachineExport metadata: name: example-export spec: source: apiGroup: \"kubevirt.io\" kind: VirtualMachine name: example-vm tokenSecretRef: example-token status: # links: external: # manifests: - type: all url: https://vmexport-proxy.test.net/api/export.kubevirt.io/v1beta1/namespaces/example/virtualmachineexports/example-export/external/manifests/all 1 - type: auth-header-secret url: https://vmexport-proxy.test.net/api/export.kubevirt.io/v1beta1/namespaces/example/virtualmachineexports/example-export/external/manifests/secret 2 internal: # manifests: - type: all url: https://virt-export-export-pvc.default.svc/internal/manifests/all 3 - type: auth-header-secret url: https://virt-export-export-pvc.default.svc/internal/manifests/secret phase: Ready serviceName: virt-export-example-export", "curl --cacert cacert.crt <secret_manifest_url> -H \\ 1 \"x-kubevirt-export-token:token_decode\" -H \\ 2 \"Accept:application/yaml\"", "curl --cacert cacert.crt https://vmexport-proxy.test.net/api/export.kubevirt.io/v1beta1/namespaces/example/virtualmachineexports/example-export/external/manifests/secret -H \"x-kubevirt-export-token:token_decode\" -H \"Accept:application/yaml\"", "curl --cacert cacert.crt <all_manifest_url> -H \\ 1 \"x-kubevirt-export-token:token_decode\" -H \\ 2 \"Accept:application/yaml\"", "curl --cacert cacert.crt https://vmexport-proxy.test.net/api/export.kubevirt.io/v1beta1/namespaces/example/virtualmachineexports/example-export/external/manifests/all -H \"x-kubevirt-export-token:token_decode\" -H \"Accept:application/yaml\"", "oc get vmis -A", "oc delete vmi <vmi_name>", "kind: HyperConverged metadata: name: kubevirt-hyperconverged spec: vmStateStorageClass: <storage_class_name>", "oc edit vm <vm_name> -n <namespace>", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: example-vm spec: template: spec: domain: devices: tpm: 1 persistent: true 2", "apiVersion: tekton.dev/v1 kind: PipelineRun metadata: generateName: windows11-installer-run- labels: pipelinerun: windows11-installer-run spec: params: - name: 
winImageDownloadURL value: <windows_image_download_url> 1 - name: acceptEula value: false 2 pipelineRef: params: - name: catalog value: redhat-pipelines - name: type value: artifact - name: kind value: pipeline - name: name value: windows-efi-installer - name: version value: 4.17 resolver: hub taskRunSpecs: - pipelineTaskName: modify-windows-iso-file PodTemplate: securityContext: fsGroup: 107 runAsUser: 107", "oc apply -f windows11-customize-run.yaml", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: with-limits spec: running: false template: spec: domain: resources: requests: memory: 128Mi limits: memory: 256Mi 1", "apiVersion: aaq.kubevirt.io/v1alpha1 kind: ApplicationAwareResourceQuota metadata: name: example-resource-quota spec: hard: requests.memory: 1Gi limits.memory: 1Gi requests.cpu/vmi: \"1\" 1 requests.memory/vmi: 1Gi 2", "apiVersion: aaq.kubevirt.io/v1alpha1 kind: ApplicationAwareClusterResourceQuota 1 metadata: name: example-resource-quota spec: quota: hard: requests.memory: 1Gi limits.memory: 1Gi requests.cpu/vmi: \"1\" requests.memory/vmi: 1Gi selector: annotations: null labels: matchLabels: kubernetes.io/metadata.name: default", "oc patch hco kubevirt-hyperconverged -n openshift-cnv --type json -p '[{\"op\": \"add\", \"path\": \"/spec/featureGates/enableApplicationAwareQuota\", \"value\": true}]'", "oc patch hco kubevirt-hyperconverged -n openshift-cnv --type merge -p '{ \"spec\": { \"applicationAwareConfig\": { \"vmiCalcConfigName\": \"DedicatedVirtualResources\", \"namespaceSelector\": { \"matchLabels\": { \"app\": \"my-app\" } }, \"allowApplicationAwareClusterResourceQuota\": true } } }'", "metadata: name: example-vm-node-selector apiVersion: kubevirt.io/v1 kind: VirtualMachine spec: template: spec: nodeSelector: example-key-1: example-value-1 example-key-2: example-value-2", "metadata: name: example-vm-pod-affinity apiVersion: kubevirt.io/v1 kind: VirtualMachine spec: template: spec: affinity: podAffinity: requiredDuringSchedulingIgnoredDuringExecution: 1 - labelSelector: matchExpressions: - key: example-key-1 operator: In values: - example-value-1 topologyKey: kubernetes.io/hostname podAntiAffinity: preferredDuringSchedulingIgnoredDuringExecution: 2 - weight: 100 podAffinityTerm: labelSelector: matchExpressions: - key: example-key-2 operator: In values: - example-value-2 topologyKey: kubernetes.io/hostname", "metadata: name: example-vm-node-affinity apiVersion: kubevirt.io/v1 kind: VirtualMachine spec: template: spec: affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: 1 nodeSelectorTerms: - matchExpressions: - key: example.io/example-key operator: In values: - example-value-1 - example-value-2 preferredDuringSchedulingIgnoredDuringExecution: 2 - weight: 1 preference: matchExpressions: - key: example-node-label-key operator: In values: - example-node-label-value", "metadata: name: example-vm-tolerations apiVersion: kubevirt.io/v1 kind: VirtualMachine spec: tolerations: - key: \"key\" operator: \"Equal\" value: \"virtualization\" effect: \"NoSchedule\"", "oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: configuration: ksmConfiguration: nodeLabelSelector: {}", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: configuration: ksmConfiguration: nodeLabelSelector: matchLabels: <first_example_key>: \"true\" 
<second_example_key>: \"true\"", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: configuration:", "oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: certConfig: ca: duration: 48h0m0s renewBefore: 24h0m0s 1 server: duration: 24h0m0s 2 renewBefore: 12h0m0s 3", "certConfig: ca: duration: 4h0m0s renewBefore: 1h0m0s server: duration: 4h0m0s renewBefore: 4h0m0s", "error: hyperconvergeds.hco.kubevirt.io \"kubevirt-hyperconverged\" could not be patched: admission webhook \"validate-hco.kubevirt.io\" denied the request: spec.certConfig: ca.duration is smaller than server.duration", "oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: defaultCPUModel: \"EPYC\"", "apiversion: kubevirt.io/v1 kind: VirtualMachine metadata: labels: special: vm-secureboot name: vm-secureboot spec: template: metadata: labels: special: vm-secureboot spec: domain: devices: disks: - disk: bus: virtio name: containerdisk features: acpi: {} smm: enabled: true 1 firmware: bootloader: efi: secureBoot: true 2", "oc create -f <file_name>.yaml", "oc patch hyperconverged kubevirt-hyperconverged -n openshift-cnv --type json -p '[{\"op\":\"replace\",\"path\":\"/spec/featureGates/VMPersistentState\", \"value\": true}]'", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: vm spec: template: spec: domain: firmware: bootloader: efi: persistent: true", "apiVersion: \"k8s.cni.cncf.io/v1\" kind: NetworkAttachmentDefinition metadata: name: pxe-net-conf 1 spec: config: | { \"cniVersion\": \"0.3.1\", \"name\": \"pxe-net-conf\", 2 \"type\": \"bridge\", 3 \"bridge\": \"bridge-interface\", 4 \"macspoofchk\": false, 5 \"vlan\": 100, 6 \"disableContainerInterface\": true, \"preserveDefaultVlan\": false 7 }", "oc create -f pxe-net-conf.yaml", "interfaces: - masquerade: {} name: default - bridge: {} name: pxe-net macAddress: de:00:00:00:00:de bootOrder: 1", "devices: disks: - disk: bus: virtio name: containerdisk bootOrder: 2", "networks: - name: default pod: {} - name: pxe-net multus: networkName: pxe-net-conf", "oc create -f vmi-pxe-boot.yaml", "virtualmachineinstance.kubevirt.io \"vmi-pxe-boot\" created", "oc get vmi vmi-pxe-boot -o yaml | grep -i phase phase: Running", "virtctl vnc vmi-pxe-boot", "virtctl console vmi-pxe-boot", "ip addr", "3. 
eth1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000 link/ether de:00:00:00:00:de brd ff:ff:ff:ff:ff:ff", "kind: VirtualMachine spec: domain: resources: requests: memory: \"4Gi\" 1 memory: hugepages: pageSize: \"1Gi\" 2", "oc apply -f <virtual_machine>.yaml", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: myvm spec: template: spec: domain: cpu: features: - name: apic 1 policy: require 2", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: myvm spec: template: spec: domain: cpu: model: Conroe 1", "apiVersion: kubevirt/v1alpha3 kind: VirtualMachine metadata: name: myvm spec: template: spec: domain: cpu: model: host-model 1", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: vm-fedora spec: running: true template: spec: schedulerName: my-scheduler 1 domain: devices: disks: - name: containerdisk disk: bus: virtio", "oc get pods", "NAME READY STATUS RESTARTS AGE virt-launcher-vm-fedora-dpc87 2/2 Running 0 24m", "oc describe pod virt-launcher-vm-fedora-dpc87", "[...] Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 21m my-scheduler Successfully assigned default/virt-launcher-vm-fedora-dpc87 to node01 [...]", "oc label node <node_name> nvidia.com/gpu.deploy.operands=false 1", "oc describe node <node_name>", "oc get pods -n nvidia-gpu-operator", "NAME READY STATUS RESTARTS AGE gpu-operator-59469b8c5c-hw9wj 1/1 Running 0 8d nvidia-sandbox-validator-7hx98 1/1 Running 0 8d nvidia-sandbox-validator-hdb7p 1/1 Running 0 8d nvidia-sandbox-validator-kxwj7 1/1 Terminating 0 9d nvidia-vfio-manager-7w9fs 1/1 Running 0 8d nvidia-vfio-manager-866pz 1/1 Running 0 8d nvidia-vfio-manager-zqtck 1/1 Terminating 0 9d", "oc get pods -n nvidia-gpu-operator", "NAME READY STATUS RESTARTS AGE gpu-operator-59469b8c5c-hw9wj 1/1 Running 0 8d nvidia-sandbox-validator-7hx98 1/1 Running 0 8d nvidia-sandbox-validator-hdb7p 1/1 Running 0 8d nvidia-vfio-manager-7w9fs 1/1 Running 0 8d nvidia-vfio-manager-866pz 1/1 Running 0 8d", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker 1 name: 100-worker-iommu 2 spec: config: ignition: version: 3.2.0 kernelArguments: - intel_iommu=on 3", "oc create -f 100-worker-kernel-arg-iommu.yaml", "oc get MachineConfig", "lspci -nnv | grep -i nvidia", "02:01.0 3D controller [0302]: NVIDIA Corporation GV100GL [Tesla V100 PCIe 32GB] [10de:1eb8] (rev a1)", "variant: openshift version: 4.17.0 metadata: name: 100-worker-vfiopci labels: machineconfiguration.openshift.io/role: worker 1 storage: files: - path: /etc/modprobe.d/vfio.conf mode: 0644 overwrite: true contents: inline: | options vfio-pci ids=10de:1eb8 2 - path: /etc/modules-load.d/vfio-pci.conf 3 mode: 0644 overwrite: true contents: inline: vfio-pci", "butane 100-worker-vfiopci.bu -o 100-worker-vfiopci.yaml", "oc apply -f 100-worker-vfiopci.yaml", "oc get MachineConfig", "NAME GENERATEDBYCONTROLLER IGNITIONVERSION AGE 00-master d3da910bfa9f4b599af4ed7f5ac270d55950a3a1 3.2.0 25h 00-worker d3da910bfa9f4b599af4ed7f5ac270d55950a3a1 3.2.0 25h 01-master-container-runtime d3da910bfa9f4b599af4ed7f5ac270d55950a3a1 3.2.0 25h 01-master-kubelet d3da910bfa9f4b599af4ed7f5ac270d55950a3a1 3.2.0 25h 01-worker-container-runtime d3da910bfa9f4b599af4ed7f5ac270d55950a3a1 3.2.0 25h 01-worker-kubelet d3da910bfa9f4b599af4ed7f5ac270d55950a3a1 3.2.0 25h 100-worker-iommu 3.2.0 30s 100-worker-vfiopci-configuration 3.2.0 30s", "lspci -nnk -d 10de:", "04:00.0 3D controller [0302]: 
NVIDIA Corporation GP102GL [Tesla P40] [10de:1eb8] (rev a1) Subsystem: NVIDIA Corporation Device [10de:1eb8] Kernel driver in use: vfio-pci Kernel modules: nouveau", "oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv", "apiVersion: hco.kubevirt.io/v1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: permittedHostDevices: 1 pciHostDevices: 2 - pciDeviceSelector: \"10DE:1DB6\" 3 resourceName: \"nvidia.com/GV100GL_Tesla_V100\" 4 - pciDeviceSelector: \"10DE:1EB8\" resourceName: \"nvidia.com/TU104GL_Tesla_T4\" - pciDeviceSelector: \"8086:6F54\" resourceName: \"intel.com/qat\" externalResourceProvider: true 5", "oc describe node <node_name>", "Capacity: cpu: 64 devices.kubevirt.io/kvm: 110 devices.kubevirt.io/tun: 110 devices.kubevirt.io/vhost-net: 110 ephemeral-storage: 915128Mi hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 131395264Ki nvidia.com/GV100GL_Tesla_V100 1 nvidia.com/TU104GL_Tesla_T4 1 intel.com/qat: 1 pods: 250 Allocatable: cpu: 63500m devices.kubevirt.io/kvm: 110 devices.kubevirt.io/tun: 110 devices.kubevirt.io/vhost-net: 110 ephemeral-storage: 863623130526 hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 130244288Ki nvidia.com/GV100GL_Tesla_V100 1 nvidia.com/TU104GL_Tesla_T4 1 intel.com/qat: 1 pods: 250", "oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv", "apiVersion: hco.kubevirt.io/v1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: permittedHostDevices: pciHostDevices: - pciDeviceSelector: \"10DE:1DB6\" resourceName: \"nvidia.com/GV100GL_Tesla_V100\" - pciDeviceSelector: \"10DE:1EB8\" resourceName: \"nvidia.com/TU104GL_Tesla_T4\"", "oc describe node <node_name>", "Capacity: cpu: 64 devices.kubevirt.io/kvm: 110 devices.kubevirt.io/tun: 110 devices.kubevirt.io/vhost-net: 110 ephemeral-storage: 915128Mi hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 131395264Ki nvidia.com/GV100GL_Tesla_V100 1 nvidia.com/TU104GL_Tesla_T4 1 intel.com/qat: 0 pods: 250 Allocatable: cpu: 63500m devices.kubevirt.io/kvm: 110 devices.kubevirt.io/tun: 110 devices.kubevirt.io/vhost-net: 110 ephemeral-storage: 863623130526 hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 130244288Ki nvidia.com/GV100GL_Tesla_V100 1 nvidia.com/TU104GL_Tesla_T4 1 intel.com/qat: 0 pods: 250", "apiVersion: kubevirt.io/v1 kind: VirtualMachine spec: domain: devices: hostDevices: - deviceName: nvidia.com/TU104GL_Tesla_T4 1 name: hostdevices1", "lspci -nnk | grep NVIDIA", "02:01.0 3D controller [0302]: NVIDIA Corporation GV100GL [Tesla V100 PCIe 32GB] [10de:1eb8] (rev a1)", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker 1 name: 100-worker-iommu 2 spec: config: ignition: version: 3.2.0 kernelArguments: - intel_iommu=on 3", "oc create -f 100-worker-kernel-arg-iommu.yaml", "oc get MachineConfig", "kind: ClusterPolicy apiVersion: nvidia.com/v1 metadata: name: gpu-cluster-policy spec: operator: defaultRuntime: crio use_ocp_driver_toolkit: true initContainer: {} sandboxWorkloads: enabled: true defaultWorkload: vm-vgpu driver: enabled: false 1 dcgmExporter: {} dcgm: enabled: true daemonsets: {} devicePlugin: {} gfd: {} migManager: enabled: true nodeStatusExporter: enabled: true mig: strategy: single toolkit: enabled: true validator: plugin: env: - name: WITH_WORKLOAD value: \"true\" vgpuManager: enabled: true 2 repository: <vgpu_container_registry> 3 image: <vgpu_image_name> version: nvidia-vgpu-manager vgpuDeviceManager: enabled: false 4 config: name: 
vgpu-devices-config default: default sandboxDevicePlugin: enabled: false 5 vfioManager: enabled: false 6", "mediatedDevicesConfiguration: mediatedDeviceTypes: - nvidia-222 - nvidia-228 - nvidia-105 - nvidia-108", "nvidia-105 nvidia-108 nvidia-217 nvidia-299", "mediatedDevicesConfiguration: mediatedDeviceTypes: - nvidia-22 - nvidia-223 - nvidia-224", "oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv", "apiVersion: hco.kubevirt.io/v1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: mediatedDevicesConfiguration: mediatedDeviceTypes: - nvidia-231 nodeMediatedDeviceTypes: - mediatedDeviceTypes: - nvidia-233 nodeSelector: kubernetes.io/hostname: node-11.redhat.com permittedHostDevices: mediatedDevices: - mdevNameSelector: GRID T4-2Q resourceName: nvidia.com/GRID_T4-2Q - mdevNameSelector: GRID T4-8Q resourceName: nvidia.com/GRID_T4-8Q", "spec: mediatedDevicesConfiguration: mediatedDeviceTypes: 1 - <device_type> nodeMediatedDeviceTypes: 2 - mediatedDeviceTypes: 3 - <device_type> nodeSelector: 4 <node_selector_key>: <node_selector_value>", "oc get USDNODE -o json | jq '.status.allocatable | with_entries(select(.key | startswith(\"nvidia.com/\"))) | with_entries(select(.value != \"0\"))'", "permittedHostDevices: mediatedDevices: - mdevNameSelector: GRID T4-2Q 1 resourceName: nvidia.com/GRID_T4-2Q 2", "oc describe node <node_name>", "oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv", "apiVersion: hco.kubevirt.io/v1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: mediatedDevicesConfiguration: mediatedDeviceTypes: 1 - nvidia-231 permittedHostDevices: mediatedDevices: 2 - mdevNameSelector: GRID T4-2Q resourceName: nvidia.com/GRID_T4-2Q", "apiVersion: kubevirt.io/v1 kind: VirtualMachine spec: domain: devices: gpus: - deviceName: nvidia.com/TU104GL_Tesla_T4 1 name: gpu1 2 - deviceName: nvidia.com/GRID_T4-2Q name: gpu2", "lspci -nnk | grep <device_name>", "lsusb", "oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: {CNVNamespace} spec: configuration: permittedHostDevices: 1 usbHostDevices: 2 - resourceName: kubevirt.io/peripherals 3 selectors: - vendor: \"045e\" product: \"07a5\" - vendor: \"062a\" product: \"4102\" - vendor: \"072f\" product: \"b100\"", "oc /dev/serial/by-id/usb-VENDOR_device_name", "oc edit vmi vmi-usb", "apiVersion: kubevirt.io/v1 kind: VirtualMachineInstance metadata: labels: special: vmi-usb name: vmi-usb 1 spec: domain: devices: hostDevices: - deviceName: kubevirt.io/peripherals name: local-peripherals", "apiVersion: kubevirt.io/v1 kind: VirtualMachine spec: template: metadata: annotations: descheduler.alpha.kubernetes.io/evict: \"true\"", "apiVersion: operator.openshift.io/v1 kind: KubeDescheduler metadata: name: cluster namespace: openshift-kube-descheduler-operator spec: deschedulingIntervalSeconds: 3600 profiles: - LongLifecycle 1 mode: Predictive 2 profileCustomizations: devEnableEvictionsInBackground: true 3", "oc patch hyperconverged kubevirt-hyperconverged -n openshift-cnv --type=json -p='[{\"op\": \"add\", \"path\": \"/spec/tuningPolicy\", \"value\": \"highBurst\"}]'", "oc get kubevirt.kubevirt.io/kubevirt-kubevirt-hyperconverged -n openshift-cnv -o go-template --template='{{range USDconfig, USDvalue := .spec.configuration}} {{if eq USDconfig \"apiConfiguration\" \"webhookConfiguration\" \"controllerConfiguration\" 
\"handlerConfiguration\"}} {{\"\\n\"}} {{USDconfig}} = {{USDvalue}} {{end}} {{end}} {{\"\\n\"}}", "oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv", "spec: resourceRequirements: vmiCPUAllocationRatio: 1 1", "apiVersion: kubevirt.io/v1 kind: VM spec: domain: devices: networkInterfaceMultiqueue: true", "virtctl addvolume <virtual-machine|virtual-machine-instance> --volume-name=<datavolume|PVC> [--persist] [--serial=<label-name>]", "virtctl removevolume <virtual-machine|virtual-machine-instance> --volume-name=<datavolume|PVC>", "oc edit pvc <pvc_name>", "apiVersion: v1 kind: PersistentVolumeClaim metadata: name: vm-disk-expand spec: accessModes: - ReadWriteMany resources: requests: storage: 3Gi 1", "apiVersion: cdi.kubevirt.io/v1beta1 kind: DataVolume metadata: name: blank-image-datavolume spec: source: blank: {} storage: resources: requests: storage: <2Gi> 1 storageClassName: \"<storage_class>\" 2", "oc create -f <blank-image-datavolume>.yaml", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: <vm_name> spec: template: spec: domain: devices: disks: - disk: bus: virtio name: rootdisk errorPolicy: report 1 disk1: disk_one 2 - disk: bus: virtio name: cloudinitdisk disk2: disk_two shareable: true 3 interfaces: - masquerade: {} name: default", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: vm-0 spec: template: spec: domain: devices: disks: - disk: bus: sata name: rootdisk - errorPolicy: report 1 lun: 2 bus: scsi reservation: true 3 name: na-shared serial: shared1234 volumes: - dataVolume: name: vm-0 name: rootdisk - name: na-shared persistentVolumeClaim: claimName: pvc-na-share", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: vm-0 spec: template: spec: domain: devices: disks: - disk: bus: sata name: rootdisk - errorPolicy: report lun: 1 bus: scsi reservation: true 2 name: na-shared serial: shared1234 volumes: - dataVolume: name: vm-0 name: rootdisk - name: na-shared persistentVolumeClaim: claimName: pvc-na-share", "oc patch hyperconverged kubevirt-hyperconverged -n openshift-cnv --type json -p '[{\"op\":\"replace\",\"path\":\"/spec/featureGates/persistentReservation\", \"value\": true}]'", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: example-vm spec: template: spec: domain: devices: interfaces: - name: default masquerade: {} 1 ports: 2 - port: 80 networks: - name: default pod: {}", "oc create -f <vm-name>.yaml", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: example-vm-ipv6 spec: template: spec: domain: devices: interfaces: - name: default masquerade: {} 1 ports: - port: 80 2 networks: - name: default pod: {} volumes: - cloudInitNoCloud: networkData: | version: 2 ethernets: eth0: dhcp4: true addresses: [ fd10:0:2::2/120 ] 3 gateway6: fd10:0:2::1 4", "oc create -f example-vm-ipv6.yaml", "oc get vmi <vmi-name> -o jsonpath=\"{.status.interfaces[*].ipAddresses}\"", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: example-vm namespace: example-namespace spec: running: false template: metadata: labels: special: key 1", "apiVersion: v1 kind: Service metadata: name: example-service namespace: example-namespace spec: selector: special: key 1 type: NodePort 2 ports: 3 protocol: TCP port: 80 targetPort: 9376 nodePort: 30000", "oc create -f example-service.yaml", "oc get service -n example-namespace", "apiVersion: v1 kind: Service metadata: name: mysubdomain 1 spec: selector: expose: me 2 clusterIP: None 3 ports: 4 - protocol: TCP port: 1234 targetPort: 1234", "oc create -f 
headless_service.yaml", "oc edit vm <vm_name>", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: vm-fedora spec: template: metadata: labels: expose: me 1 spec: hostname: \"myvm\" 2 subdomain: \"mysubdomain\" 3", "virtctl console vm-fedora", "ping myvm.mysubdomain.<namespace>.svc.cluster.local", "PING myvm.mysubdomain.default.svc.cluster.local (10.244.0.57) 56(84) bytes of data. 64 bytes from myvm.mysubdomain.default.svc.cluster.local (10.244.0.57): icmp_seq=1 ttl=64 time=0.029 ms", "apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: br1-eth1-policy 1 spec: desiredState: interfaces: - name: br1 2 description: Linux bridge with eth1 as a port 3 type: linux-bridge 4 state: up 5 ipv4: enabled: false 6 bridge: options: stp: enabled: false 7 port: - name: eth1 8", "apiVersion: \"k8s.cni.cncf.io/v1\" kind: NetworkAttachmentDefinition metadata: name: bridge-network 1 annotations: k8s.v1.cni.cncf.io/resourceName: bridge.network.kubevirt.io/br1 2 spec: config: | { \"cniVersion\": \"0.3.1\", \"name\": \"bridge-network\", 3 \"type\": \"bridge\", 4 \"bridge\": \"br1\", 5 \"macspoofchk\": false, 6 \"vlan\": 100, 7 \"disableContainerInterface\": true, \"preserveDefaultVlan\": false 8 }", "oc create -f network-attachment-definition.yaml 1", "oc get network-attachment-definition bridge-network", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: example-vm spec: template: spec: domain: devices: interfaces: - bridge: {} name: bridge-net 1 networks: - name: bridge-net 2 multus: networkName: a-bridge-network 3", "oc apply -f example-vm.yaml", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: <name> 1 namespace: openshift-sriov-network-operator 2 spec: resourceName: <sriov_resource_name> 3 nodeSelector: feature.node.kubernetes.io/network-sriov.capable: \"true\" 4 priority: <priority> 5 mtu: <mtu> 6 numVfs: <num> 7 nicSelector: 8 vendor: \"<vendor_code>\" 9 deviceID: \"<device_id>\" 10 pfNames: [\"<pf_name>\", ...] 
11 rootDevices: [\"<pci_bus_id>\", \"...\"] 12 deviceType: vfio-pci 13 isRdma: false 14", "oc create -f <name>-sriov-node-network.yaml", "oc get sriovnetworknodestates -n openshift-sriov-network-operator <node_name> -o jsonpath='{.status.syncStatus}'", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: <name> 1 namespace: openshift-sriov-network-operator 2 spec: resourceName: <sriov_resource_name> 3 networkNamespace: <target_namespace> 4 vlan: <vlan> 5 spoofChk: \"<spoof_check>\" 6 linkState: <link_state> 7 maxTxRate: <max_tx_rate> 8 minTxRate: <min_rx_rate> 9 vlanQoS: <vlan_qos> 10 trust: \"<trust_vf>\" 11 capabilities: <capabilities> 12", "oc create -f <name>-sriov-network.yaml", "oc get net-attach-def -n <namespace>", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: example-vm spec: domain: devices: interfaces: - name: nic1 1 sriov: {} networks: - name: nic1 2 multus: networkName: sriov-network 3", "oc apply -f <vm_sriov>.yaml 1", "oc label node <node_name> node-role.kubernetes.io/worker-dpdk=\"\"", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: worker-dpdk labels: machineconfiguration.openshift.io/role: worker-dpdk spec: machineConfigSelector: matchExpressions: - key: machineconfiguration.openshift.io/role operator: In values: - worker - worker-dpdk nodeSelector: matchLabels: node-role.kubernetes.io/worker-dpdk: \"\"", "apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: profile-1 spec: cpu: isolated: 4-39,44-79 reserved: 0-3,40-43 globallyDisableIrqLoadBalancing: true hugepages: defaultHugepagesSize: 1G pages: - count: 8 node: 0 size: 1G net: userLevelNetworking: true nodeSelector: node-role.kubernetes.io/worker-dpdk: \"\" numa: topologyPolicy: single-numa-node", "oc get performanceprofiles.performance.openshift.io profile-1 -o=jsonpath='{.status.runtimeClass}{\"\\n\"}'", "oc patch hyperconverged kubevirt-hyperconverged -n openshift-cnv --type='json' -p='[{\"op\": \"add\", \"path\": \"/spec/defaultRuntimeClass\", \"value\":\"<runtimeclass-name>\"}]'", "oc patch hyperconverged kubevirt-hyperconverged -n openshift-cnv --type='json' -p='[{\"op\": \"replace\", \"path\": \"/spec/featureGates/alignCPUs\", \"value\": true}]'", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: policy-1 namespace: openshift-sriov-network-operator spec: resourceName: intel_nics_dpdk deviceType: vfio-pci mtu: 9000 numVfs: 4 priority: 99 nicSelector: vendor: \"8086\" deviceID: \"1572\" pfNames: - eno3 rootDevices: - \"0000:19:00.2\" nodeSelector: feature.node.kubernetes.io/network-sriov.capable: \"true\"", "oc label node <node_name> node-role.kubernetes.io/worker-dpdk-", "oc delete mcp worker-dpdk", "oc create ns dpdk-checkup-ns", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: dpdk-sriovnetwork namespace: openshift-sriov-network-operator spec: ipam: | { \"type\": \"host-local\", \"subnet\": \"10.56.217.0/24\", \"rangeStart\": \"10.56.217.171\", \"rangeEnd\": \"10.56.217.181\", \"routes\": [{ \"dst\": \"0.0.0.0/0\" }], \"gateway\": \"10.56.217.1\" } networkNamespace: dpdk-checkup-ns 1 resourceName: intel_nics_dpdk 2 spoofChk: \"off\" trust: \"on\" vlan: 1019", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: rhel-dpdk-vm spec: running: true template: metadata: annotations: cpu-load-balancing.crio.io: disable 1 cpu-quota.crio.io: disable 2 irq-load-balancing.crio.io: disable 3 spec: domain: cpu: sockets: 1 4 cores: 5 5 
threads: 2 dedicatedCpuPlacement: true isolateEmulatorThread: true interfaces: - masquerade: {} name: default - model: virtio name: nic-east pciAddress: '0000:07:00.0' sriov: {} networkInterfaceMultiqueue: true rng: {} memory: hugepages: pageSize: 1Gi 6 guest: 8Gi networks: - name: default pod: {} - multus: networkName: dpdk-net 7 name: nic-east", "oc apply -f <file_name>.yaml", "grubby --update-kernel=ALL --args=\"default_hugepagesz=1GB hugepagesz=1G hugepages=8\"", "dnf install -y tuned-profiles-cpu-partitioning", "echo isolated_cores=2-9 > /etc/tuned/cpu-partitioning-variables.conf", "tuned-adm profile cpu-partitioning", "dnf install -y driverctl", "driverctl set-override 0000:07:00.0 vfio-pci", "apiVersion: k8s.cni.cncf.io/v1 kind: NetworkAttachmentDefinition metadata: name: l2-network namespace: my-namespace spec: config: |- { \"cniVersion\": \"0.3.1\", 1 \"name\": \"my-namespace-l2-network\", 2 \"type\": \"ovn-k8s-cni-overlay\", 3 \"topology\":\"layer2\", 4 \"mtu\": 1300, 5 \"netAttachDefName\": \"my-namespace/l2-network\" 6 }", "oc apply -f <filename>.yaml", "apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: mapping 1 spec: nodeSelector: node-role.kubernetes.io/worker: '' 2 desiredState: ovn: bridge-mappings: - localnet: localnet-network 3 bridge: br-ex 4 state: present 5", "apiVersion: k8s.cni.cncf.io/v1 kind: NetworkAttachmentDefinition metadata: name: localnet-network namespace: default spec: config: |- { \"cniVersion\": \"0.3.1\", 1 \"name\": \"localnet-network\", 2 \"type\": \"ovn-k8s-cni-overlay\", 3 \"topology\": \"localnet\", 4 \"netAttachDefName\": \"default/localnet-network\" 5 }", "oc apply -f <filename>.yaml", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: vm-server spec: running: true template: spec: domain: devices: interfaces: - name: secondary 1 bridge: {} resources: requests: memory: 1024Mi networks: - name: secondary 2 multus: networkName: <nad_name> 3 nodeSelector: node-role.kubernetes.io/worker: '' 4", "oc apply -f <filename>.yaml", "virtctl start <vm_name> -n <namespace>", "oc edit vm <vm_name>", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: vm-fedora template: spec: domain: devices: interfaces: - name: defaultnetwork masquerade: {} # new interface - name: <secondary_nic> 1 bridge: {} networks: - name: defaultnetwork pod: {} # new network - name: <secondary_nic> 2 multus: networkName: <nad_name> 3", "virtctl migrate <vm_name>", "oc get VirtualMachineInstanceMigration -w", "NAME PHASE VMI kubevirt-migrate-vm-lj62q Scheduling vm-fedora kubevirt-migrate-vm-lj62q Scheduled vm-fedora kubevirt-migrate-vm-lj62q PreparingTarget vm-fedora kubevirt-migrate-vm-lj62q TargetReady vm-fedora kubevirt-migrate-vm-lj62q Running vm-fedora kubevirt-migrate-vm-lj62q Succeeded vm-fedora", "oc get vmi vm-fedora -ojsonpath=\"{ @.status.interfaces }\"", "[ { \"infoSource\": \"domain, guest-agent\", \"interfaceName\": \"eth0\", \"ipAddress\": \"10.130.0.195\", \"ipAddresses\": [ \"10.130.0.195\", \"fd02:0:0:3::43c\" ], \"mac\": \"52:54:00:0e:ab:25\", \"name\": \"default\", \"queueCount\": 1 }, { \"infoSource\": \"domain, guest-agent, multus-status\", \"interfaceName\": \"eth1\", \"mac\": \"02:d8:b8:00:00:2a\", \"name\": \"bridge-interface\", 1 \"queueCount\": 1 } ]", "oc edit vm <vm_name>", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: vm-fedora template: spec: domain: devices: interfaces: - name: defaultnetwork masquerade: {} # set the interface state to absent - name: <secondary_nic> state: absent 1 
bridge: {} networks: - name: defaultnetwork pod: {} - name: <secondary_nic> multus: networkName: <nad_name>", "virtctl migrate <vm_name>", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: labels: kubevirt.io/vm: vm-istio name: vm-istio spec: runStrategy: Always template: metadata: labels: kubevirt.io/vm: vm-istio app: vm-istio 1 annotations: sidecar.istio.io/inject: \"true\" 2 spec: domain: devices: interfaces: - name: default masquerade: {} 3 disks: - disk: bus: virtio name: containerdisk - disk: bus: virtio name: cloudinitdisk resources: requests: memory: 1024M networks: - name: default pod: {} terminationGracePeriodSeconds: 180 volumes: - containerDisk: image: registry:5000/kubevirt/fedora-cloud-container-disk-demo:devel name: containerdisk", "oc apply -f <vm_name>.yaml 1", "apiVersion: v1 kind: Service metadata: name: vm-istio spec: selector: app: vm-istio 1 ports: - port: 8080 name: http protocol: TCP", "oc create -f <service_name>.yaml 1", "apiVersion: \"k8s.cni.cncf.io/v1\" kind: NetworkAttachmentDefinition metadata: name: my-secondary-network 1 namespace: openshift-cnv spec: config: '{ \"cniVersion\": \"0.3.1\", \"name\": \"migration-bridge\", \"type\": \"macvlan\", \"master\": \"eth1\", 2 \"mode\": \"bridge\", \"ipam\": { \"type\": \"whereabouts\", 3 \"range\": \"10.200.5.0/24\" 4 } }'", "oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: liveMigrationConfig: completionTimeoutPerGiB: 800 network: <network> 1 parallelMigrationsPerCluster: 5 parallelOutboundMigrationsPerNode: 2 progressTimeout: 150", "oc get vmi <vmi_name> -o jsonpath='{.status.migrationState.targetNodeAddress}'", "kind: VirtualMachine spec: template: # spec: volumes: - cloudInitNoCloud: networkData: | version: 2 ethernets: eth1: 1 dhcp4: true", "kind: VirtualMachine spec: template: # spec: volumes: - cloudInitNoCloud: networkData: | version: 2 ethernets: eth1: 1 addresses: - 10.10.10.14/24 2", "oc describe vmi <vmi_name>", "Interfaces: Interface Name: eth0 Ip Address: 10.244.0.37/24 Ip Addresses: 10.244.0.37/24 fe80::858:aff:fef4:25/64 Mac: 0a:58:0a:f4:00:25 Name: default Interface Name: v2 Ip Address: 1.1.1.7/24 Ip Addresses: 1.1.1.7/24 fe80::f4d9:70ff:fe13:9089/64 Mac: f6:d9:70:13:90:89 Interface Name: v1 Ip Address: 1.1.1.1/24 Ip Addresses: 1.1.1.1/24 1.1.1.2/24 1.1.1.4/24 2001:de7:0:f101::1/64 2001:db8:0:f101::1/64 fe80::1420:84ff:fe10:17aa/64 Mac: 16:20:84:10:17:aa", "oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: featureGates: deployKubeSecondaryDNS: true 1", "oc expose -n openshift-cnv deployment/secondary-dns --name=dns-lb --type=LoadBalancer --port=53 --target-port=5353 --protocol='UDP'", "oc get service -n openshift-cnv", "NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE dns-lb LoadBalancer 172.30.27.5 10.46.41.94 53:31829/TCP 5s", "oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: featureGates: deployKubeSecondaryDNS: true kubeSecondaryDNSNameServerIP: \"10.46.41.94\" 1", "oc get dnses.config.openshift.io cluster -o jsonpath='{.spec.baseDomain}'", "openshift.example.com", "vm.<FQDN>. IN NS ns.vm.<FQDN>.", "ns.vm.<FQDN>. 
IN A <kubeSecondaryDNSNameServerIP>", "oc get dnses.config.openshift.io cluster -o json | jq .spec.baseDomain", "oc get vm -n <namespace> <vm_name> -o yaml", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: example-vm namespace: example-namespace spec: running: true template: spec: domain: devices: interfaces: - bridge: {} name: example-nic networks: - multus: networkName: bridge-conf name: example-nic 1", "ssh <user_name>@<interface_name>.<vm_name>.<namespace>.vm.<cluster_fqdn>", "oc label namespace <namespace1> <namespace2> mutatevirtualmachines.kubemacpool.io=ignore", "oc label namespace <namespace1> <namespace2> mutatevirtualmachines.kubemacpool.io-", "oc edit storageprofile <storage_class>", "apiVersion: cdi.kubevirt.io/v1beta1 kind: StorageProfile metadata: name: <unknown_provisioner_class> spec: {} status: provisioner: <unknown_provisioner> storageClass: <unknown_provisioner_class>", "apiVersion: cdi.kubevirt.io/v1beta1 kind: StorageProfile metadata: name: <unknown_provisioner_class> spec: claimPropertySets: - accessModes: - ReadWriteOnce 1 volumeMode: Filesystem 2 status: provisioner: <unknown_provisioner> storageClass: <unknown_provisioner_class>", "apiVersion: cdi.kubevirt.io/v1beta1 kind: StorageProfile metadata: name: <provisioner_class> spec: claimPropertySets: - accessModes: - ReadWriteOnce 1 volumeMode: Filesystem 2 cloneStrategy: csi-clone 3 status: provisioner: <provisioner> storageClass: <provisioner_class>", "oc get storageprofile", "oc describe storageprofile <name>", "Name: ocs-storagecluster-ceph-rbd-virtualization Namespace: Labels: app=containerized-data-importer app.kubernetes.io/component=storage app.kubernetes.io/managed-by=cdi-controller app.kubernetes.io/part-of=hyperconverged-cluster app.kubernetes.io/version=4.17.2 cdi.kubevirt.io= Annotations: <none> API Version: cdi.kubevirt.io/v1beta1 Kind: StorageProfile Metadata: Creation Timestamp: 2023-11-13T07:58:02Z Generation: 2 Owner References: API Version: cdi.kubevirt.io/v1beta1 Block Owner Deletion: true Controller: true Kind: CDI Name: cdi-kubevirt-hyperconverged UID: 2d6f169a-382c-4caf-b614-a640f2ef8abb Resource Version: 4186799537 UID: 14aef804-6688-4f2e-986b-0297fd3aaa68 Spec: Status: Claim Property Sets: 1 accessModes: ReadWriteMany volumeMode: Block accessModes: ReadWriteOnce volumeMode: Block accessModes: ReadWriteOnce volumeMode: Filesystem Clone Strategy: csi-clone 2 Data Import Cron Source Format: snapshot 3 Provisioner: openshift-storage.rbd.csi.ceph.com Snapshot Class: ocs-storagecluster-rbdplugin-snapclass Storage Class: ocs-storagecluster-ceph-rbd-virtualization Events: <none>", "oc patch hyperconverged kubevirt-hyperconverged -n openshift-cnv --type json -p '[{\"op\": \"replace\", \"path\": \"/spec/featureGates/enableCommonBootImageImport\", \"value\": false}]'", "oc patch hyperconverged kubevirt-hyperconverged -n openshift-cnv --type json -p '[{\"op\": \"replace\", \"path\": \"/spec/featureGates/enableCommonBootImageImport\", \"value\": true}]'", "oc get sc -o json| jq '.items[].metadata|select(.annotations.\"storageclass.kubevirt.io/is-default-virt-class\"==\"true\")|.name'", "oc patch storageclass <storage_class_name> -p '{\"metadata\": {\"annotations\": {\"storageclass.kubevirt.io/is-default-virt-class\": \"false\"}}}'", "oc get sc -o json| jq '.items[].metadata|select(.annotations.\"storageclass.kubernetes.io/is-default-class\"==\"true\")|.name'", "oc patch storageclass <storage_class_name> -p '{\"metadata\": {\"annotations\": {\"storageclass.kubernetes.io/is-default-class\": 
\"false\"}}}'", "oc patch storageclass <storage_class_name> -p '{\"metadata\": {\"annotations\": {\"storageclass.kubevirt.io/is-default-virt-class\": \"true\"}}}'", "oc patch storageclass <storage_class_name> -p '{\"metadata\": {\"annotations\": {\"storageclass.kubernetes.io/is-default-class\": \"true\"}}}'", "oc delete DataVolume,VolumeSnapshot -n openshift-virtualization-os-images --selector=cdi.kubevirt.io/dataImportCron", "oc get storageprofile <storage_class_name> -o json | jq .status.dataImportCronSourceFormat", "oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged spec: dataImportCronTemplates: - metadata: name: rhel9-image-cron spec: template: spec: storage: storageClassName: <storage_class> 1 schedule: \"0 */12 * * *\" 2 managedDataSource: <data_source> 3", "oc delete DataVolume,VolumeSnapshot -n openshift-virtualization-os-images --selector=cdi.kubevirt.io/dataImportCron", "oc get storageprofile <storage_class_name> -o json | jq .status.dataImportCronSourceFormat", "oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged spec: dataImportCronTemplates: - metadata: name: centos-stream9-image-cron annotations: cdi.kubevirt.io/storage.bind.immediate.requested: \"true\" 1 spec: schedule: \"0 */12 * * *\" 2 template: spec: source: registry: 3 url: docker://quay.io/containerdisks/centos-stream:9 storage: resources: requests: storage: 30Gi garbageCollect: Outdated managedDataSource: centos-stream9 4", "oc edit storageprofile <storage_class>", "apiVersion: cdi.kubevirt.io/v1beta1 kind: StorageProfile metadata: spec: dataImportCronSourceFormat: snapshot", "oc get storageprofile <storage_class> -oyaml", "oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged spec: dataImportCronTemplates: - metadata: annotations: dataimportcrontemplate.kubevirt.io/enable: 'false' name: rhel8-image-cron", "oc get hyperconverged kubevirt-hyperconverged -n openshift-cnv -o yaml", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged spec: status: dataImportCronTemplates: - metadata: annotations: cdi.kubevirt.io/storage.bind.immediate.requested: \"true\" name: centos-9-image-cron spec: garbageCollect: Outdated managedDataSource: centos-stream9 schedule: 55 8/12 * * * template: metadata: {} spec: source: registry: url: docker://quay.io/containerdisks/centos-stream:9 storage: resources: requests: storage: 30Gi status: {} status: commonTemplate: true 1 - metadata: annotations: cdi.kubevirt.io/storage.bind.immediate.requested: \"true\" name: user-defined-dic spec: garbageCollect: Outdated managedDataSource: user-defined-centos-stream9 schedule: 55 8/12 * * * template: metadata: {} spec: source: registry: pullMethod: node url: docker://quay.io/containerdisks/centos-stream:9 storage: resources: requests: storage: 30Gi status: {} status: {} 2", "oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv", "spec: filesystemOverhead: global: \"<new_global_value>\" 1 storageClass: <storage_class_name>: \"<new_value_for_this_storage_class>\" 2", "oc get cdiconfig -o yaml", "oc get cdiconfig -o jsonpath='{.items..status.filesystemOverhead}'", "apiVersion: hostpathprovisioner.kubevirt.io/v1beta1 kind: HostPathProvisioner metadata: name: 
hostpath-provisioner spec: imagePullPolicy: IfNotPresent storagePools: 1 - name: any_name path: \"/var/myvolumes\" 2 workload: nodeSelector: kubernetes.io/os: linux", "oc create -f hpp_cr.yaml", "apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: hostpath-csi provisioner: kubevirt.io.hostpath-provisioner reclaimPolicy: Delete 1 volumeBindingMode: WaitForFirstConsumer 2 parameters: storagePool: my-storage-pool 3", "oc create -f storageclass_csi.yaml", "apiVersion: v1 kind: PersistentVolumeClaim metadata: name: iso-pvc spec: volumeMode: Block 1 storageClassName: my-storage-class accessModes: - ReadWriteOnce resources: requests: storage: 5Gi", "apiVersion: hostpathprovisioner.kubevirt.io/v1beta1 kind: HostPathProvisioner metadata: name: hostpath-provisioner spec: imagePullPolicy: IfNotPresent storagePools: 1 - name: my-storage-pool path: \"/var/myvolumes\" 2 pvcTemplate: volumeMode: Block 3 storageClassName: my-storage-class 4 accessModes: - ReadWriteOnce resources: requests: storage: 5Gi 5 workload: nodeSelector: kubernetes.io/os: linux", "oc create -f hpp_pvc_template_pool.yaml", "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: <datavolume-cloner> 1 rules: - apiGroups: [\"cdi.kubevirt.io\"] resources: [\"datavolumes/source\"] verbs: [\"*\"]", "oc create -f <datavolume-cloner.yaml> 1", "apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: <allow-clone-to-user> 1 namespace: <Source namespace> 2 subjects: - kind: ServiceAccount name: default namespace: <Destination namespace> 3 roleRef: kind: ClusterRole name: datavolume-cloner 4 apiGroup: rbac.authorization.k8s.io", "oc create -f <datavolume-cloner.yaml> 1", "oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged spec: resourceRequirements: storageWorkloads: limits: cpu: \"500m\" memory: \"2Gi\" requests: cpu: \"250m\" memory: \"1Gi\"", "oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged spec: scratchSpaceStorageClass: \"<storage_class>\" 1", "apiVersion: cdi.kubevirt.io/v1beta1 kind: DataVolume metadata: name: preallocated-datavolume spec: source: 1 registry: url: <image_url> 2 storage: resources: requests: storage: 1Gi preallocation: true", "apiVersion: cdi.kubevirt.io/v1beta1 kind: DataVolume metadata: name: datavolume-example annotations: v1.multus-cni.io/default-network: bridge-network 1", "Product of (Maximum number of nodes that can drain in parallel) and (Highest total VM memory request allocations across nodes)", "oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: liveMigrationConfig: bandwidthPerMigration: 64Mi 1 completionTimeoutPerGiB: 800 2 parallelMigrationsPerCluster: 5 3 parallelOutboundMigrationsPerNode: 2 4 progressTimeout: 150 5 allowPostCopy: false 6", "oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: liveMigrationConfig: bandwidthPerMigration: 0Mi 1 completionTimeoutPerGiB: 150 2 parallelMigrationsPerCluster: 5 3 parallelOutboundMigrationsPerNode: 1 4 progressTimeout: 150 5 allowPostCopy: true 6", "oc edit vm <vm_name>", "apiVersion: 
migrations.kubevirt.io/v1alpha1 kind: VirtualMachine metadata: name: <vm_name> namespace: default labels: app: my-app environment: production spec: template: metadata: labels: kubevirt.io/domain: <vm_name> kubevirt.io/size: large kubevirt.io/environment: production", "apiVersion: migrations.kubevirt.io/v1alpha1 kind: MigrationPolicy metadata: name: <migration_policy> spec: selectors: namespaceSelector: 1 hpc-workloads: \"True\" xyz-workloads-type: \"\" virtualMachineInstanceSelector: 2 kubevirt.io/environment: \"production\"", "oc create -f <migration_policy>.yaml", "apiVersion: kubevirt.io/v1 kind: VirtualMachineInstanceMigration metadata: name: <migration_name> spec: vmiName: <vm_name>", "oc create -f <migration_name>.yaml", "oc describe vmi <vm_name> -n <namespace>", "Status: Conditions: Last Probe Time: <nil> Last Transition Time: <nil> Status: True Type: LiveMigratable Migration Method: LiveMigration Migration State: Completed: true End Timestamp: 2018-12-24T06:19:42Z Migration UID: d78c8962-0743-11e9-a540-fa163e0c69f1 Source Node: node2.example.com Start Timestamp: 2018-12-24T06:19:35Z Target Node: node1.example.com Target Node Address: 10.9.0.18:43891 Target Node Domain Detected: true", "oc delete vmim migration-job", "oc edit vm <vm_name> -n <namespace>", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: <vm_name> spec: template: spec: evictionStrategy: LiveMigrateIfPossible 1", "virtctl restart <vm_name> -n <namespace>", "oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged spec: evictionStrategy: LiveMigrate", "oc edit vm <vm_name> -n <namespace>", "apiVersion: kubevirt.io/v1 kind: VirtualMachine spec: runStrategy: Always", "\"486\" Conroe athlon core2duo coreduo kvm32 kvm64 n270 pentium pentium2 pentium3 pentiumpro phenom qemu32 qemu64", "apic clflush cmov cx16 cx8 de fpu fxsr lahf_lm lm mca mce mmx msr mtrr nx pae pat pge pni pse pse36 sep sse sse2 sse4.1 ssse3 syscall tsc", "aes apic avx avx2 bmi1 bmi2 clflush cmov cx16 cx8 de erms fma fpu fsgsbase fxsr hle invpcid lahf_lm lm mca mce mmx movbe msr mtrr nx pae pat pcid pclmuldq pge pni popcnt pse pse36 rdtscp rtm sep smep sse sse2 sse4.1 sse4.2 ssse3 syscall tsc tsc-deadline x2apic xsave", "aes avx avx2 bmi1 bmi2 erms fma fsgsbase hle invpcid movbe pcid pclmuldq popcnt rdtscp rtm sse4.2 tsc-deadline x2apic xsave", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: obsoleteCPUs: cpuModels: 1 - \"<obsolete_cpu_1>\" - \"<obsolete_cpu_2>\" minCPUModel: \"<minimum_cpu_model>\" 2", "oc annotate node <node_name> node-labeller.kubevirt.io/skip-node=true 1", "oc adm cordon <node_name>", "oc adm drain <node_name> --force=true", "oc delete node <node_name>", "oc get vmis -A", "--- apiVersion: v1 kind: ServiceAccount metadata: name: vm-latency-checkup-sa --- apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: kubevirt-vm-latency-checker rules: - apiGroups: [\"kubevirt.io\"] resources: [\"virtualmachineinstances\"] verbs: [\"get\", \"create\", \"delete\"] - apiGroups: [\"subresources.kubevirt.io\"] resources: [\"virtualmachineinstances/console\"] verbs: [\"get\"] - apiGroups: [\"k8s.cni.cncf.io\"] resources: [\"network-attachment-definitions\"] verbs: [\"get\"] --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: kubevirt-vm-latency-checker subjects: - kind: ServiceAccount name: 
vm-latency-checkup-sa roleRef: kind: Role name: kubevirt-vm-latency-checker apiGroup: rbac.authorization.k8s.io --- apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: kiagnose-configmap-access rules: - apiGroups: [ \"\" ] resources: [ \"configmaps\" ] verbs: [\"get\", \"update\"] --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: kiagnose-configmap-access subjects: - kind: ServiceAccount name: vm-latency-checkup-sa roleRef: kind: Role name: kiagnose-configmap-access apiGroup: rbac.authorization.k8s.io", "oc apply -n <target_namespace> -f <latency_sa_roles_rolebinding>.yaml 1", "apiVersion: v1 kind: ConfigMap metadata: name: kubevirt-vm-latency-checkup-config labels: kiagnose/checkup-type: kubevirt-vm-latency data: spec.timeout: 5m spec.param.networkAttachmentDefinitionNamespace: <target_namespace> spec.param.networkAttachmentDefinitionName: \"blue-network\" 1 spec.param.maxDesiredLatencyMilliseconds: \"10\" 2 spec.param.sampleDurationSeconds: \"5\" 3 spec.param.sourceNode: \"worker1\" 4 spec.param.targetNode: \"worker2\" 5", "oc apply -n <target_namespace> -f <latency_config_map>.yaml", "apiVersion: batch/v1 kind: Job metadata: name: kubevirt-vm-latency-checkup labels: kiagnose/checkup-type: kubevirt-vm-latency spec: backoffLimit: 0 template: spec: serviceAccountName: vm-latency-checkup-sa restartPolicy: Never containers: - name: vm-latency-checkup image: registry.redhat.io/container-native-virtualization/vm-network-latency-checkup-rhel9:v4.17.0 securityContext: allowPrivilegeEscalation: false capabilities: drop: [\"ALL\"] runAsNonRoot: true seccompProfile: type: \"RuntimeDefault\" env: - name: CONFIGMAP_NAMESPACE value: <target_namespace> - name: CONFIGMAP_NAME value: kubevirt-vm-latency-checkup-config - name: POD_UID valueFrom: fieldRef: fieldPath: metadata.uid", "oc apply -n <target_namespace> -f <latency_job>.yaml", "oc wait job kubevirt-vm-latency-checkup -n <target_namespace> --for condition=complete --timeout 6m", "oc get configmap kubevirt-vm-latency-checkup-config -n <target_namespace> -o yaml", "apiVersion: v1 kind: ConfigMap metadata: name: kubevirt-vm-latency-checkup-config namespace: <target_namespace> labels: kiagnose/checkup-type: kubevirt-vm-latency data: spec.timeout: 5m spec.param.networkAttachmentDefinitionNamespace: <target_namespace> spec.param.networkAttachmentDefinitionName: \"blue-network\" spec.param.maxDesiredLatencyMilliseconds: \"10\" spec.param.sampleDurationSeconds: \"5\" spec.param.sourceNode: \"worker1\" spec.param.targetNode: \"worker2\" status.succeeded: \"true\" status.failureReason: \"\" status.completionTimestamp: \"2022-01-01T09:00:00Z\" status.startTimestamp: \"2022-01-01T09:00:07Z\" status.result.avgLatencyNanoSec: \"177000\" status.result.maxLatencyNanoSec: \"244000\" 1 status.result.measurementDurationSec: \"5\" status.result.minLatencyNanoSec: \"135000\" status.result.sourceNode: \"worker1\" status.result.targetNode: \"worker2\"", "oc logs job.batch/kubevirt-vm-latency-checkup -n <target_namespace>", "oc delete job -n <target_namespace> kubevirt-vm-latency-checkup", "oc delete config-map -n <target_namespace> kubevirt-vm-latency-checkup-config", "oc delete -f <latency_sa_roles_rolebinding>.yaml", "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: kubevirt-storage-checkup-clustereader roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: cluster-reader subjects: - kind: ServiceAccount name: storage-checkup-sa namespace: <target_namespace> 1", "--- 
apiVersion: v1 kind: ServiceAccount metadata: name: storage-checkup-sa --- apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: storage-checkup-role rules: - apiGroups: [ \"\" ] resources: [ \"configmaps\" ] verbs: [\"get\", \"update\"] - apiGroups: [ \"kubevirt.io\" ] resources: [ \"virtualmachines\" ] verbs: [ \"create\", \"delete\" ] - apiGroups: [ \"kubevirt.io\" ] resources: [ \"virtualmachineinstances\" ] verbs: [ \"get\" ] - apiGroups: [ \"subresources.kubevirt.io\" ] resources: [ \"virtualmachineinstances/addvolume\", \"virtualmachineinstances/removevolume\" ] verbs: [ \"update\" ] - apiGroups: [ \"kubevirt.io\" ] resources: [ \"virtualmachineinstancemigrations\" ] verbs: [ \"create\" ] - apiGroups: [ \"cdi.kubevirt.io\" ] resources: [ \"datavolumes\" ] verbs: [ \"create\", \"delete\" ] - apiGroups: [ \"\" ] resources: [ \"persistentvolumeclaims\" ] verbs: [ \"delete\" ] --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: storage-checkup-role subjects: - kind: ServiceAccount name: storage-checkup-sa roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: storage-checkup-role", "oc apply -n <target_namespace> -f <storage_sa_roles_rolebinding>.yaml", "--- apiVersion: v1 kind: ConfigMap metadata: name: storage-checkup-config namespace: USDCHECKUP_NAMESPACE data: spec.timeout: 10m spec.param.storageClass: ocs-storagecluster-ceph-rbd-virtualization spec.param.vmiTimeout: 3m --- apiVersion: batch/v1 kind: Job metadata: name: storage-checkup namespace: USDCHECKUP_NAMESPACE spec: backoffLimit: 0 template: spec: serviceAccount: storage-checkup-sa restartPolicy: Never containers: - name: storage-checkup image: quay.io/kiagnose/kubevirt-storage-checkup:main imagePullPolicy: Always env: - name: CONFIGMAP_NAMESPACE value: USDCHECKUP_NAMESPACE - name: CONFIGMAP_NAME value: storage-checkup-config", "oc apply -n <target_namespace> -f <storage_configmap_job>.yaml", "oc wait job storage-checkup -n <target_namespace> --for condition=complete --timeout 10m", "oc get configmap storage-checkup-config -n <target_namespace> -o yaml", "apiVersion: v1 kind: ConfigMap metadata: name: storage-checkup-config labels: kiagnose/checkup-type: kubevirt-storage data: spec.timeout: 10m status.succeeded: \"true\" 1 status.failureReason: \"\" 2 status.startTimestamp: \"2023-07-31T13:14:38Z\" 3 status.completionTimestamp: \"2023-07-31T13:19:41Z\" 4 status.result.cnvVersion: 4.17.2 5 status.result.defaultStorageClass: trident-nfs 6 status.result.goldenImagesNoDataSource: <data_import_cron_list> 7 status.result.goldenImagesNotUpToDate: <data_import_cron_list> 8 status.result.ocpVersion: 4.17.0 9 status.result.pvcBound: \"true\" 10 status.result.storageProfileMissingVolumeSnapshotClass: <storage_class_list> 11 status.result.storageProfilesWithEmptyClaimPropertySets: <storage_profile_list> 12 status.result.storageProfilesWithSmartClone: <storage_profile_list> 13 status.result.storageProfilesWithSpecClaimPropertySets: <storage_profile_list> 14 status.result.storageProfilesWithRWX: |- ocs-storagecluster-ceph-rbd ocs-storagecluster-ceph-rbd-virtualization ocs-storagecluster-cephfs trident-iscsi trident-minio trident-nfs windows-vms status.result.vmBootFromGoldenImage: VMI \"vmi-under-test-dhkb8\" successfully booted status.result.vmHotplugVolume: |- VMI \"vmi-under-test-dhkb8\" hotplug volume ready VMI \"vmi-under-test-dhkb8\" hotplug volume removed status.result.vmLiveMigration: VMI \"vmi-under-test-dhkb8\" migration completed status.result.vmVolumeClone: 'DV cloneType: 
\"csi-clone\"' status.result.vmsWithNonVirtRbdStorageClass: <vm_list> 15 status.result.vmsWithUnsetEfsStorageClass: <vm_list> 16", "oc delete job -n <target_namespace> storage-checkup", "oc delete config-map -n <target_namespace> storage-checkup-config", "oc delete -f <storage_sa_roles_rolebinding>.yaml", "--- apiVersion: v1 kind: ServiceAccount metadata: name: dpdk-checkup-sa --- apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: kiagnose-configmap-access rules: - apiGroups: [ \"\" ] resources: [ \"configmaps\" ] verbs: [ \"get\", \"update\" ] --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: kiagnose-configmap-access subjects: - kind: ServiceAccount name: dpdk-checkup-sa roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: kiagnose-configmap-access --- apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: kubevirt-dpdk-checker rules: - apiGroups: [ \"kubevirt.io\" ] resources: [ \"virtualmachineinstances\" ] verbs: [ \"create\", \"get\", \"delete\" ] - apiGroups: [ \"subresources.kubevirt.io\" ] resources: [ \"virtualmachineinstances/console\" ] verbs: [ \"get\" ] - apiGroups: [ \"\" ] resources: [ \"configmaps\" ] verbs: [ \"create\", \"delete\" ] --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: kubevirt-dpdk-checker subjects: - kind: ServiceAccount name: dpdk-checkup-sa roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: kubevirt-dpdk-checker", "oc apply -n <target_namespace> -f <dpdk_sa_roles_rolebinding>.yaml", "apiVersion: v1 kind: ConfigMap metadata: name: dpdk-checkup-config labels: kiagnose/checkup-type: kubevirt-dpdk data: spec.timeout: 10m spec.param.networkAttachmentDefinitionName: <network_name> 1 spec.param.trafficGenContainerDiskImage: \"quay.io/kiagnose/kubevirt-dpdk-checkup-traffic-gen:v0.4.0 2 spec.param.vmUnderTestContainerDiskImage: \"quay.io/kiagnose/kubevirt-dpdk-checkup-vm:v0.4.0\" 3", "oc apply -n <target_namespace> -f <dpdk_config_map>.yaml", "apiVersion: batch/v1 kind: Job metadata: name: dpdk-checkup labels: kiagnose/checkup-type: kubevirt-dpdk spec: backoffLimit: 0 template: spec: serviceAccountName: dpdk-checkup-sa restartPolicy: Never containers: - name: dpdk-checkup image: registry.redhat.io/container-native-virtualization/kubevirt-dpdk-checkup-rhel9:v4.17.0 imagePullPolicy: Always securityContext: allowPrivilegeEscalation: false capabilities: drop: [\"ALL\"] runAsNonRoot: true seccompProfile: type: \"RuntimeDefault\" env: - name: CONFIGMAP_NAMESPACE value: <target-namespace> - name: CONFIGMAP_NAME value: dpdk-checkup-config - name: POD_UID valueFrom: fieldRef: fieldPath: metadata.uid", "oc apply -n <target_namespace> -f <dpdk_job>.yaml", "oc wait job dpdk-checkup -n <target_namespace> --for condition=complete --timeout 10m", "oc get configmap dpdk-checkup-config -n <target_namespace> -o yaml", "apiVersion: v1 kind: ConfigMap metadata: name: dpdk-checkup-config labels: kiagnose/checkup-type: kubevirt-dpdk data: spec.timeout: 10m spec.param.NetworkAttachmentDefinitionName: \"dpdk-network-1\" spec.param.trafficGenContainerDiskImage: \"quay.io/kiagnose/kubevirt-dpdk-checkup-traffic-gen:v0.4.0\" spec.param.vmUnderTestContainerDiskImage: \"quay.io/kiagnose/kubevirt-dpdk-checkup-vm:v0.4.0\" status.succeeded: \"true\" 1 status.failureReason: \"\" 2 status.startTimestamp: \"2023-07-31T13:14:38Z\" 3 status.completionTimestamp: \"2023-07-31T13:19:41Z\" 4 status.result.trafficGenSentPackets: \"480000000\" 5 status.result.trafficGenOutputErrorPackets: \"0\" 6 
status.result.trafficGenInputErrorPackets: \"0\" 7 status.result.trafficGenActualNodeName: worker-dpdk1 8 status.result.vmUnderTestActualNodeName: worker-dpdk2 9 status.result.vmUnderTestReceivedPackets: \"480000000\" 10 status.result.vmUnderTestRxDroppedPackets: \"0\" 11 status.result.vmUnderTestTxDroppedPackets: \"0\" 12", "oc delete job -n <target_namespace> dpdk-checkup", "oc delete config-map -n <target_namespace> dpdk-checkup-config", "oc delete -f <dpdk_sa_roles_rolebinding>.yaml", "dnf install guestfs-tools", "composer-cli distros list", "usermod -a -G weldr <user>", "newgrp weldr", "cat << EOF > dpdk-vm.toml name = \"dpdk_image\" description = \"Image to use with the DPDK checkup\" version = \"0.0.1\" distro = \"rhel-9.4\" [[customizations.user]] name = \"root\" password = \"redhat\" [[packages]] name = \"dpdk\" [[packages]] name = \"dpdk-tools\" [[packages]] name = \"driverctl\" [[packages]] name = \"tuned-profiles-cpu-partitioning\" [customizations.kernel] append = \"default_hugepagesz=1GB hugepagesz=1G hugepages=1\" [customizations.services] disabled = [\"NetworkManager-wait-online\", \"sshd\"] EOF", "composer-cli blueprints push dpdk-vm.toml", "composer-cli compose start dpdk_image qcow2", "composer-cli compose status", "composer-cli compose image <UUID>", "cat <<EOF >customize-vm #!/bin/bash Setup hugepages mount mkdir -p /mnt/huge echo \"hugetlbfs /mnt/huge hugetlbfs defaults,pagesize=1GB 0 0\" >> /etc/fstab Create vfio-noiommu.conf echo \"options vfio enable_unsafe_noiommu_mode=1\" > /etc/modprobe.d/vfio-noiommu.conf Enable guest-exec,guest-exec-status on the qemu-guest-agent configuration sed -i 's/\\(--allow-rpcs=[^\"]*\\)/\\1,guest-exec-status,guest-exec/' /etc/sysconfig/qemu-ga Disable Bracketed-paste mode echo \"set enable-bracketed-paste off\" >> /root/.inputrc EOF", "virt-customize -a <UUID>-disk.qcow2 --run=customize-vm --selinux-relabel", "cat << EOF > Dockerfile FROM scratch COPY --chown=107:107 <UUID>-disk.qcow2 /disk/ EOF", "podman build . 
-t dpdk-rhel:latest", "podman push dpdk-rhel:latest", "topk(3, sum by (name, namespace) (rate(kubevirt_vmi_vcpu_wait_seconds_total[6m]))) > 0 1", "topk(3, sum by (name, namespace) (rate(kubevirt_vmi_network_receive_bytes_total[6m])) + sum by (name, namespace) (rate(kubevirt_vmi_network_transmit_bytes_total[6m]))) > 0 1", "topk(3, sum by (name, namespace) (rate(kubevirt_vmi_storage_read_traffic_bytes_total[6m])) + sum by (name, namespace) (rate(kubevirt_vmi_storage_write_traffic_bytes_total[6m]))) > 0 1", "kubevirt_vmsnapshot_disks_restored_from_source{vm_name=\"simple-vm\", vm_namespace=\"default\"} 1", "kubevirt_vmsnapshot_disks_restored_from_source_bytes{vm_name=\"simple-vm\", vm_namespace=\"default\"} 1", "topk(3, sum by (name, namespace) (rate(kubevirt_vmi_storage_iops_read_total[6m])) + sum by (name, namespace) (rate(kubevirt_vmi_storage_iops_write_total[6m]))) > 0 1", "topk(3, sum by (name, namespace) (rate(kubevirt_vmi_memory_swap_in_traffic_bytes[6m])) + sum by (name, namespace) (rate(kubevirt_vmi_memory_swap_out_traffic_bytes[6m]))) > 0 1", "kind: Service apiVersion: v1 metadata: name: node-exporter-service 1 namespace: dynamation 2 labels: servicetype: metrics 3 spec: ports: - name: exmet 4 protocol: TCP port: 9100 5 targetPort: 9100 6 type: ClusterIP selector: monitor: metrics 7", "oc create -f node-exporter-service.yaml", "wget https://github.com/prometheus/node_exporter/releases/download/v1.3.1/node_exporter-1.3.1.linux-amd64.tar.gz", "sudo tar xvf node_exporter-1.3.1.linux-amd64.tar.gz --directory /usr/bin --strip 1 \"*/node_exporter\"", "[Unit] Description=Prometheus Metrics Exporter After=network.target StartLimitIntervalSec=0 [Service] Type=simple Restart=always RestartSec=1 User=root ExecStart=/usr/bin/node_exporter [Install] WantedBy=multi-user.target", "sudo systemctl enable node_exporter.service sudo systemctl start node_exporter.service", "curl http://localhost:9100/metrics", "go_gc_duration_seconds{quantile=\"0\"} 1.5244e-05 go_gc_duration_seconds{quantile=\"0.25\"} 3.0449e-05 go_gc_duration_seconds{quantile=\"0.5\"} 3.7913e-05", "spec: template: metadata: labels: monitor: metrics", "oc get service -n <namespace> <node-exporter-service>", "curl http://<172.30.226.162:9100>/metrics | grep -vE \"^#|^USD\"", "node_arp_entries{device=\"eth0\"} 1 node_boot_time_seconds 1.643153218e+09 node_context_switches_total 4.4938158e+07 node_cooling_device_cur_state{name=\"0\",type=\"Processor\"} 0 node_cooling_device_max_state{name=\"0\",type=\"Processor\"} 0 node_cpu_guest_seconds_total{cpu=\"0\",mode=\"nice\"} 0 node_cpu_guest_seconds_total{cpu=\"0\",mode=\"user\"} 0 node_cpu_seconds_total{cpu=\"0\",mode=\"idle\"} 1.10586485e+06 node_cpu_seconds_total{cpu=\"0\",mode=\"iowait\"} 37.61 node_cpu_seconds_total{cpu=\"0\",mode=\"irq\"} 233.91 node_cpu_seconds_total{cpu=\"0\",mode=\"nice\"} 551.47 node_cpu_seconds_total{cpu=\"0\",mode=\"softirq\"} 87.3 node_cpu_seconds_total{cpu=\"0\",mode=\"steal\"} 86.12 node_cpu_seconds_total{cpu=\"0\",mode=\"system\"} 464.15 node_cpu_seconds_total{cpu=\"0\",mode=\"user\"} 1075.2 node_disk_discard_time_seconds_total{device=\"vda\"} 0 node_disk_discard_time_seconds_total{device=\"vdb\"} 0 node_disk_discarded_sectors_total{device=\"vda\"} 0 node_disk_discarded_sectors_total{device=\"vdb\"} 0 node_disk_discards_completed_total{device=\"vda\"} 0 node_disk_discards_completed_total{device=\"vdb\"} 0 node_disk_discards_merged_total{device=\"vda\"} 0 node_disk_discards_merged_total{device=\"vdb\"} 0 node_disk_info{device=\"vda\",major=\"252\",minor=\"0\"} 1 
node_disk_info{device=\"vdb\",major=\"252\",minor=\"16\"} 1 node_disk_io_now{device=\"vda\"} 0 node_disk_io_now{device=\"vdb\"} 0 node_disk_io_time_seconds_total{device=\"vda\"} 174 node_disk_io_time_seconds_total{device=\"vdb\"} 0.054 node_disk_io_time_weighted_seconds_total{device=\"vda\"} 259.79200000000003 node_disk_io_time_weighted_seconds_total{device=\"vdb\"} 0.039 node_disk_read_bytes_total{device=\"vda\"} 3.71867136e+08 node_disk_read_bytes_total{device=\"vdb\"} 366592 node_disk_read_time_seconds_total{device=\"vda\"} 19.128 node_disk_read_time_seconds_total{device=\"vdb\"} 0.039 node_disk_reads_completed_total{device=\"vda\"} 5619 node_disk_reads_completed_total{device=\"vdb\"} 96 node_disk_reads_merged_total{device=\"vda\"} 5 node_disk_reads_merged_total{device=\"vdb\"} 0 node_disk_write_time_seconds_total{device=\"vda\"} 240.66400000000002 node_disk_write_time_seconds_total{device=\"vdb\"} 0 node_disk_writes_completed_total{device=\"vda\"} 71584 node_disk_writes_completed_total{device=\"vdb\"} 0 node_disk_writes_merged_total{device=\"vda\"} 19761 node_disk_writes_merged_total{device=\"vdb\"} 0 node_disk_written_bytes_total{device=\"vda\"} 2.007924224e+09 node_disk_written_bytes_total{device=\"vdb\"} 0", "apiVersion: monitoring.coreos.com/v1 kind: ServiceMonitor metadata: labels: k8s-app: node-exporter-metrics-monitor name: node-exporter-metrics-monitor 1 namespace: dynamation 2 spec: endpoints: - interval: 30s 3 port: exmet 4 scheme: http selector: matchLabels: servicetype: metrics", "oc create -f node-exporter-metrics-monitor.yaml", "oc expose service -n <namespace> <node_exporter_service_name>", "oc get route -o=custom-columns=NAME:.metadata.name,DNS:.spec.host", "NAME DNS node-exporter-service node-exporter-service-dynamation.apps.cluster.example.org", "curl -s http://node-exporter-service-dynamation.apps.cluster.example.org/metrics", "go_gc_duration_seconds{quantile=\"0\"} 1.5382e-05 go_gc_duration_seconds{quantile=\"0.25\"} 3.1163e-05 go_gc_duration_seconds{quantile=\"0.5\"} 3.8546e-05 go_gc_duration_seconds{quantile=\"0.75\"} 4.9139e-05 go_gc_duration_seconds{quantile=\"1\"} 0.000189423", "oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: featureGates: downwardMetrics: true", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: featureGates: downwardMetrics: false", "oc patch hco kubevirt-hyperconverged -n openshift-cnv --type json -p '[{\"op\": \"replace\", \"path\": \"/spec/featureGates/downwardMetrics\" \"value\": true}]'", "oc patch hco kubevirt-hyperconverged -n openshift-cnv --type json -p '[{\"op\": \"replace\", \"path\": \"/spec/featureGates/downwardMetrics\" \"value\": false}]'", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: fedora namespace: default spec: dataVolumeTemplates: - metadata: name: fedora-volume spec: sourceRef: kind: DataSource name: fedora namespace: openshift-virtualization-os-images storage: resources: {} storageClassName: hostpath-csi-basic instancetype: name: u1.medium preference: name: fedora running: true template: metadata: labels: app.kubernetes.io/name: headless spec: domain: devices: downwardMetrics: {} 1 subdomain: headless volumes: - dataVolume: name: fedora-volume name: rootdisk - cloudInitNoCloud: userData: | #cloud-config chpasswd: expire: false password: '<password>' 2 user: fedora name: 
cloudinitdisk", "sudo sh -c 'printf \"GET /metrics/XML\\n\\n\" > /dev/virtio-ports/org.github.vhostmd.1'", "sudo cat /dev/virtio-ports/org.github.vhostmd.1", "sudo dnf install -y vm-dump-metrics", "sudo vm-dump-metrics", "<metrics> <metric type=\"string\" context=\"host\"> <name>HostName</name> <value>node01</value> [...] <metric type=\"int64\" context=\"host\" unit=\"s\"> <name>Time</name> <value>1619008605</value> </metric> <metric type=\"string\" context=\"host\"> <name>VirtualizationVendor</name> <value>kubevirt.io</value> </metric> </metrics>", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: annotations: name: fedora-vm namespace: example-namespace spec: template: spec: readinessProbe: httpGet: 1 port: 1500 2 path: /healthz 3 httpHeaders: - name: Custom-Header value: Awesome initialDelaySeconds: 120 4 periodSeconds: 20 5 timeoutSeconds: 10 6 failureThreshold: 3 7 successThreshold: 3 8", "oc create -f <file_name>.yaml", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: annotations: name: fedora-vm namespace: example-namespace spec: template: spec: readinessProbe: initialDelaySeconds: 120 1 periodSeconds: 20 2 tcpSocket: 3 port: 1500 4 timeoutSeconds: 10 5", "oc create -f <file_name>.yaml", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: annotations: name: fedora-vm namespace: example-namespace spec: template: spec: livenessProbe: initialDelaySeconds: 120 1 periodSeconds: 20 2 httpGet: 3 port: 1500 4 path: /healthz 5 httpHeaders: - name: Custom-Header value: Awesome timeoutSeconds: 10 6", "oc create -f <file_name>.yaml", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: labels: kubevirt.io/vm: vm2-rhel84-watchdog name: <vm-name> spec: running: false template: metadata: labels: kubevirt.io/vm: vm2-rhel84-watchdog spec: domain: devices: watchdog: name: <watchdog> i6300esb: action: \"poweroff\" 1", "oc apply -f <file_name>.yaml", "lspci | grep watchdog -i", "echo c > /proc/sysrq-trigger", "pkill -9 watchdog", "yum install watchdog", "#watchdog-device = /dev/watchdog", "systemctl enable --now watchdog.service", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: annotations: name: fedora-vm namespace: example-namespace spec: template: spec: readinessProbe: guestAgentPing: {} 1 initialDelaySeconds: 120 2 periodSeconds: 20 3 timeoutSeconds: 10 4 failureThreshold: 3 5 successThreshold: 3 6", "oc create -f <file_name>.yaml", "oc adm must-gather --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel9:v4.17.5 -- /usr/bin/gather", "oc adm must-gather --all-images", "oc adm must-gather --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel9:v4.17.5 -- <environment_variable_1> <environment_variable_2> <script_name>", "oc adm must-gather --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel9:v4.17.5 -- PROS=5 /usr/bin/gather 1", "oc adm must-gather --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel9:v4.17.5 -- NS=mynamespace VM=my-vm /usr/bin/gather --vms_details 1", "oc adm must-gather --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel9:v4.17.5 /usr/bin/gather --images", "oc adm must-gather --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel9:v4.17.5 /usr/bin/gather --instancetypes", "oc get events -n <namespace>", "oc describe <resource> <resource_name>", "oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: 
kubevirt-hyperconverged spec: logVerbosityConfig: kubevirt: virtAPI: 5 1 virtController: 4 virtHandler: 3 virtLauncher: 2 virtOperator: 6", "oc get pods -n openshift-cnv", "NAME READY STATUS RESTARTS AGE disks-images-provider-7gqbc 1/1 Running 0 32m disks-images-provider-vg4kx 1/1 Running 0 32m virt-api-57fcc4497b-7qfmc 1/1 Running 0 31m virt-api-57fcc4497b-tx9nc 1/1 Running 0 31m virt-controller-76c784655f-7fp6m 1/1 Running 0 30m virt-controller-76c784655f-f4pbd 1/1 Running 0 30m virt-handler-2m86x 1/1 Running 0 30m virt-handler-9qs6z 1/1 Running 0 30m virt-operator-7ccfdbf65f-q5snk 1/1 Running 0 32m virt-operator-7ccfdbf65f-vllz8 1/1 Running 0 32m", "oc logs -n openshift-cnv <pod_name>", "{\"component\":\"virt-handler\",\"level\":\"info\",\"msg\":\"set verbosity to 2\",\"pos\":\"virt-handler.go:453\",\"timestamp\":\"2022-04-17T08:58:37.373695Z\"} {\"component\":\"virt-handler\",\"level\":\"info\",\"msg\":\"set verbosity to 2\",\"pos\":\"virt-handler.go:453\",\"timestamp\":\"2022-04-17T08:58:37.373726Z\"} {\"component\":\"virt-handler\",\"level\":\"info\",\"msg\":\"setting rate limiter to 5 QPS and 10 Burst\",\"pos\":\"virt-handler.go:462\",\"timestamp\":\"2022-04-17T08:58:37.373782Z\"} {\"component\":\"virt-handler\",\"level\":\"info\",\"msg\":\"CPU features of a minimum baseline CPU model: map[apic:true clflush:true cmov:true cx16:true cx8:true de:true fpu:true fxsr:true lahf_lm:true lm:true mca:true mce:true mmx:true msr:true mtrr:true nx:true pae:true pat:true pge:true pni:true pse:true pse36:true sep:true sse:true sse2:true sse4.1:true ssse3:true syscall:true tsc:true]\",\"pos\":\"cpu_plugin.go:96\",\"timestamp\":\"2022-04-17T08:58:37.390221Z\"} {\"component\":\"virt-handler\",\"level\":\"warning\",\"msg\":\"host model mode is expected to contain only one model\",\"pos\":\"cpu_plugin.go:103\",\"timestamp\":\"2022-04-17T08:58:37.390263Z\"} {\"component\":\"virt-handler\",\"level\":\"info\",\"msg\":\"node-labeller is running\",\"pos\":\"node_labeller.go:94\",\"timestamp\":\"2022-04-17T08:58:37.391011Z\"}", "oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv", "kind: HyperConverged metadata: name: kubevirt-hyperconverged spec: virtualMachineOptions: disableSerialConsoleLog: true 1 #", "oc edit vm <vm_name>", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: example-vm spec: template: spec: domain: devices: logSerialConsole: true 1 #", "oc apply vm <vm_name>", "virtctl restart <vm_name> -n <namespace>", "oc logs -n <namespace> -l kubevirt.io/domain=<vm_name> --tail=-1 -c guest-console-log", "{log_type=~\".+\"}|json |kubernetes_labels_app_kubernetes_io_part_of=\"hyperconverged-cluster\"", "{log_type=~\".+\"}|json |kubernetes_labels_app_kubernetes_io_part_of=\"hyperconverged-cluster\" |kubernetes_labels_app_kubernetes_io_component=\"storage\"", "{log_type=~\".+\"}|json |kubernetes_labels_app_kubernetes_io_part_of=\"hyperconverged-cluster\" |kubernetes_labels_app_kubernetes_io_component=\"deployment\"", "{log_type=~\".+\"}|json |kubernetes_labels_app_kubernetes_io_part_of=\"hyperconverged-cluster\" |kubernetes_labels_app_kubernetes_io_component=\"network\"", "{log_type=~\".+\"}|json |kubernetes_labels_app_kubernetes_io_part_of=\"hyperconverged-cluster\" |kubernetes_labels_app_kubernetes_io_component=\"compute\"", "{log_type=~\".+\"}|json |kubernetes_labels_app_kubernetes_io_part_of=\"hyperconverged-cluster\" |kubernetes_labels_app_kubernetes_io_component=\"schedule\"", "{log_type=~\".+\",kubernetes_container_name=~\"<container>|<container>\"} 1 
|json|kubernetes_labels_app_kubernetes_io_part_of=\"hyperconverged-cluster\"", "{log_type=~\".+\", kubernetes_container_name=\"compute\"}|json |!= \"custom-ga-command\" 1", "{log_type=~\".+\"}|json |kubernetes_labels_app_kubernetes_io_part_of=\"hyperconverged-cluster\" |= \"error\" != \"timeout\"", "oc describe dv <DataVolume>", "Status: Conditions: Last Heart Beat Time: 2020-07-15T03:58:24Z Last Transition Time: 2020-07-15T03:58:24Z Message: PVC win10-rootdisk Bound Reason: Bound Status: True Type: Bound Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Bound 24s datavolume-controller PVC example-dv Bound", "Status: Conditions: Last Heart Beat Time: 2020-07-15T04:31:39Z Last Transition Time: 2020-07-15T04:31:39Z Message: Import Complete Reason: Completed Status: False Type: Running Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning Error 12s (x2 over 14s) datavolume-controller Unable to connect to http data source: expected status code 200, got 404. Status: 404 Not Found", "Status: Conditions: Last Heart Beat Time: 2020-07-15T04:31:39Z Last Transition Time: 2020-07-15T04:31:39Z Status: True Type: Ready", "oc get kubevirt kubevirt-hyperconverged -n openshift-cnv -o yaml", "spec: developerConfiguration: featureGates: - Snapshot", "apiVersion: snapshot.kubevirt.io/v1beta1 kind: VirtualMachineSnapshot metadata: name: <snapshot_name> spec: source: apiGroup: kubevirt.io kind: VirtualMachine name: <vm_name>", "oc create -f <snapshot_name>.yaml", "oc wait <vm_name> <snapshot_name> --for condition=Ready", "oc describe vmsnapshot <snapshot_name>", "apiVersion: snapshot.kubevirt.io/v1beta1 kind: VirtualMachineSnapshot metadata: creationTimestamp: \"2020-09-30T14:41:51Z\" finalizers: - snapshot.kubevirt.io/vmsnapshot-protection generation: 5 name: mysnap namespace: default resourceVersion: \"3897\" selfLink: /apis/snapshot.kubevirt.io/v1beta1/namespaces/default/virtualmachinesnapshots/my-vmsnapshot uid: 28eedf08-5d6a-42c1-969c-2eda58e2a78d spec: source: apiGroup: kubevirt.io kind: VirtualMachine name: my-vm status: conditions: - lastProbeTime: null lastTransitionTime: \"2020-09-30T14:42:03Z\" reason: Operation complete status: \"False\" 1 type: Progressing - lastProbeTime: null lastTransitionTime: \"2020-09-30T14:42:03Z\" reason: Operation complete status: \"True\" 2 type: Ready creationTime: \"2020-09-30T14:42:03Z\" readyToUse: true 3 sourceUID: 355897f3-73a0-4ec4-83d3-3c2df9486f4f virtualMachineSnapshotContentName: vmsnapshot-content-28eedf08-5d6a-42c1-969c-2eda58e2a78d 4 indications: 5 - Online includedVolumes: 6 - name: rootdisk kind: PersistentVolumeClaim namespace: default - name: datadisk1 kind: DataVolume namespace: default", "apiVersion: snapshot.kubevirt.io/v1beta1 kind: VirtualMachineRestore metadata: name: <vm_restore> spec: target: apiGroup: kubevirt.io kind: VirtualMachine name: <vm_name> virtualMachineSnapshotName: <snapshot_name>", "oc create -f <vm_restore>.yaml", "oc get vmrestore <vm_restore>", "apiVersion: snapshot.kubevirt.io/v1beta1 kind: VirtualMachineRestore metadata: creationTimestamp: \"2020-09-30T14:46:27Z\" generation: 5 name: my-vmrestore namespace: default ownerReferences: - apiVersion: kubevirt.io/v1 blockOwnerDeletion: true controller: true kind: VirtualMachine name: my-vm uid: 355897f3-73a0-4ec4-83d3-3c2df9486f4f resourceVersion: \"5512\" selfLink: /apis/snapshot.kubevirt.io/v1beta1/namespaces/default/virtualmachinerestores/my-vmrestore uid: 71c679a8-136e-46b0-b9b5-f57175a6a041 spec: target: apiGroup: kubevirt.io kind: 
VirtualMachine name: my-vm virtualMachineSnapshotName: my-vmsnapshot status: complete: true 1 conditions: - lastProbeTime: null lastTransitionTime: \"2020-09-30T14:46:28Z\" reason: Operation complete status: \"False\" 2 type: Progressing - lastProbeTime: null lastTransitionTime: \"2020-09-30T14:46:28Z\" reason: Operation complete status: \"True\" 3 type: Ready deletedDataVolumes: - test-dv1 restoreTime: \"2020-09-30T14:46:28Z\" restores: - dataVolumeName: restore-71c679a8-136e-46b0-b9b5-f57175a6a041-datavolumedisk1 persistentVolumeClaim: restore-71c679a8-136e-46b0-b9b5-f57175a6a041-datavolumedisk1 volumeName: datavolumedisk1 volumeSnapshotName: vmsnapshot-28eedf08-5d6a-42c1-969c-2eda58e2a78d-volume-datavolumedisk1", "oc delete vmsnapshot <snapshot_name>", "oc get vmsnapshot", "apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp 1 spec: configuration: velero: defaultPlugins: - kubevirt 2 - gcp 3 - csi 4 - openshift 5 resourceTimeout: 10m 6 nodeAgent: 7 enable: true 8 uploaderType: kopia 9 podConfig: nodeSelector: <node_selector> 10 backupLocations: - velero: provider: gcp 11 default: true credential: key: cloud name: <default_secret> 12 objectStorage: bucket: <bucket_name> 13 prefix: <prefix> 14", "oc get all -n openshift-adp", "NAME READY STATUS RESTARTS AGE pod/oadp-operator-controller-manager-67d9494d47-6l8z8 2/2 Running 0 2m8s pod/node-agent-9cq4q 1/1 Running 0 94s pod/node-agent-m4lts 1/1 Running 0 94s pod/node-agent-pv4kr 1/1 Running 0 95s pod/velero-588db7f655-n842v 1/1 Running 0 95s NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/oadp-operator-controller-manager-metrics-service ClusterIP 172.30.70.140 <none> 8443/TCP 2m8s service/openshift-adp-velero-metrics-svc ClusterIP 172.30.10.0 <none> 8085/TCP 8h NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE daemonset.apps/node-agent 3 3 3 3 3 <none> 96s NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/oadp-operator-controller-manager 1/1 1 1 2m9s deployment.apps/velero 1/1 1 1 96s NAME DESIRED CURRENT READY AGE replicaset.apps/oadp-operator-controller-manager-67d9494d47 1 1 1 2m9s replicaset.apps/velero-588db7f655 1 1 1 96s", "oc get dpa dpa-sample -n openshift-adp -o jsonpath='{.status}'", "{\"conditions\":[{\"lastTransitionTime\":\"2023-10-27T01:23:57Z\",\"message\":\"Reconcile complete\",\"reason\":\"Complete\",\"status\":\"True\",\"type\":\"Reconciled\"}]}", "oc get backupstoragelocations.velero.io -n openshift-adp", "NAME PHASE LAST VALIDATED AGE DEFAULT dpa-sample-1 Available 1s 3d16h true" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html-single/virtualization/index
Chapter 2. Performing Cache Operations with the Data Grid CLI
Chapter 2. Performing Cache Operations with the Data Grid CLI Use the command line interface (CLI) to perform operations on remote caches such as creating caches, manipulating data, and rebalancing. 2.1. Creating remote caches with the Data Grid CLI Use the Data Grid Command Line Interface (CLI) to add remote caches on Data Grid Server. Prerequisites Create a Data Grid user with admin permissions. Start at least one Data Grid Server instance. Have a Data Grid cache configuration. Procedure Start the CLI. Run the connect command and enter your username and password when prompted. Use the create cache command to create remote caches. For example, create a cache named "mycache" from a file named mycache.xml as follows: Verification List all remote caches with the ls command. View cache configuration with the describe command. 2.1.1. Cache configuration You can create declarative cache configuration in XML, JSON, and YAML format. All declarative caches must conform to the Data Grid schema. Configuration in JSON format must follow the structure of an XML configuration, elements correspond to objects and attributes correspond to fields. Important Data Grid restricts characters to a maximum of 255 for a cache name or a cache template name. If you exceed this character limit, Data Grid throws an exception. Write succinct cache names and cache template names. Important A file system might set a limitation for the length of a file name, so ensure that a cache's name does not exceed this limitation. If a cache name exceeds a file system's naming limitation, general operations or initialing operations towards that cache might fail. Write succinct file names. Distributed caches XML <distributed-cache owners="2" segments="256" capacity-factor="1.0" l1-lifespan="5000" mode="SYNC" statistics="true"> <encoding media-type="application/x-protostream"/> <locking isolation="REPEATABLE_READ"/> <transaction mode="FULL_XA" locking="OPTIMISTIC"/> <expiration lifespan="5000" max-idle="1000" /> <memory max-count="1000000" when-full="REMOVE"/> <indexing enabled="true" storage="local-heap"> <index-reader refresh-interval="1000"/> <indexed-entities> <indexed-entity>org.infinispan.Person</indexed-entity> </indexed-entities> </indexing> <partition-handling when-split="ALLOW_READ_WRITES" merge-policy="PREFERRED_NON_NULL"/> <persistence passivation="false"> <!-- Persistent storage configuration. 
--> </persistence> </distributed-cache> JSON { "distributed-cache": { "mode": "SYNC", "owners": "2", "segments": "256", "capacity-factor": "1.0", "l1-lifespan": "5000", "statistics": "true", "encoding": { "media-type": "application/x-protostream" }, "locking": { "isolation": "REPEATABLE_READ" }, "transaction": { "mode": "FULL_XA", "locking": "OPTIMISTIC" }, "expiration" : { "lifespan" : "5000", "max-idle" : "1000" }, "memory": { "max-count": "1000000", "when-full": "REMOVE" }, "indexing" : { "enabled" : true, "storage" : "local-heap", "index-reader" : { "refresh-interval" : "1000" }, "indexed-entities": [ "org.infinispan.Person" ] }, "partition-handling" : { "when-split" : "ALLOW_READ_WRITES", "merge-policy" : "PREFERRED_NON_NULL" }, "persistence" : { "passivation" : false } } } YAML distributedCache: mode: "SYNC" owners: "2" segments: "256" capacityFactor: "1.0" l1Lifespan: "5000" statistics: "true" encoding: mediaType: "application/x-protostream" locking: isolation: "REPEATABLE_READ" transaction: mode: "FULL_XA" locking: "OPTIMISTIC" expiration: lifespan: "5000" maxIdle: "1000" memory: maxCount: "1000000" whenFull: "REMOVE" indexing: enabled: "true" storage: "local-heap" indexReader: refreshInterval: "1000" indexedEntities: - "org.infinispan.Person" partitionHandling: whenSplit: "ALLOW_READ_WRITES" mergePolicy: "PREFERRED_NON_NULL" persistence: passivation: "false" # Persistent storage configuration. Replicated caches XML <replicated-cache segments="256" mode="SYNC" statistics="true"> <encoding media-type="application/x-protostream"/> <locking isolation="REPEATABLE_READ"/> <transaction mode="FULL_XA" locking="OPTIMISTIC"/> <expiration lifespan="5000" max-idle="1000" /> <memory max-count="1000000" when-full="REMOVE"/> <indexing enabled="true" storage="local-heap"> <index-reader refresh-interval="1000"/> <indexed-entities> <indexed-entity>org.infinispan.Person</indexed-entity> </indexed-entities> </indexing> <partition-handling when-split="ALLOW_READ_WRITES" merge-policy="PREFERRED_NON_NULL"/> <persistence passivation="false"> <!-- Persistent storage configuration. --> </persistence> </replicated-cache> JSON { "replicated-cache": { "mode": "SYNC", "segments": "256", "statistics": "true", "encoding": { "media-type": "application/x-protostream" }, "locking": { "isolation": "REPEATABLE_READ" }, "transaction": { "mode": "FULL_XA", "locking": "OPTIMISTIC" }, "expiration" : { "lifespan" : "5000", "max-idle" : "1000" }, "memory": { "max-count": "1000000", "when-full": "REMOVE" }, "indexing" : { "enabled" : true, "storage" : "local-heap", "index-reader" : { "refresh-interval" : "1000" }, "indexed-entities": [ "org.infinispan.Person" ] }, "partition-handling" : { "when-split" : "ALLOW_READ_WRITES", "merge-policy" : "PREFERRED_NON_NULL" }, "persistence" : { "passivation" : false } } } YAML replicatedCache: mode: "SYNC" segments: "256" statistics: "true" encoding: mediaType: "application/x-protostream" locking: isolation: "REPEATABLE_READ" transaction: mode: "FULL_XA" locking: "OPTIMISTIC" expiration: lifespan: "5000" maxIdle: "1000" memory: maxCount: "1000000" whenFull: "REMOVE" indexing: enabled: "true" storage: "local-heap" indexReader: refreshInterval: "1000" indexedEntities: - "org.infinispan.Person" partitionHandling: whenSplit: "ALLOW_READ_WRITES" mergePolicy: "PREFERRED_NON_NULL" persistence: passivation: "false" # Persistent storage configuration. 
Multiple caches XML <infinispan xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="urn:infinispan:config:15.0 https://infinispan.org/schemas/infinispan-config-15.0.xsd urn:infinispan:server:15.0 https://infinispan.org/schemas/infinispan-server-15.0.xsd" xmlns="urn:infinispan:config:15.0" xmlns:server="urn:infinispan:server:15.0"> <cache-container name="default" statistics="true"> <distributed-cache name="mycacheone" mode="ASYNC" statistics="true"> <encoding media-type="application/x-protostream"/> <expiration lifespan="300000"/> <memory max-size="400MB" when-full="REMOVE"/> </distributed-cache> <distributed-cache name="mycachetwo" mode="SYNC" statistics="true"> <encoding media-type="application/x-protostream"/> <expiration lifespan="300000"/> <memory max-size="400MB" when-full="REMOVE"/> </distributed-cache> </cache-container> </infinispan> JSON { "infinispan" : { "cache-container" : { "name" : "default", "statistics" : "true", "caches" : { "mycacheone" : { "distributed-cache" : { "mode": "ASYNC", "statistics": "true", "encoding": { "media-type": "application/x-protostream" }, "expiration" : { "lifespan" : "300000" }, "memory": { "max-size": "400MB", "when-full": "REMOVE" } } }, "mycachetwo" : { "distributed-cache" : { "mode": "SYNC", "statistics": "true", "encoding": { "media-type": "application/x-protostream" }, "expiration" : { "lifespan" : "300000" }, "memory": { "max-size": "400MB", "when-full": "REMOVE" } } } } } } } YAML infinispan: cacheContainer: name: "default" statistics: "true" caches: mycacheone: distributedCache: mode: "ASYNC" statistics: "true" encoding: mediaType: "application/x-protostream" expiration: lifespan: "300000" memory: maxSize: "400MB" whenFull: "REMOVE" mycachetwo: distributedCache: mode: "SYNC" statistics: "true" encoding: mediaType: "application/x-protostream" expiration: lifespan: "300000" memory: maxSize: "400MB" whenFull: "REMOVE" Additional resources Data Grid configuration schema reference infinispan-config-15.0.xsd 2.2. Modifying Data Grid cache configuration Make changes to your remote cache configuration with the Data Grid CLI. You can modify attributes in your cache configuration either one at a time or provide a cache configuration in XML, JSON or YAML format to modify several attributes at once. Prerequisites Create at least one remote cache on your Data Grid cluster. Procedure Create a CLI connection to Data Grid. Modify the cache configuration with the alter command in one of the following ways: Use the --file option to specify a configuration file with one or more attribute modifications. Use the --attribute and --value option to modify a specific configuration attribute. Tip For more information and examples, run the help alter command. Verify your changes with the describe command, for example: 2.3. Adding Cache Entries Create key:value pair entries in the data container. Prerequisites Create a Data Grid cache that can store your data. Procedure Create a CLI connection to Data Grid. Add entries into your cache as follows: Use the --cache= with the put command: Use the put command from the context of a cache: Use the get command to verify entries. 2.4. Clearing Caches and Deleting Entries Remove data from caches with the Data Grid CLI. Procedure Create a CLI connection to Data Grid. Do one of the following: Delete all entries with the clearcache command. Remove specific entries with the remove command. 2.5. Deleting Caches Drop caches to remove them and delete all data they contain. 
Procedure Create a CLI connection to Data Grid. Remove caches with the drop command. 2.6. Configuring Automatic Cache Rebalancing By default, Data Grid automatically rebalances caches as nodes join and leave the cluster. You can configure automatic cache rebalancing by disabling or enabling it at the Cache Manager level or on a per-cache basis. Procedure Create a CLI connection to Data Grid. Disable automatic rebalancing for all caches with the rebalance disable command. Enable automatic rebalancing for a specific cache with the rebalance enable command. The following example enables rebalancing for the cache named "mycache" only. Re-enable automatic rebalancing for all caches. For more information about the rebalance command, run help rebalance. 2.7. Setting a Stable Topology By default, after a cluster shutdown, Data Grid waits for all nodes to join the cluster and restore the topology. However, Data Grid provides a CLI command to mark the current topology as stable for a specific cache. Important The command does not operate on internal caches. Caches that require access to the internal caches lose functionality while members are missing. For example, users cannot upload Protobuf schemas to the internal schema cache when there are missing nodes. Script execution and upload, and distributed locks are similarly affected. Procedure Create a CLI connection to Data Grid. Do one of the following: Set the current topology as stable for the given cache. If the number of nodes missing from the current topology is more than or equal to the number of owners, the force flag is necessary to confirm the operation. For more information about the topology set-stable command, run topology set-stable -h. Important Manually installing a topology can lead to data loss; perform this operation only if the initial topology cannot be recreated.
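Taken together, the operations in this chapter form a short end-to-end CLI session. The following sketch uses only commands shown in this chapter and assumes a running Data Grid Server and a mycache.xml configuration file in the current directory; adjust names and paths for your deployment.

# Start the CLI and connect to the server (you are prompted for credentials)
bin/cli.sh
connect
# Create a remote cache from a declarative configuration file
create cache --file=mycache.xml mycache
# Verify the cache and inspect its configuration
ls caches
describe caches/mycache
# Add an entry, then remove it
put --cache=mycache hello world
remove --cache=mycache hello
# Remove the cache and all of its data
drop cache mycache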
[ "bin/cli.sh", "create cache --file=mycache.xml mycache", "ls caches mycache", "describe caches/mycache", "<distributed-cache owners=\"2\" segments=\"256\" capacity-factor=\"1.0\" l1-lifespan=\"5000\" mode=\"SYNC\" statistics=\"true\"> <encoding media-type=\"application/x-protostream\"/> <locking isolation=\"REPEATABLE_READ\"/> <transaction mode=\"FULL_XA\" locking=\"OPTIMISTIC\"/> <expiration lifespan=\"5000\" max-idle=\"1000\" /> <memory max-count=\"1000000\" when-full=\"REMOVE\"/> <indexing enabled=\"true\" storage=\"local-heap\"> <index-reader refresh-interval=\"1000\"/> <indexed-entities> <indexed-entity>org.infinispan.Person</indexed-entity> </indexed-entities> </indexing> <partition-handling when-split=\"ALLOW_READ_WRITES\" merge-policy=\"PREFERRED_NON_NULL\"/> <persistence passivation=\"false\"> <!-- Persistent storage configuration. --> </persistence> </distributed-cache>", "{ \"distributed-cache\": { \"mode\": \"SYNC\", \"owners\": \"2\", \"segments\": \"256\", \"capacity-factor\": \"1.0\", \"l1-lifespan\": \"5000\", \"statistics\": \"true\", \"encoding\": { \"media-type\": \"application/x-protostream\" }, \"locking\": { \"isolation\": \"REPEATABLE_READ\" }, \"transaction\": { \"mode\": \"FULL_XA\", \"locking\": \"OPTIMISTIC\" }, \"expiration\" : { \"lifespan\" : \"5000\", \"max-idle\" : \"1000\" }, \"memory\": { \"max-count\": \"1000000\", \"when-full\": \"REMOVE\" }, \"indexing\" : { \"enabled\" : true, \"storage\" : \"local-heap\", \"index-reader\" : { \"refresh-interval\" : \"1000\" }, \"indexed-entities\": [ \"org.infinispan.Person\" ] }, \"partition-handling\" : { \"when-split\" : \"ALLOW_READ_WRITES\", \"merge-policy\" : \"PREFERRED_NON_NULL\" }, \"persistence\" : { \"passivation\" : false } } }", "distributedCache: mode: \"SYNC\" owners: \"2\" segments: \"256\" capacityFactor: \"1.0\" l1Lifespan: \"5000\" statistics: \"true\" encoding: mediaType: \"application/x-protostream\" locking: isolation: \"REPEATABLE_READ\" transaction: mode: \"FULL_XA\" locking: \"OPTIMISTIC\" expiration: lifespan: \"5000\" maxIdle: \"1000\" memory: maxCount: \"1000000\" whenFull: \"REMOVE\" indexing: enabled: \"true\" storage: \"local-heap\" indexReader: refreshInterval: \"1000\" indexedEntities: - \"org.infinispan.Person\" partitionHandling: whenSplit: \"ALLOW_READ_WRITES\" mergePolicy: \"PREFERRED_NON_NULL\" persistence: passivation: \"false\" # Persistent storage configuration.", "<replicated-cache segments=\"256\" mode=\"SYNC\" statistics=\"true\"> <encoding media-type=\"application/x-protostream\"/> <locking isolation=\"REPEATABLE_READ\"/> <transaction mode=\"FULL_XA\" locking=\"OPTIMISTIC\"/> <expiration lifespan=\"5000\" max-idle=\"1000\" /> <memory max-count=\"1000000\" when-full=\"REMOVE\"/> <indexing enabled=\"true\" storage=\"local-heap\"> <index-reader refresh-interval=\"1000\"/> <indexed-entities> <indexed-entity>org.infinispan.Person</indexed-entity> </indexed-entities> </indexing> <partition-handling when-split=\"ALLOW_READ_WRITES\" merge-policy=\"PREFERRED_NON_NULL\"/> <persistence passivation=\"false\"> <!-- Persistent storage configuration. 
--> </persistence> </replicated-cache>", "{ \"replicated-cache\": { \"mode\": \"SYNC\", \"segments\": \"256\", \"statistics\": \"true\", \"encoding\": { \"media-type\": \"application/x-protostream\" }, \"locking\": { \"isolation\": \"REPEATABLE_READ\" }, \"transaction\": { \"mode\": \"FULL_XA\", \"locking\": \"OPTIMISTIC\" }, \"expiration\" : { \"lifespan\" : \"5000\", \"max-idle\" : \"1000\" }, \"memory\": { \"max-count\": \"1000000\", \"when-full\": \"REMOVE\" }, \"indexing\" : { \"enabled\" : true, \"storage\" : \"local-heap\", \"index-reader\" : { \"refresh-interval\" : \"1000\" }, \"indexed-entities\": [ \"org.infinispan.Person\" ] }, \"partition-handling\" : { \"when-split\" : \"ALLOW_READ_WRITES\", \"merge-policy\" : \"PREFERRED_NON_NULL\" }, \"persistence\" : { \"passivation\" : false } } }", "replicatedCache: mode: \"SYNC\" segments: \"256\" statistics: \"true\" encoding: mediaType: \"application/x-protostream\" locking: isolation: \"REPEATABLE_READ\" transaction: mode: \"FULL_XA\" locking: \"OPTIMISTIC\" expiration: lifespan: \"5000\" maxIdle: \"1000\" memory: maxCount: \"1000000\" whenFull: \"REMOVE\" indexing: enabled: \"true\" storage: \"local-heap\" indexReader: refreshInterval: \"1000\" indexedEntities: - \"org.infinispan.Person\" partitionHandling: whenSplit: \"ALLOW_READ_WRITES\" mergePolicy: \"PREFERRED_NON_NULL\" persistence: passivation: \"false\" # Persistent storage configuration.", "<infinispan xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\"urn:infinispan:config:15.0 https://infinispan.org/schemas/infinispan-config-15.0.xsd urn:infinispan:server:15.0 https://infinispan.org/schemas/infinispan-server-15.0.xsd\" xmlns=\"urn:infinispan:config:15.0\" xmlns:server=\"urn:infinispan:server:15.0\"> <cache-container name=\"default\" statistics=\"true\"> <distributed-cache name=\"mycacheone\" mode=\"ASYNC\" statistics=\"true\"> <encoding media-type=\"application/x-protostream\"/> <expiration lifespan=\"300000\"/> <memory max-size=\"400MB\" when-full=\"REMOVE\"/> </distributed-cache> <distributed-cache name=\"mycachetwo\" mode=\"SYNC\" statistics=\"true\"> <encoding media-type=\"application/x-protostream\"/> <expiration lifespan=\"300000\"/> <memory max-size=\"400MB\" when-full=\"REMOVE\"/> </distributed-cache> </cache-container> </infinispan>", "{ \"infinispan\" : { \"cache-container\" : { \"name\" : \"default\", \"statistics\" : \"true\", \"caches\" : { \"mycacheone\" : { \"distributed-cache\" : { \"mode\": \"ASYNC\", \"statistics\": \"true\", \"encoding\": { \"media-type\": \"application/x-protostream\" }, \"expiration\" : { \"lifespan\" : \"300000\" }, \"memory\": { \"max-size\": \"400MB\", \"when-full\": \"REMOVE\" } } }, \"mycachetwo\" : { \"distributed-cache\" : { \"mode\": \"SYNC\", \"statistics\": \"true\", \"encoding\": { \"media-type\": \"application/x-protostream\" }, \"expiration\" : { \"lifespan\" : \"300000\" }, \"memory\": { \"max-size\": \"400MB\", \"when-full\": \"REMOVE\" } } } } } } }", "infinispan: cacheContainer: name: \"default\" statistics: \"true\" caches: mycacheone: distributedCache: mode: \"ASYNC\" statistics: \"true\" encoding: mediaType: \"application/x-protostream\" expiration: lifespan: \"300000\" memory: maxSize: \"400MB\" whenFull: \"REMOVE\" mycachetwo: distributedCache: mode: \"SYNC\" statistics: \"true\" encoding: mediaType: \"application/x-protostream\" expiration: lifespan: \"300000\" memory: maxSize: \"400MB\" whenFull: \"REMOVE\"", "describe caches/mycache", "put --cache=mycache hello world", 
"[//containers/default/caches/mycache]> put hello world", "[//containers/default/caches/mycache]> get hello world", "clearcache mycache", "remove --cache=mycache hello", "drop cache mycache", "rebalance disable", "rebalance enable caches/mycache", "rebalance enable", "topology set-stable cacheName", "topology set-stable cacheName -f" ]
https://docs.redhat.com/en/documentation/red_hat_data_grid/8.5/html/using_the_data_grid_command_line_interface/cache-operations
23.7. Memory Tuning
23.7. Memory Tuning <domain> ... <memtune> <hard_limit unit='G'>1</hard_limit> <soft_limit unit='M'>128</soft_limit> <swap_hard_limit unit='G'>2</swap_hard_limit> <min_guarantee unit='bytes'>67108864</min_guarantee> </memtune> ... </domain> Figure 23.9. Memory tuning Although <memtune> is optional, the components of this section of the domain XML are as follows: Table 23.5. Memory tuning elements Element Description <memtune> Provides details regarding the memory tunable parameters for the domain. If this is omitted, the defaults provided by the operating system apply. As parameters are applied to the process as a whole, when setting limits, determine values by adding the guest virtual machine RAM to the guest virtual machine video RAM, allowing for some memory overhead. For each tunable, it is possible to designate which unit the number is in on input, using the same values as for <memory> . For backwards compatibility, output is always in kibibytes (KiB). <hard_limit> The maximum memory the guest virtual machine can use. This value is expressed in kibibytes (blocks of 1024 bytes). <soft_limit> The memory limit to enforce during memory contention. This value is expressed in kibibytes (blocks of 1024 bytes). <swap_hard_limit> The maximum memory plus swap the guest virtual machine can use. This value is expressed in kibibytes (blocks of 1024 bytes). This must be more than the <hard_limit> value. <min_guarantee> The guaranteed minimum memory allocation for the guest virtual machine. This value is expressed in kibibytes (blocks of 1024 bytes).
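The same limits can also be queried and adjusted at runtime with the virsh memtune command, which operates on the <memtune> values shown above. The following is an illustrative sketch only: the domain name rhel7-guest is a placeholder, and sizes passed without a unit suffix are interpreted as kibibytes.
# Show the current memory tuning parameters for a guest (values reported in KiB).
virsh memtune rhel7-guest
# Set a 1 GiB hard limit and a 128 MiB soft limit in the persistent configuration.
virsh memtune rhel7-guest --hard-limit 1048576 --soft-limit 131072 --config
# Cap memory plus swap at 2 GiB for the running guest.
virsh memtune rhel7-guest --swap-hard-limit 2097152 --live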
[ "<domain> <memtune> <hard_limit unit='G'>1</hard_limit> <soft_limit unit='M'>128</soft_limit> <swap_hard_limit unit='G'>2</swap_hard_limit> <min_guarantee unit='bytes'>67108864</min_guarantee> </memtune> </domain>" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_deployment_and_administration_guide/sect-manipulating_the_domain_xml-memory_tuning
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.18/html/managing_and_allocating_storage_resources/making-open-source-more-inclusive
14.3. The (Non-transactional) CarMart Quickstart Using JBoss EAP
14.3. The (Non-transactional) CarMart Quickstart Using JBoss EAP The CarMart (non-transactional) quickstart is supported for JBoss Data Grid's Library Mode with the JBoss EAP container. 14.3.1. Quickstart Prerequisites The prerequisites for this quickstart are as follows: Java 6.0 (Java SDK 1.6) or better JBoss Enterprise Application Platform 6.x or JBoss Enterprise Web Server 2.x Maven 3.0 or better Configure the Maven Repository. For details, see Chapter 3, Install and Use the Maven Repositories 14.3.2. Build and Deploy the CarMart Quickstart to JBoss EAP The following procedure provides directions to build and deploy the CarMart application to JBoss EAP. Prerequisites Prerequisites for this procedure are as follows: Obtain the supported JBoss Data Grid Library Mode distribution files. Ensure that the JBoss Data Grid and JBoss Enterprise Application Platform Maven repositories are installed and configured. For details, see Chapter 3, Install and Use the Maven Repositories Select a JBoss server to use (JBoss Enterprise Application Platform 6 or later). Procedure 14.1. Build and Deploy CarMart to JBoss EAP Start JBoss EAP Depending on your operating system, use the appropriate command from the following to start the first instance of your selected application server: For Linux users: For Windows users: Navigate to the Root Directory Open a command line and navigate to the root directory of this quickstart. Build and Deploy the Application Use the following command to build and deploy the application using Maven: Result The target war file ( target/jboss-carmart.war ) is deployed to the running instance of JBoss EAP. 14.3.3. View the CarMart Quickstart on JBoss EAP The following procedure outlines how to view the CarMart quickstart on JBoss EAP: Prerequisite The CarMart quickstart must be built and deployed to be viewed. Procedure 14.2. View the CarMart Quickstart on JBoss EAP To view the application, use your browser to navigate to the following link: 14.3.4. Remove the CarMart Quickstart from JBoss EAP The following procedure provides directions to remove a deployed application from JBoss EAP. Procedure 14.3. Remove an Application from JBoss EAP To remove an application, use the following from the root directory of this quickstart:
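For convenience, the steps above can be chained from a shell. This is an illustrative sketch only: it assumes JBoss EAP is installed at $JBOSS_HOME, that the quickstart's root directory is the current directory, and that the server listens on the default port 8080.
# Start JBoss EAP (Linux) and wait until it has finished booting before deploying.
$JBOSS_HOME/bin/standalone.sh &
# Build the quickstart and deploy target/jboss-carmart.war to the running server.
mvn clean package jboss-as:deploy
# Confirm the application responds at the URL shown above.
curl -I http://localhost:8080/jboss-carmart
# Remove the deployment when finished.
mvn jboss-as:undeploy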
[ "USDJBOSS_HOME/bin/standalone.sh", "USDJBOSS_HOME\\bin\\standalone.bat", "mvn clean package jboss-as:deploy", "http://localhost:8080/jboss-carmart", "mvn jboss-as:undeploy" ]
https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/getting_started_guide/sect-The_Non-transactional_CarMart_Quickstart_Using_JBoss_EAP
Chapter 4. Deploy standalone Multicloud Object Gateway in internal mode
Chapter 4. Deploy standalone Multicloud Object Gateway in internal mode Deploying only the Multicloud Object Gateway component with the OpenShift Data Foundation provides the flexibility in deployment and helps to reduce the resource consumption. Use this section to deploy only the standalone Multicloud Object Gateway component in internal mode, which involves the following steps: Installing Red Hat OpenShift Data Foundation Operator Creating standalone Multicloud Object Gateway Note Deploying standalone Multicloud Object Gateway component is not supported in external mode deployments. 4.1. Installing Red Hat OpenShift Data Foundation Operator You can install Red Hat OpenShift Data Foundation Operator using the Red Hat OpenShift Container Platform Operator Hub. Prerequisites Access to an OpenShift Container Platform cluster using an account with cluster-admin and operator installation permissions. You must have at least three worker or infrastructure nodes in the Red Hat OpenShift Container Platform cluster. Each node should include one disk and requires 3 disks (PVs). However, one PV remains eventually unused by default. This is an expected behavior. For additional resource requirements, see the Planning your deployment guide. Important When you need to override the cluster-wide default node selector for OpenShift Data Foundation, you can use the following command to specify a blank node selector for the openshift-storage namespace (create openshift-storage namespace in this case): Taint a node as infra to ensure only Red Hat OpenShift Data Foundation resources are scheduled on that node. This helps you save on subscription costs. For more information, see the How to use dedicated worker nodes for Red Hat OpenShift Data Foundation section in the Managing and Allocating Storage Resources guide. Procedure Log in to the OpenShift Web Console. Click Operators OperatorHub . Scroll or type OpenShift Data Foundation into the Filter by keyword box to find the OpenShift Data Foundation Operator. Click Install . Set the following options on the Install Operator page: Update Channel as stable-4.13 . Installation Mode as A specific namespace on the cluster . Installed Namespace as Operator recommended namespace openshift-storage . If Namespace openshift-storage does not exist, it is created during the operator installation. Select Approval Strategy as Automatic or Manual . If you select Automatic updates, then the Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without any intervention. If you select Manual updates, then the OLM creates an update request. As a cluster administrator, you must then manually approve that update request to update the Operator to a newer version. Ensure that the Enable option is selected for the Console plugin . Click Install . Verification steps After the operator is successfully installed, a pop-up with a message, Web console update is available appears on the user interface. Click Refresh web console from this pop-up for the console changes to reflect. In the Web Console: Navigate to Installed Operators and verify that the OpenShift Data Foundation Operator shows a green tick indicating successful installation. Navigate to Storage and verify if the Data Foundation dashboard is available. 4.2. Creating a standalone Multicloud Object Gateway You can create only the standalone Multicloud Object Gateway component while deploying OpenShift Data Foundation. 
Prerequisites Ensure that the OpenShift Data Foundation Operator is installed. Procedure In the OpenShift Web Console, click Operators Installed Operators to view all the installed operators. Ensure that the Project selected is openshift-storage . Click OpenShift Data Foundation operator and then click Create StorageSystem . In the Backing storage page, select the following: Select Multicloud Object Gateway for Deployment type . Select the Use an existing StorageClass option. Click Next . Optional: Select the Connect to an external key management service checkbox. This is optional for cluster-wide encryption. From the Key Management Service Provider drop-down list, either select Vault or Thales CipherTrust Manager (using KMIP) . If you selected Vault , go to the next step. If you selected Thales CipherTrust Manager (using KMIP) , go to step iii. Select an Authentication Method . Using Token authentication method Enter a unique Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Token . Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Vault Enterprise Namespace . Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save and skip to step iv. Using Kubernetes authentication method Enter a unique Vault Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Role name. Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Authentication Path if applicable. Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save and skip to step iv. To use Thales CipherTrust Manager (using KMIP) as the KMS provider, follow the steps below: Enter a unique Connection Name for the Key Management service within the project. In the Address and Port sections, enter the IP of Thales CipherTrust Manager and the port where the KMIP interface is enabled. For example: Address : 123.34.3.2 Port : 5696 Upload the Client Certificate , CA certificate , and Client Private Key . If StorageClass encryption is enabled, enter the Unique Identifier to be used for encryption and decryption generated above. The TLS Server field is optional and used when there is no DNS entry for the KMIP endpoint. For example, kmip_all_<port>.ciphertrustmanager.local . Select a Network . Click Next . In the Review and create page, review the configuration details: To modify any configuration settings, click Back . Click Create StorageSystem . Verification steps Verifying that the OpenShift Data Foundation cluster is healthy In the OpenShift Web Console, click Storage Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. In the Status card of the Object tab, verify that both Object Service and Data Resiliency have a green tick. In the Details card, verify that the MCG information is displayed. Verifying the state of the pods Click Workloads Pods from the OpenShift Web Console. 
Select openshift-storage from the Project drop-down list and verify that the following pods are in Running state. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. Component Corresponding pods OpenShift Data Foundation Operator ocs-operator-* (1 pod on any storage node) ocs-metrics-exporter-* (1 pod on any storage node) odf-operator-controller-manager-* (1 pod on any storage node) odf-console-* (1 pod on any storage node) csi-addons-controller-manager-* (1 pod on any storage node) Rook-ceph Operator rook-ceph-operator-* (1 pod on any storage node) Multicloud Object Gateway noobaa-operator-* (1 pod on any storage node) noobaa-core-* (1 pod on any storage node) noobaa-db-pg-* (1 pod on any storage node) noobaa-endpoint-* (1 pod on any storage node)
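The same checks can be performed from the command line. The following sketch is illustrative only; it assumes the oc client is logged in to the cluster and that the default openshift-storage namespace is used (pod name suffixes vary per cluster).
# Confirm the OpenShift Data Foundation operator installation succeeded.
oc get csv -n openshift-storage
# Verify that the operator and Multicloud Object Gateway pods are in Running state.
oc get pods -n openshift-storage
# Optionally narrow the output to the NooBaa (MCG) pods.
oc get pods -n openshift-storage | grep noobaa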
[ "oc annotate namespace openshift-storage openshift.io/node-selector=" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.13/html/deploying_and_managing_openshift_data_foundation_using_red_hat_openstack_platform/deploy-standalone-multicloud-object-gateway
Chapter 4. Creating a test plan
Chapter 4. Creating a test plan 4.1. Overview of test plan A hardware certification engineer creates a test plan by following these steps: Define the model by its specification. Determine the option. Remove unsupported operating system features and unintentional hardware. Apply the minimum test set criteria. Add the install, boot, and kdump requirements. Add additional policy requirements. After performing the steps above, the items remaining determine the test plan for your hardware. The Hardware Catalog records the test plan under the Test Plan Progress . Additional resources For more information about defining the testing required for each hardware class item, see Hardware class requirements . Note Red Hat Hardware Certification Test Plans are not meant to substitute for proper and complete internal quality assurance testing, criteria, and processes. Each vendor is responsible for their own internal shipment criteria and is encouraged to do testing in excess of the required certification test plan items. 4.2. Models The Red Hat Hardware Certification program certifies entire hardware models, rather than specific configurations of models. A model includes all Integrated Hardware and Optional Hardware as described by the Hardware Partner in the hardware specification. 4.2.1. Model names Model names are required to be unique and have a particular hardware specification. The Red Hat Hardware Certification program supports tiered naming schemes. A tiered naming scheme is any naming scheme that includes a hierarchical collection of models and submodels. When employing tiered naming schemes for the purposes of certification the specification is considered to include all submodels which would reasonably be represented by the name provided in the certification request. For example; consider three model names: 3000, 3000a, and 3000s. If 3000 represents the collection that includes the 3000a and 3000s models, submitting 3000 as the model name incorporates the specifications of 3000a and 3000s. Conversely, if 3000s is submitted, the specification is limited to only the hardware detailed in the 3000s specification. In cases where 3000 is a distinct model separate from 3000a and 3000s, the certification will only consider the hardware outlined in the 3000 specification. The published model name may be altered for clarity in certain situations. Any changes should be discussed during the certification process and prior to publication. Such alterations are made at the discretion of Red Hat. 4.3. Specifications To maintain consistency and accuracy in hardware specifications, ensure to provide the same specifications to both Red Hat and customers. Also, follow the listed guidelines to guarantee precise and comprehensive hardware specifications: Provide a publicly accessible URL containing all available specifications for the hardware. This URL should host the finalized, public specifications. Early specifications can be submitted before the product's official launch, even in different formats. However, these early specifications must align with the format and structure of the final customer specifications, as Red Hat will validate them against the finalized public specifications before publication. For any generalized specifications, you must provide precise and detailed information to Red Hat. For example, while mentioning "10gig Ethernet," you must specify the manufacturer such as Broadcom, Intel, Mellanox, or any other specific variant. 
Additionally, we recommend including the specific device model, such as 'Intel 40GbE XL710-QDA'; this helps Red Hat create test plans more efficiently. In cases where generalized specifications can be interpreted in multiple ways or cover a range of possibilities, you must provide clear details to Red Hat. For example, if your hardware can potentially support up to 80 cores, but you intend to offer only 40 cores, Red Hat will consider the 40-core limit for certification purposes. However, if you plan to offer more than 40 cores to customers, a supplemental certification will be necessary before such configurations can be made available. 4.4. Types of options It is important to understand the different types of options that can be associated with a model when certifying hardware. These options help define what components are included and how they impact the certification process. 4.4.1. Integrated hardware Integrated Hardware is the hardware required to be present in all configurations of a model. All integrated hardware components, including CPU options, memory options, integrated graphic controllers, integrated displays, and other non-field-removable hardware within a model, must be tested. This includes features integrated into System-on-Chip (SoC), System-in-Package (SiP), and other fully or partially integrated system solution designs. Specific portions of integrated hardware may be exempted from the certification if they provide features that meet the exclusion criteria specified in the non-OS features and system processors table section. 4.4.2. Optional hardware Optional Hardware is hardware that is present in some configurations of a model. Testing of Optional Hardware is not required in the following conditions: The Optional Hardware is field removable. It does not provide a unique function within the model [1] It is clearly indicated for use with another operating system [2] It is clearly marked to disclose any Service Level impacts, as necessary, on either the model specification or the model support URL, and on all materials using the Red Hat Hardware Certification marks in association with the model. 4.4.3. Additional hardware Additional Hardware is hardware that can be purchased in addition to but is not included as part of any configuration of the model and is not required to be tested. Additional Hardware may appear on the model specification but must be clearly identifiable. Check KB articles attached to the certification listing of the particular additional hardware for more information. 4.4.4. Special cases The Hardware policy changes may be used when Optional Hardware or a series of CPUs causes a minor release higher than initially desired during the original certification process. This may allow the testing and posting of the model with the desired release along with the associated Red Hat Knowledge Base Article to reflect the higher release required by the Optional Hardware or CPUs. 4.5. Non-OS features and unintentional features Hardware feature classes not offered by the operating system are not required to be tested if the remaining hardware continues to be fully functional. A Red Hat Knowledge Base Article may be added to the certification listing for clarity. An Unintentional Feature is defined as any feature offered on integrated or optional hardware that is not intentionally included by the hardware partner. This feature must not be mentioned in the hardware specification unless it is called out as not supported. 
Unintentional features cannot be supported by the hardware partner on any OS. Unintentional features are not required to be tested if the remaining hardware continues to be fully functional, even if the provided feature is unique. We recommend that unintentional features are masked from end users where possible, i.e. by disabling or removing features from the BIOS, not providing power, not including connectors, headers, etc. to minimize confusion. A Red Hat Knowledge Base Article may be added for clarity. Changes to unintentional features are considered to be hardware changes and subject to the hardware changes policies and requirements. Unintentional features can also cover items that are not available on all architectures. Example If an Infiniband storage controller were supported by a system vendor on the Intel 64 and AMD64 architecture only, the controller could be considered an unintentional feature for the system's i386 certification. The feature must not be supported on any i386 architecture operating system for the unintentional feature status to be granted. 4.6. Minimum Test Set The Red Hat Hardware Certification program encourages testing with all configurations including the maximum and minimum supported configuration of your hardware. It is also recognized that resourcing these configurations can be difficult due to availability, cost, timing, and other constraints. For these reasons we have defined a minimum requirements policy by hardware class in the Hardware class requirements . This policy may be used in combination with Component , Component leveraging pool and Component Pass-Through certifications . The minimum testing requirements are not intended as product release criteria and it is expected that internal Red Hat Enterprise Linux and other Red Hat product interoperability and qualification testing is conducted in addition to and prior to certification testing. Warning All hardware used during testing is required to be part of the model specification. Similar hardware that might otherwise qualify as part of the minimum test set if it were part of the model is not accepted. For example, only those CPUs which appear in the model specification may be used. Results from other members of the same CPU product family are not accepted. The maximum supported limits for Red Hat Enterprise Linux are defined at https://access.redhat.com/articles/rhel-limits . 4.7. Installation, Boot, and Kdump requirements The installation of Red Hat Enterprise Linux may require testing via a number of mediums (Optical Media and Network for example). Additionally, all boot devices must be tested to ensure a successful boot of Red Hat Enterprise Linux. The Hardware class requirements table shows the hardware that requires installation and boot testing. A complete installation is not required to fulfill the boot testing requirement. For increased testing efficiency, Red Hat recommends combining boot and install testing where possible. For example, booting from the Red Hat Enterprise Linux installation media on a CD and performing a full installation fulfills the CD boot and installation testing requirement. Kdump is utilized in the event of a crash to capture the state of the kernel. This feature is enabled by default and must be tested to ensure this critical information can be captured properly to debug issues. Kdump testing is required on an integrated storage controller and an integrated network adapter when these items are available in the model. These requirements apply to all RHEL certifications. 
4.8. Hardware class requirements Hardware Requirements by Class The Hardware Class Requirements are categorized in Compute, Management, Network, and Storage. 4.8.1. Compute The hardware features that are included in Compute are: Table 4.1, "System Processors" Table 4.2, "System Memory" Table 4.3, "System Elements" Table 4.4, "Sound" Table 4.5, "Thunderbolt Ports" Table 4.6, "USB Ports" Table 4.1. System Processors Hardware Class Catalog Features Required Tests Required Hardware Install, Boot, kdump System Processors, System-on-Chip (SoC), System-in-Package (SiP) Maximum Logical Cores CORE Maximum number of logical cores [a] and feature set from available CPUs. Install, Boot CPU Frequency Control CPUSCALING, INTEL_SST [b] , or POWER_STOP [c] Maximum number of logical cores [d] and feature set from available CPUs. HW_PROFILER or SW_PROFILER Maximum number of logical cores [e] and feature set from available CPUs. Realtime System REALTIME REALTIME [f] Maximum number of logical cores and feature set from available CPUs with the realtime kernel. [g] System Virtualization SUPPORTABLE and CORE and MEMORY on the guest SUPPORTABLE and CORE and MEMORY on the guest FV_CORE and FV_MEMORY Run on a fully virtualized guest environment. Run on the host machine. Advanced System Virtualization [h] CPU Pinning, FV_CPU_PINNING, Run on a fully virtualized guest environment. Pass-Through Storage, PCIE Pass-Through Storage, USB Pass-Through Network, PCIE Pass-Through Network, Virtual Machine Live Migration FV_USB_STORAGE_PASSTHROUGH, FV_PCIE_STORAGE_PASSTHROUGH, FV_USB_NETWORK_PASSTHROUGH, FV_PCIE_NETWORK_PASSTHROUGH, and fv_live_migration [i] Run on the host machine that has IOMMU enabled. [a] The Core clock speed, FSB speed, cache size, cache depth and manufacturing size are not considered for feature set review. [b] Available in RHEL versions 8.3 and later [c] The Core clock speed, FSB speed, cache size, cache depth and manufacturing size are not considered for feature set review. [d] The Core clock speed, FSB speed, cache size, cache depth and manufacturing size are not considered for feature set review. [e] The Core clock speed, FSB speed, cache size, cache depth and manufacturing size are not considered for feature set review. [f] These tests are used to certify the Red Hat (Red Hat Enterprise Linux for Real-Time and Red Hat OpenStack Platform for Real-Time Applications) products. [g] The memory per CPU core check has been added as per the RHEL minimum requirement memory standards, as a planning condition for the hardware certification tests namely memory, core, realtime, and all the full-virtualization tests, for RHEL8. If the memory per CPU core check does not pass, the above tests will not be added to the test plans automatically. However, they can be planned manually via CLI. [h] These features appear only on Red Hat Virtualization certifications. [i] Starting with RHEL 9.4, all fv_tests, except fv_live_migration, are supported to run on ARM systems. Leverage Notes: Equal or lesser feature set within a model. Processor/core count downward on scaling designs. Feature set and core count upgrades to existing certifications. Processor upgrades are defined as field installable physical packages and may require field installable BIOS/firmware upgrades Section 3.5.3, "Settings" . Table 4.2. 
System Memory Hardware Class Catalog Features Required Tests Required Hardware Install, Boot, kdump System Memory Maximum supported System memory memory Minimum of 1GB per logical core using the maximum number of logical cores. [a] [b] [c] Install, Boot, Kdump HBM Memory HBM System Memory memory_HBM_only Maximum HBM memory size using the corresponding number of logical Cores [d] Install, Boot, Kdump HBM Cache Memory memory_HBM_cache Maximum HBM memory size using the corresponding number of logical Cores [e] Install, Boot, Kdump HBM Flat Memory memory_HBM_flat Maximum HBM memory size using the corresponding number of logical Cores [f] Install, Boot, Kdump NVDIMM NVDIMM - Memory Mode [g] memory [h] Any supported NVDIMM memory size Install, Boot, Kdump NVDIMM - AppDirect Mode [i] NVDIMM [j] Any supported NVDIMM memory size Install, Boot, Kdump CXL Memory Expansion CXL Memory Expansion [k] memory_CXL Each implementation [l] with maximum supported memory size [m] [n] [o] Install, Boot, Kdump [a] Systems must be available in configurations within the memory requirements listed in the RHEL limits article [b] Additional testing is required when the maximum total memory available across system memory + HBM + NVDIMM + CXL is greater than the maximum memory limit for the architecture listed in the Red Hat Enterprise Linux Technology Capabilities and Limits article [c] Depending on the available system configurations, the required HBM memory testing may need to be conducted separately from regular system memory [d] Depending on the available system configurations, the required HBM memory testing may need to be conducted separately from regular system memory [e] Depending on the available system configurations, the required HBM memory testing may need to be conducted separately from regular system memory [f] Depending on the available system configurations, the required HBM memory testing may need to be conducted separately from regular system memory [g] Available in RHEL versions 8.0 and later [h] Additional EET testing is also required for NVDIMM - Memory Mode [i] Available in RHEL versions 8.0 and later [j] The NVDIMM test utilizes sectors [k] Available in RHEL version 9.3 and later [l] Individual testing is required for each implementation available in a single component or system model [m] Memory sizes includes all embedded or socketed options with the same model name [n] A support matrix for approved memory modules is to be provided to customers by Partners with socketed designs [o] Including physicial devices, virtual devices, and NUMA nodes Leverage Notes: Equal or lesser quantities where RAM type and memory controller match. Leverage Notes for NVDIMM Hardware Class: The storage mode is only for identical implementations with smaller or greater capacity within the OS limits. Table 4.3. System Elements Hardware Class Catalog Features Required Tests Required Hardware Install, Boot, Kdump Mainboard, Chassis, I/O Chassis, Docking Stations, Port Expanders Applicable class for the integrated and optional hardware. Applicable class tests for the integrated and optional hardware. hardware. Applicable test for each function as required by the device class(es) Install, Boot Multi-Function/Multi-Port Adapters Applicable class for each function/port Applicable class testing for each function/port [a] [b] Applicable test for each function as required by the device class(es) Install, Boot [a] Unusable ports need to be tested [b] To create multiple ports on a removable card, identical chips are replicated. 
Leverage may enclose multi-port. Table 4.4. Sound Hardware Class Catalog Features Required Tests Required Hardware Sound Cards Stereo Audio Playback, and Stereo Audio Record Audio Stereo record and playback as applicable HDMI Audio HDMI Audio Playback Audio HDMI Port Leverage Notes: Identical integrated chipsets+codec and removable adapters. Table 4.5. Thunderbolt Ports Hardware Class Catalog Features Required Tests Required Hardware Thunderbolt 3, Thunderbolt 4 Thunderbolt 3, Thunderbolt 4 Thunderbolt 3, Thunderbolt 4 Each port with a device with the equivalent capability hotplug Table 4.6. USB Ports Hardware Class Catalog Features Required Tests Required Hardware USB 2, USB 3 (5 Gigabit), USB C (5 Gigabit), USB 3 (10 Gigabit), USB C (10 Gigabit), USB 3 (20 Gigabit), USB C (20 Gigabit), USB 4 (20 Gigabit), USB 4 (40 Gigabit) USB 2 Ports, USB 3 (5 Gigabit) Ports, USB C (5 Gigabit) Ports, USB 3 (10 Gigabit) Ports, USB C (10 Gigabit) Ports, USB 3 (20 Gigabit) Ports, USB C (20 Gigabit) Ports, USB 4 (20 Gigabit) Ports, USB 4 (40 Gigabit) Ports USB2, USB3, USB3_5Gbps, USB3_10Gbps, USB3_20Gbps, USB4, USB4_20Gbps, or USB4_40Gbps Each port with a device with the equivalent capability hotplug. [a] [a] USB 3.1 gen2 ports that are tested only with the gen1 devices can be certified. 4.8.2. Management The hardware features that are included in Management are: Table 4.7, "Console" Table 4.8, "Power Control" Table 4.9, "Identity Management" Table 4.7. Console Hardware Class Catalog Features Required Tests Required Hardware Install, Boot, kdump Display Adapters, and Virtual Consoles Graphic Console VIDEO The lower of VRAM/VBIOS limits, panel capabilities, or 1024x768 at 24 or 32 BPP Install [a] , Boot Display Adapters Basic GPU Graphics VIDEO_DRM DRM Kernel Module supported graphics controller Display Adapters Accelerated GPU Graphics VIDEO_DRM_3D DRM Kernel Module supported graphics controller + Hardware Acceleration Supported graphics controller Laptop Panels Graphic Console LCD Video [LID] [b] Native resolution [c] [d] at adaptive or native color depths with available display + graphics controller combinations [e] [f] Install LCD backlight control backlight [g] [h] [a] Native resolutions not required during install [b] The backlight must respond to lid switch if present. [c] Compensation/Stretching does not qualify as native resolution for testing. [d] A horizontal resolution of 1360 may be used on 1366 native panels. [e] Optional graphics controllers excluded by other policies are not required to be tested. At least one display + controller combination is required for each display. [f] Display and graphics controller combinations may be clarified in a Red Hat Knowledge Base Article entry to avoid confusion. [g] Backlight test does not support external displays. [h] Available, but not required, in RHEL versions 8.0 and later certifications. Leverage Notes: Identical removable cards or integrated chips without shared memory,processor-integrated. Decreases in video memory. Table 4.8. Power Control Hardware Class Catalog Features Required Tests Required Hardware Power Management, Battery Suspend to Disk, Suspend to Memory, Battery Monitoring Battery, Lid and Suspend Required for all models capable of running from battery power. Table 4.9. Identity Management Hardware Class Catalog Features Required Tests Required Hardware fingerprintreader Fingerprint Reader fingerprintreader Built-in or External fingerprint reader 4.8.3. 
Network The hardware features that are included in Network are: Table 4.10, "Ethernet" Table 4.11, "Fibre Channel" Table 4.12, "Fibre Channel over Ethernet (FCoE)" Table 4.13, "iSCSI" Table 4.14, "Infiniband" Table 4.15, "iWarp" Table 4.16, "Omnipath" Table 4.17, "RDMA over Converged Ethernet (RoCE)" Table 4.18, "WiFi" Table 4.19, "Bluetooth" Table 4.10. Ethernet Hardware Class Catalog Features Required Tests Required Hardware Install, Boot, kdump Ethernet 1 Gigabit Ethernet, 2.5 Gigabit Ethernet, 5 Gigabit Ethernet, 10 Gigabit Ethernet, 20 Gigabit Ethernet, 25 Gigabit Ethernet, 40 Gigabit Ethernet, 50 Gigabit Ethernet, 100 Gigabit Ethernet, 200 Gigabit Ethernet 1GigEthernet, 2.5GigEthernet, 5GigEthernet, 10GigEthernet, 20GigEthernet, 25GigEthernet, 40GigEthernet, 50GigEthernet, 100GigEthernet, 200GigEthernet Each interface at maximum connection speed. [a] Install, Boot, kdump [a] Devices that support network partitioning are required to demonstrate both the complete bandwidth and a single partition in one or more test runs. Leverage Notes: Identical integrated chipsets and removable adapters. Table 4.11. Fibre Channel Hardware Class Catalog Features Required Tests Required Hardware Install, Boot, kdump Fibre Channel 16 Gigabit Fibre Channel, 32 Gigabit Fibre Channel, 64 Gigabit Fibre Channel, 128 Gigabit Fibre Channel Network or Storage [a] Each interface at maximum connection speed Install, Boot, kdump [a] Nominal connection speed is considered a feature. Remote attached storage devices may require additional testing. Leverage Notes: Identical integrated chipsets, removable adapters, drivers, and arrays. Table 4.12. Fibre Channel over Ethernet (FCoE) Hardware Class Catalog Features Required Tests Required Hardware Install, Boot, kdump FCoE adapters FCoE Storage [a] Each interface at the maximum connection speed Install, Boot, kdump [a] Nominal connection speed is considered a feature. Remote attached storage devices may require additional testing. Leverage Notes: Identical integrated chipsets, removable adapters, drivers, and arrays. Table 4.13. iSCSI Hardware Class Catalog Features Required Tests Required Hardware Install, Boot, kdump iSCSI Adapters iSCSI Network and Storage [a] Each interface at maximum connection speed Install, Boot, kdump [a] Nominal connection speed is considered a feature. Remote attached storage devices may require additional testing. Leverage Notes: Identical integrated chipsets, removable adapters, drivers, and arrays. Table 4.14. Infiniband Hardware Class Catalog Features Required Tests Required Hardware Install, Boot, kdump Infiniband [a] QDR Infiniband, FDR Infiniband, EDR Infiniband, HDR Infiniband, Socket Direct Infiniband_QDR, Infiniband_FDR Infiniband_EDR, Infiniband_HDR, Infiniband_Socket_Direct Each interface at maximum connection speed. [b] [c] Install, Boot, kdump [a] Multiple hosts to be connected into a single adapter by separating the PCIe interface into multiple and independent interfaces. [b] Implements a connection in hardware for efficient data delivery with minimal latency. [c] Devices that support network partitioning are required to demonstrate both the complete bandwidth and a single partition in one or more test runs. Leverage Notes: Identical integrated chipsets, removable adapters, drivers, and arrays. Table 4.15. 
iWarp Hardware Class Catalog Features Required Tests Required Hardware iWarp 10 Gigabit iWarp, 20 Gigabit iWarp, 25 Gigabit iWarp, 40 Gigabit iWarp, 50 Gigabit iWarp, 100 Gigabit iWarp, 200 Gigabit iWarp 10GigiWarp, 20GigiWarp, 25GigiWarp, 40GigiWarp, 50GigiWarp, 100GigiWarp, 200GigiWarp Each interface with the corresponding test for the maximum claimed connection speed.footnote: [a] [a] Devices that support network partitioning are required to demonstrate both the complete bandwidth and a single partition in one or more test runs. Leverage Notes: Identical integrated chipsets and removable adapters. Table 4.16. Omnipath Hardware Class Catalog Features Required Tests Required Hardware OmniPath OmniPath OmniPath Each interface with the corresponding test for the maximum claimed connection speed. Leverage Notes: Identical integrated chipsets, processors, and removable adapters. Table 4.17. RDMA over Converged Ethernet (RoCE) Hardware Class Catalog Features Required Tests Required Hardware RoCE 2.5 Gigabit RoCE, 5 Gigabit RoCE, 10 Gigabit RoCE, 20 Gigabit RoCE, 25 Gigabit RoCE, 40 Gigabit RoCE, 50 Gigabit RoCE, 100 Gigabit RoCE, 200 Gigabit RoCE 2.5 GigRoCE, 5 GigRoCE, 10GigRoCE, 20GigRoCE, 25GigRoCE, 40GigRoCE, 50GigRoCE, 100GigRoCE, 200GigRoCE Each interface with the corresponding test for the maximum claimed connection speed. [a] [a] Devices that support network partitioning are required to demonstrate both the complete bandwidth and a single partition in one or more test runs. Leverage Notes: Identical integrated chipsets, processors, and removable adapters. Table 4.18. WiFi Hardware Class Catalog Features Required Tests Required Hardware Install,Boot, kdump Wireless Network, Interface Adapters Wireless N, Wireless AC, USB Wireless N, USB Wireless AC, Wireless G, and USB Wireless G Wireless G, Wireless N, Wireless AC [a] , WiFi6 (Previously WirelessAX) Each interface at maximum connection in N(highest), G, B, A(lowest) order. Install,Boot [a] Red Hat Enterprise Linux 7.0 only supports 802.11ac devices at 802.11n speeds. Results will be accepted from the WirelessN test on 802.11ac devices until an erratum that provides full 802.11ac connection speeds to Red Hat Enterprise Linux 7.0 is available. Leverage Notes: Identical integrated chipsets, processors, and removable adapters. Table 4.19. Bluetooth Hardware Class Catalog Features Required Tests Required Hardware Bluetooth Bluetooth 3.x, Bluetooth 4.x, Bluetooth 5.x BLUETOOTH3, BLUETOOTH4, BLUETOOTH5 Each interface at maximum bluetooth version Leverage Notes: Identical integrated chipsets and removable adapters. 4.8.4. Storage The hardware features that are included in Storage are: Table 4.20, "HBA, HDD, and SDD" Table 4.21, "Tape" Table 4.22, "Memory Cards or Readers" Table 4.23, "Optical" Table 4.20. HBA, HDD, and SDD Hardware Class Catalog Features Required Tests Required Hardware Install, Boot, kdump M.2 NVMe, M.2 SATA, PCIe NVMe, SATA HDD, SATA SSD, SAS [a] , SAS SSD, U.2 NVMe, U.2 SATA, U.3 NVMe, E.3 NVMe M.2 NVMe, M.2 SATA, NVMe, SATA, SATA SSD, SAS, SAS SSD, U.2 NVMe, U.2 SATA, U.3 NVMe, E.3 NVMe M2_NVMe, M2_SATA, NVMe, SATA, SATA_SSD, SAS, SAS SSD, U2_NVMe (PCI Express), U2_SATA, U3_NVMe, E3_NVMe Any capacity [b] drive [c] attached to the controller or the maximum storage capacity of local attach arrays if greater than OS limit Install, Boot, kdump RAID Controllers Storage Storage Each OS code path (e.g. where multiple drivers are used) for each interface. Maximum storage capacity of arrays if greater than OS limit. 
Install, Boot, kdump NVMe over Fabric NVMe over Infiniband, NVMe over iWarp, NVMe over Omnipath, NVMe over RoCE, NVMe over TCP nvme_infiniband, nvme_iwarp, nvme_omnipath, nvme_roce, nvme_tcp An NVMe SSD drive shared from the test server to HUT ethernet controller sized under the maximum storage capacity of the OS limit. [a] SAS Controllers require testing with SAS drives. [b] Drive capacity is not tracked in the context of a system. [c] SSD features require SSD drives to be tested. Leverage Notes: Identical integrated chipsets, removable adapters, drives, and arrays. Leverage Notes for RAID Controllers: Identical integrated chipsets, removable adapters, drives and arrays following type criteria. Reduced RAID levels, changes in memory amounts or battery presence. Table 4.21. Tape Hardware Class Catalog Features Required Tests Required Hardware Tape Drives and Changers [a] Tape drive, Tape changer TAPE Each drive [a] Changers require manual testing with test description and results report Leverage Notes: Identical drives and changers. Internal and external versions of the same drives. Models with the same host interface, hardware and firmware designs including reduced features, capacity, media size and/or total slots and drive count in changers/libraries. Table 4.22. Memory Cards or Readers Hardware Class Catalog Features Required Tests Required Hardware Install, Boot, kdump eMMC, PCIE SD Card Reader, SD Card, USB Flash Key, USB SD Card Reader [a] eMMC, PCIE SD Card Reader, SD Card, USB Flash Key, USB SD Card Reader Storage The maximum storage capacity and format feature set Install,Boot [a] Including variants for each (eg. mini, micro, etc.). Leverage Notes: Identical integrated chipsets, removable adapters. Identical, smaller capacity or feature cards and sticks. Note Multi-Readers follow the Multi-Port Adapter criteria. Table 4.23. Optical Hardware Class Catalog Features Required Tests Required Hardware Install, Boot, kdump CD-ROM drive, DVD drive, or Blu-ray BD-RE, BD-R, Blu-ray, DVD-RW, DVD-R, DVD, CD-RW, CD-R, CD CDROM drive, DVD drive, or BLURAY The highest media type in order of BD-RW (highest), BD-R, DVD-RW [a] , DVD-R, CD-RW, CD-R, BD, DVD, CD (lowest) on each storage controller, based on the collective media support of all drives [b] available on that storage controller Install, Boot [a] "+" and "-" are considered equal for feature review. [b] The hardware partner is required to support all drives that are part of the model regardless of the specific drive or number of drives used during testing. Equivalent production cycle drive changes are required to be tested internally by the hardware partner. The production cycle drive change test results are not required to be submitted to Red Hat Leverage Notes: Drives with identical or lesser media support on the storage controller following the storage controller leveraging policies. 4.9. Additional manual testing The additional manual testing consists of the external storage and multipath HBAs. 4.9.1. 
External storage and multipath HBAs In addition to the base requirements for storage controllers/devices; vendors must verify that their internal quality assurance processes have tested full functionality with Red Hat Enterprise Linux under the following scenarios as appropriate: multi-controllers/single host multi-host/single controller multi-controller/multi-host with/without multi-path with/without LUN masking (i.e., dedicating LUNs to specific hosts) a short cable pull (remove cable and restore prior to failure detection) any special features listed as supported on Red Hat Enterprise Linux Testing result packages are not required to be submitted to Red Hat for the above testing. [1] The quantity of a function is not considered unique; for example, a dual and a quad Ethernet adapter with all other capabilities being the same are considered to provide the same function. [2] Notes must be in a positive tone and not a negative.
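When exercising the multipath scenarios above, path state on the host is commonly checked before and after each event such as a cable pull. The commands below are an illustrative sketch only, using standard RHEL device-mapper-multipath tooling; the device name mpatha is a placeholder.
# Show all multipath devices and the state of every path.
multipath -ll
# Query the multipath daemon for per-path status while the event is in progress.
multipathd show paths
# Confirm that I/O continues on the multipath device during path loss and recovery.
dd if=/dev/mapper/mpatha of=/dev/null bs=1M count=100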
null
https://docs.redhat.com/en/documentation/red_hat_hardware_certification/2025/html/red_hat_hardware_certification_program_policy_guide/assembly_creating-a-test-plan_hw-program-pol-hardware-certification-policies
Chapter 10. Updating your JBoss EAP server using the web console
Chapter 10. Updating your JBoss EAP server using the web console As a system administrator you can update your JBoss EAP installation using the web console. The JBoss EAP web console also allows you to perform other operations such as viewing the history of your updates, reverting JBoss EAP updates to a previous version, and managing channels. 10.1. Prerequisites You may need access to the internet. You have created an account on the Red Hat Customer Portal and are logged in. You have installed JBoss EAP using any of the installation methods. For more information see JBoss EAP installation methods . 10.2. Updating JBoss EAP online using the web console JBoss EAP has periodic releases that contain bug and security fixes; you can use the JBoss EAP web console to keep your installation up-to-date. Procedure Open the JBoss EAP web console. Navigate to the top menu and click Update Manager . Click Updates . Click on the Update icon and click Online Updates to list the updates. Click Next to prepare the server candidate. Click Next to apply the update. Click Finish to complete the update. Verification Click on the refresh icon to verify that the update was applied successfully. 10.3. Updating JBoss EAP offline using the web console JBoss EAP has periodic releases that contain bug and security fixes; you can use the JBoss EAP web console to keep your installation up-to-date using a local archive file. Note Internet access is not required to update your JBoss EAP 8.0 installation offline using the web console. Procedure Open the JBoss EAP web console. Navigate to the top menu and click Update Manager . Click Updates . Click on the Update icon and click Offline Updates to upload the archive. Choose the archive file and click Next . Click Next to prepare the server candidate. Click Next to apply the update. Click Finish to complete the update. Verification Click on the refresh icon to verify that the update was applied successfully. 10.4. Viewing JBoss EAP installation history using the web console Use the JBoss EAP web console to view the complete history of updates applied to your JBoss EAP installation. Procedure Open the JBoss EAP web console. Navigate to the top menu and click Update Manager . Click Updates . Verification In the Updates column, verify that you can see a list of all updates that have been applied on your JBoss EAP installation. 10.5. Reverting to a previous version of JBoss EAP using the web console Use the JBoss EAP web console to revert your JBoss EAP installation to a previous update version. Procedure Open the JBoss EAP web console. Click Update Manager in the top menu. Click Updates . In the Updates column, select the appropriate JBoss EAP version you want to revert to. Click Revert . Click Next to prepare the server candidate. Click Next to apply the update. Click Finish to complete the update. Verification In the Updates column, you will see that your installation has been reverted. 10.6. Managing channels using the web console Use the JBoss EAP 8.0 web console to manage channels by enabling direct addition, removal and editing of channels through its interface. 10.6.1. Adding a channel using the web console You can add or subscribe to a channel using the JBoss EAP 8.0 web console. Procedure Add a channel: Open the JBoss EAP web console. Click Update Manager in the top menu. Click Channels . Click on the + symbol. Enter the channel details and click Add . 10.6.2. Removing a channel using the web console You can remove or unsubscribe from a channel using the JBoss EAP 8.0 web console. Procedure Remove a channel: Open the JBoss EAP web console. 
Click Update Manager in the top menu. Click Channels . In the Channels column, click the channel; you will be prompted to unsubscribe. Click Yes . 10.6.3. Editing a channel using the web console You can edit a channel using the JBoss EAP 8.0 web console. Procedure Edit a channel: Open the JBoss EAP web console. Click Update Manager in the top menu. Click Channels . In the Channels column, click the channel. Click View on the desired channel. Click Edit to edit your channel. Click Save .
null
https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/8.0/html/updating_red_hat_jboss_enterprise_application_platform/updating_your_jboss_eap_server_using_the_web_console
Chapter 3. Installing Windows virtual machines
Chapter 3. Installing Windows virtual machines Installing a Windows virtual machine involves the following key steps: Create a blank virtual machine on which to install an operating system. Add a virtual disk for storage. Add a network interface to connect the virtual machine to the network. Attach the Windows guest tools CD to the virtual machine so that VirtIO-optimized device drivers can be installed during the operating system installation. Install a Windows operating system on the virtual machine. See your operating system's documentation for instructions. During the installation, install guest agents and drivers for additional virtual machine functionality. When all of these steps are complete, the new virtual machine is functional and ready to perform tasks. 3.1. Creating a virtual machine When creating a new virtual machine, you specify its settings. You can edit some of these settings later, including the chipset and BIOS type. For more information, see UEFI and the Q35 chipset in the Administration Guide . Prerequisites Note Before you can use this virtual machine, you must: Install an operating system Install a VirtIO-optimized disk and network drivers Procedure You can change the default virtual machine name length with the engine-config tool. Run the following command on the Manager machine: # engine-config --set MaxVmNameLength= integer Click Compute Virtual Machines . Click New . This opens the New Virtual Machine window. Select an Operating System from the drop-down list. Enter a Name for the virtual machine. Add storage to the virtual machine: under Instance Images , click Attach or Create to select or create a virtual disk . Click Attach and select an existing virtual disk. or Click Create and enter a Size(GB) and Alias for a new virtual disk. You can accept the default settings for all other fields, or change them if required. See Explanation of settings in the New Virtual Disk and Edit Virtual Disk windows for more details on the fields for all disk types. Connect the virtual machine to the network. Add a network interface by selecting a vNIC profile from the nic1 drop-down list at the bottom of the General tab. Specify the virtual machine's Memory Size on the System tab. In the Boot Options tab, choose the First Device that the virtual machine will use to boot. You can accept the default settings for all other fields, or change them if required. For more details on all fields in the New Virtual Machine window, see Explanation of settings in the New Virtual Machine and Edit Virtual Machine Windows . Click OK . The new virtual machine is created and displays in the list of virtual machines with a status of Down .
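As a hedged note on the engine-config step above: changes made with engine-config take effect only after the ovirt-engine service is restarted, and the current value can be checked before setting a new one. The value 64 below is an arbitrary example.
# Check the current maximum virtual machine name length.
engine-config --get MaxVmNameLength
# Set a new limit (example value), then restart the engine so the change takes effect.
engine-config --set MaxVmNameLength=64
systemctl restart ovirt-engine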
[ "engine-config --set MaxVmNameLength= integer" ]
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/virtual_machine_management_guide/chap-installing_windows_virtual_machines
Chapter 83. Direct Component
Chapter 83. Direct Component Available as of Camel version 1.0 The direct: component provides direct, synchronous invocation of any consumers when a producer sends a message exchange. This endpoint can be used to connect existing routes in the same Camel context. Tip Asynchronous The SEDA component provides asynchronous invocation of any consumers when a producer sends a message exchange. Tip Connection to other Camel contexts The VM component provides connections between Camel contexts as long as they run in the same JVM . 83.1. URI format Where someName can be any string to uniquely identify the endpoint. 83.2. Options The Direct component supports 3 options, which are listed below. Name Description Default Type block (producer) If sending a message to a direct endpoint which has no active consumer, then we can tell the producer to block and wait for the consumer to become active. true boolean timeout (producer) The timeout value to use if block is enabled. 30000 long resolveProperty Placeholders (advanced) Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true boolean The Direct endpoint is configured using URI syntax: with the following path and query parameters: 83.2.1. Path Parameters (1 parameters): Name Description Default Type name Required Name of direct endpoint String 83.2.2. Query Parameters (7 parameters): Name Description Default Type bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions that occur while the consumer is trying to pick up incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN/ERROR level and ignored. false boolean exceptionHandler (consumer) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN/ERROR level and ignored. ExceptionHandler exchangePattern (consumer) Sets the default exchange pattern when creating an exchange. ExchangePattern block (producer) If sending a message to a direct endpoint which has no active consumer, then we can tell the producer to block and wait for the consumer to become active. true boolean failIfNoConsumers (producer) Whether the producer should fail by throwing an exception, when sending to a DIRECT endpoint with no active consumers. false boolean timeout (producer) The timeout value to use if block is enabled. 30000 long synchronous (advanced) Sets whether synchronous processing should be strictly used, or Camel is allowed to use asynchronous processing (if supported). false boolean 83.3. 
Samples In the route below we use the direct component to link the two routes together: from("activemq:queue:order.in") .to("bean:orderServer?method=validate") .to("direct:processOrder"); from("direct:processOrder") .to("bean:orderService?method=process") .to("activemq:queue:order.out"); And the sample using spring DSL: <route> <from uri="activemq:queue:order.in"/> <to uri="bean:orderService?method=validate"/> <to uri="direct:processOrder"/> </route> <route> <from uri="direct:processOrder"/> <to uri="bean:orderService?method=process"/> <to uri="activemq:queue:order.out"/> </route> See also samples from the SEDA component, how they can be used together. 83.4. See Also SEDA VM
[ "direct:someName[?options]", "direct:name", "from(\"activemq:queue:order.in\") .to(\"bean:orderServer?method=validate\") .to(\"direct:processOrder\"); from(\"direct:processOrder\") .to(\"bean:orderService?method=process\") .to(\"activemq:queue:order.out\");", "<route> <from uri=\"activemq:queue:order.in\"/> <to uri=\"bean:orderService?method=validate\"/> <to uri=\"direct:processOrder\"/> </route> <route> <from uri=\"direct:processOrder\"/> <to uri=\"bean:orderService?method=process\"/> <to uri=\"activemq:queue:order.out\"/> </route>" ]
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/direct-component
E.2.14. /proc/kmsg
E.2.14. /proc/kmsg This file is used to hold messages generated by the kernel. These messages are then picked up by other programs, such as /sbin/klogd or /bin/dmesg .
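Only one process can read /proc/kmsg at a time, and reading it consumes the messages, so it is normally left to the logging daemon. As a minimal, non-destructive sketch (assuming root or equivalent privileges), the same kernel messages can be inspected through the kernel ring buffer with dmesg:

```
# /proc/kmsg is usually read only by /sbin/klogd; reading it directly blocks
# and removes messages from the queue, so prefer dmesg for ad-hoc inspection.
dmesg | tail -n 20
```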
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/s2-proc-kmsg
Chapter 2. Overview of certification process
Chapter 2. Overview of certification process Prerequisites Establish a certification relationship with Red Hat. Establish a test environment consisting of your product and the Red Hat product combination to be certified. Perform preliminary testing to ensure this combination works well. Install the redhat-certification tool. Procedure Create a certification request for a specific system or hardware component using redhat-certification. Red Hat's certification team applies the certification policies to the hardware specifications to create the official test plan. Run the tests specified in the official test plan and submit the results to Red Hat for analysis using redhat-certification. The certification team analyzes the test results, marks credit as appropriate, and communicates any required retesting. Provide Red Hat with a representative hardware sample that covers the items that are being certified. When all tests have passing results, the certification is complete and the entry is made visible to the public on the external Red Hat Hardware Certification website at Certifications. Additional resources For more information about the hardware certification process, see the Hardware Certification Program Overview section of the Hardware Certification Test Suite User Guide.
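As a sketch of the installation and test-run steps, the commands below assume a RHEL system subscribed to the appropriate certification repositories; the exact package and utility names (redhat-certification, redhat-certification-hardware, rhcert-cli) are assumptions here and should be confirmed in the Hardware Certification Test Suite User Guide for your release:

```
# Assumed package names for the certification web UI and the hardware test suite
yum install redhat-certification redhat-certification-hardware

# Assumed CLI workflow: generate the local test plan, then run the planned tests
rhcert-cli plan
rhcert-cli run
```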
null
https://docs.redhat.com/en/documentation/red_hat_hardware_certification/2025/html/red_hat_hardware_certification_program_policy_guide/proc_certification-process-overview_hw-program-pol-introduction
Chapter 49. QuotasPluginStrimzi schema reference
Chapter 49. QuotasPluginStrimzi schema reference Used in: KafkaClusterSpec The type property is a discriminator that distinguishes use of the QuotasPluginStrimzi type from QuotasPluginKafka . It must have the value strimzi for the type QuotasPluginStrimzi . Property Property type Description type string Must be strimzi . producerByteRate integer A per-broker byte-rate quota for clients producing to a broker, independent of their number. If clients produce at maximum speed, the quota is shared equally between all non-excluded producers. Otherwise, the quota is divided based on each client's production rate. consumerByteRate integer A per-broker byte-rate quota for clients consuming from a broker, independent of their number. If clients consume at maximum speed, the quota is shared equally between all non-excluded consumers. Otherwise, the quota is divided based on each client's consumption rate. minAvailableBytesPerVolume integer Stop message production if the available size (in bytes) of the storage is lower than or equal to this specified value. This condition is mutually exclusive with minAvailableRatioPerVolume . minAvailableRatioPerVolume number Stop message production if the percentage of available storage space falls below or equals the specified ratio (set as a decimal representing a percentage). This condition is mutually exclusive with minAvailableBytesPerVolume . excludedPrincipals string array List of principals that are excluded from the quota. The principals have to be prefixed with User: , for example User:my-user;User:CN=my-other-user .
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/streams_for_apache_kafka_api_reference/type-QuotasPluginStrimzi-reference
Chapter 2. Installing the MTA plugin for IntelliJ IDEA
Chapter 2. Installing the MTA plugin for IntelliJ IDEA You can install the MTA plugin in the Ultimate and the Community Edition releases of IntelliJ IDEA. Prerequisites The following are the prerequisites for the Migration Toolkit for Applications (MTA) installation: Java Development Kit (JDK) is installed. MTA supports the following JDKs: OpenJDK 11 OpenJDK 17 Oracle JDK 11 Oracle JDK 17 Eclipse Temurin™ JDK 11 Eclipse Temurin™ JDK 17 8 GB RAM macOS installation: the value of maxproc must be 2048 or greater. The latest version of mta-cli from the MTA download page Procedure In IntelliJ IDEA, click the Plugins tab on the Welcome screen. Enter Migration Toolkit for Applications in the Search field on the Marketplace tab. Select the Migration Toolkit for Applications (MTA) by Red Hat plugin and click Install . The plugin is listed on the Installed tab.
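Before installing the plugin, it can be useful to confirm the JDK and, on macOS, the maxproc prerequisite from a terminal; this is a generic check, and the reported versions and values will differ per system:

```
# Confirm that a supported JDK (11 or 17) is on the PATH
java -version

# On macOS only: confirm that kern.maxproc is 2048 or greater
sysctl kern.maxproc
```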
null
https://docs.redhat.com/en/documentation/migration_toolkit_for_applications/7.1/html/intellij_idea_plugin_guide/intellij-idea-plugin-extension_idea-plugin-guide
Chapter 2. Connecting RHEL systems directly to AD using Samba Winbind
Chapter 2. Connecting RHEL systems directly to AD using Samba Winbind To connect a RHEL system to Active Directory (AD), use: Samba Winbind to interact with the AD identity and authentication source realmd to detect available domains and configure the underlying RHEL system services. 2.1. Overview of direct integration using Samba Winbind Samba Winbind emulates a Windows client on a Linux system and communicates with AD servers. You can use the realmd service to configure Samba Winbind by: Configuring network authentication and domain membership in a standard way. Automatically discovering information about accessible domains and realms. Not requiring advanced configuration to join a domain or realm. Note that: Direct integration with Winbind in a multi-forest AD setup requires bidirectional trusts. Remote forests must trust the local forest to ensure that the idmap_ad plug-in handles remote forest users correctly. Samba's winbindd service provides an interface for the Name Service Switch (NSS) and enables domain users to authenticate to AD when logging into the local system. Using winbindd provides the benefit that you can enhance the configuration to share directories and printers without installing additional software. Additional resources Using Samba as a server realmd man page on your system winbindd man page on your system 2.2. Supported Windows platforms for direct integration You can directly integrate your RHEL system with Active Directory forests that use the following forest and domain functional levels: Forest functional level range: Windows Server 2008 - Windows Server 2016 Domain functional level range: Windows Server 2008 - Windows Server 2016 Direct integration has been tested on the following supported operating systems: Windows Server 2022 (RHEL 9.1 or later) Windows Server 2019 Windows Server 2016 Windows Server 2012 R2 Note Windows Server 2019 and Windows Server 2022 do not introduce a new functional level. The highest functional level Windows Server 2019 and Windows Server 2022 use is Windows Server 2016. 2.3. Joining a RHEL system to an AD domain Samba Winbind is an alternative to the System Security Services Daemon (SSSD) for connecting a Red Hat Enterprise Linux (RHEL) system with Active Directory (AD). You can join a RHEL system to an AD domain by using realmd to configure Samba Winbind. Procedure If your AD requires the deprecated RC4 encryption type for Kerberos authentication, enable support for these ciphers in RHEL: Install the following packages: To share directories or printers on the domain member, install the samba package: Backup the existing /etc/samba/smb.conf Samba configuration file: Join the domain. For example, to join a domain named ad.example.com : Using the command, the realm utility automatically: Creates a /etc/samba/smb.conf file for a membership in the ad.example.com domain Adds the winbind module for user and group lookups to the /etc/nsswitch.conf file Updates the Pluggable Authentication Module (PAM) configuration files in the /etc/pam.d/ directory Starts the winbind service and enables the service to start when the system boots Optional: Set an alternative ID mapping back end or customized ID mapping settings in the /etc/samba/smb.conf file. For details, see the Understanding and configuring Samba ID mapping Edit the /etc/krb5.conf file and add the following section: Verify that the winbind service is running: Important To enable Samba to query domain user and group information, the winbind service must be running before you start smb . 
If you installed the samba package to share directories and printers, enable and start the smb service: Verification Display an AD user's details, such as the AD administrator account in the AD domain: Query the members of the domain users group in the AD domain: Optional: Verify that you can use domain users and groups when you set permissions on files and directories. For example, to set the owner of the /srv/samba/example.txt file to AD\administrator and the group to AD\Domain Users : Verify that Kerberos authentication works as expected: On the AD domain member, obtain a ticket for the [email protected] principal: Display the cached Kerberos ticket: Display the available domains: Additional resources If you do not want to use the deprecated RC4 ciphers, you can enable the AES encryption type in AD. See Enabling the AES encryption type in Active Directory using a GPO realm(8) man page on your system 2.4. realm commands The realmd system has two major task areas: Managing system enrollment in a domain. Controlling which domain users are allowed to access local system resources. In realmd use the command line tool realm to run commands. Most realm commands require the user to specify the action that the utility should perform, and the entity, such as a domain or user account, for which to perform the action. Table 2.1. realmd commands Command Description Realm Commands discover Run a discovery scan for domains on the network. join Add the system to the specified domain. leave Remove the system from the specified domain. list List all configured domains for the system or all discovered and configured domains. Login Commands permit Enable access for specific users or for all users within a configured domain to access the local system. deny Restrict access for specific users or for all users within a configured domain to access the local system. Additional resources realm(8) man page on your system
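A few additional checks, not shown above, are often useful after the join; the AD domain name and the administrator account below are the placeholders used in the examples in this chapter:

```
# Check that winbindd can reach a domain controller
wbinfo --ping-dc

# List domain users and groups through winbindd
wbinfo -u
wbinfo -g

# Resolve a domain user through NSS (the same lookup path getent uses)
id 'AD\administrator'
```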
[ "update-crypto-policies --set DEFAULT:AD-SUPPORT", "dnf install realmd oddjob-mkhomedir oddjob samba-winbind-clients samba-winbind samba-common-tools samba-winbind-krb5-locator krb5-workstation", "dnf install samba", "mv /etc/samba/smb.conf /etc/samba/smb.conf.bak", "realm join --membership-software=samba --client-software=winbind ad.example.com", "[plugins] localauth = { module = winbind:/usr/lib64/samba/krb5/winbind_krb5_localauth.so enable_only = winbind }", "systemctl status winbind Active: active (running) since Tue 2018-11-06 19:10:40 CET; 15s ago", "systemctl enable --now smb", "getent passwd \"AD\\administrator\" AD\\administrator:*:10000:10000::/home/administrator@AD:/bin/bash", "getent group \"AD\\Domain Users\" AD\\domain users:x:10000:user1,user2", "chown \"AD\\administrator\":\"AD\\Domain Users\" /srv/samba/example.txt", "kinit [email protected]", "klist Ticket cache: KCM:0 Default principal: [email protected] Valid starting Expires Service principal 01.11.2018 10:00:00 01.11.2018 20:00:00 krbtgt/[email protected] renew until 08.11.2018 05:00:00", "wbinfo --all-domains BUILTIN SAMBA-SERVER AD" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/integrating_rhel_systems_directly_with_windows_active_directory/connecting-rhel-systems-directly-to-ad-using-samba-winbind_integrating-rhel-systems-directly-with-active-directory
3.3. Configuring IP Networking with nmcli
3.3. Configuring IP Networking with nmcli The nmcli (NetworkManager Command Line Interface) command-line utility is used for controlling NetworkManager and reporting network status. It can be utilized as a replacement for nm-applet or other graphical clients. See Section 2.5, "NetworkManager Tools" . nmcli is used to create, display, edit, delete, activate, and deactivate network connections, as well as control and display network device status. The nmcli utility can be used by both users and scripts for controlling NetworkManager : For servers, headless machines, and terminals, nmcli can be used to control NetworkManager directly, without GUI, including creating, editing, starting and stopping network connections and viewing network status. For scripts, nmcli supports a terse output format which is better suited for script processing. It is a way to integrate network configuration instead of managing network connections manually. The basic format of a nmcli command is as follows: nmcli [OPTIONS] OBJECT { COMMAND | help } where OBJECT can be one of the following options: general , networking , radio , connection , device , agent , and monitor . You can use any prefix of these options in your commands. For example, nmcli con help , nmcli c help , nmcli connection help generate the same output. Some of useful optional OPTIONS to get started are: -t, terse This mode can be used for computer script processing as you can see a terse output displaying only the values. Example 3.1. Viewing a terse output -f, field This option specifies what fields can be displayed in output. For example, NAME,UUID,TYPE,AUTOCONNECT,ACTIVE,DEVICE,STATE. You can use one or more fields. If you want to use more, do not use space after comma to separate the fields. Example 3.2. Specifying Fields in the output or even better for scripting: -p, pretty This option causes nmcli to produce human-readable output. For example, values are aligned and headers are printed. Example 3.3. Viewing an output in pretty mode -h, help Prints help information. The nmcli tool has some built-in context-sensitive help: nmcli help This command lists the available options and object names to be used in subsequent commands. nmcli object help This command displays the list of available actions related to a specified object. For example, 3.3.1. Brief Selection of nmcli Examples Example 3.4. Checking the overall status of NetworkManager In terse mode: Example 3.5. Viewing NetworkManager logging status Example 3.6. Viewing all connections Example 3.7. Viewing only currently active connections Example 3.8. Viewing only devices recognized by NetworkManager and their state You can also use the following abbreviations of the nmcli commands: Table 3.1. Abbreviations of some nmcli commands nmcli command abbreviation nmcli general status nmcli g nmcli general logging nmcli g log nmcli connection show nmcli con show nmcli connection show --active nmcli con show -a nmcli device status nmcli dev For more examples, see the nmcli-examples (5) man page. 3.3.2. Starting and Stopping a Network Interface Using nmcli The nmcli tool can be used to start and stop any network interface, including controllers. For example: Note The nmcli connection down command, deactivates a connection from a device without preventing the device from further auto-activation. The nmcli device disconnect command, disconnects a device and prevent the device from automatically activating further connections without manual intervention. 3.3.3. 
Understanding the nmcli Options Following are some of the important nmcli property options. See the comprehensive list in the nmcli (1) man page : connection.type A connection type. Allowed values are: adsl, bond, bond-slave, bridge, bridge-slave, bluetooth, cdma, ethernet, gsm, infiniband, olpc-mesh, team, team-slave, vlan, wifi, wimax. Each connection type has type-specific command options. You can see the TYPE_SPECIFIC_OPTIONS list in the nmcli (1) man page. For example: A gsm connection requires the access point name specified in an apn . nmcli c add connection.type gsm apn access_point_name A wifi device requires the service set identifier specified in a ssid . nmcli c add connection.type wifi ssid My identifier connection.interface-name A device name relevant for the connection. nmcli con add connection.interface-name enp1s0 type ethernet connection.id A name used for the connection profile. If you do not specify a connection name, one will be generated as follows: connection.type -connection.interface-name The connection.id is the name of a connection profile and should not be confused with the interface name which denotes a device ( wlp61s0 , ens3 , em1 ). However, users can name the connections after interfaces, but they are not the same thing. There can be multiple connection profiles available for a device. This is particularly useful for mobile devices or when switching a network cable back and forth between different devices. Rather than edit the configuration, create different profiles and apply them to the interface as needed. The id option also refers to the connection profile name. The most important options for nmcli commands such as show , up , down are: id An identification string assigned by the user to a connection profile. Id can be used in nmcli connection commands to identify a connection. The NAME field in the command output always denotes the connection id. It refers to the same connection profile name that the con-name does. uuid A unique identification string assigned by the system to a connection profile. The uuid can be used in nmcli connection commands to identify a connection. 3.3.4. Using the nmcli Interactive Connection Editor The nmcli tool has an interactive connection editor. To use it: You will be prompted to enter a valid connection type from the list displayed. After entering a connection type you will be placed at the nmcli prompt. If you are familiar with the connection types you can add a valid connection type option to the nmcli con edit command and be taken straight to the nmcli prompt. The format is as follows for editing an existing connection profile: nmcli con edit [id | uuid | path] ID For editing a new connection profile: nmcli con edit [type new-connection-type ] [con-name new-connection-name ] Type help at the nmcli prompt to see a list of valid commands. Use the describe command to get a description of settings and their properties: describe setting.property For example: 3.3.5. Creating and Modifying a Connection Profile with nmcli A connection profile contains the connection property information needed to connect to a data source. To create a new profile for NetworkManager using nmcli : The nmcli c add accepts two different types of parameters: Property names the names which NetworkManager uses to describe the connection internally. 
The most important are: connection.type nmcli c add connection.type bond connection.interface-name nmcli c add connection.interface-name enp1s0 connection.id nmcli c add connection.id "My Connection" See the nm-settings(5) man page for more information on properties and their settings. Aliases names the human-readable names which are translated to properties internally. The most common are: type (the connection.type property) nmcli c add type bond ifname (the connection.interface-name property) nmcli c add ifname enp1s0 con-name (the connection.id property) nmcli c add con-name "My Connection" In versions of nmcli , to create a connection required using the aliases . For example, ifname enp1s0 and con-name My Connection. A command in the following format could be used: nmcli c add type ethernet ifname enp1s0 con-name "My Connection" In more recent versions, both the property names and the aliases can be used interchangeably. The following examples are all valid and equivalent: nmcli c add type ethernet ifname enp1s0 con-name "My Connection" ethernet.mtu 1600 nmcli c add connection.type ethernet ifname enp1s0 con-name "My Connection" ethernet.mtu 1600 nmcli c add connection.type ethernet connection.interface-name enps1s0 connection.id "My Connection" ethernet.mtu 1600 The arguments differ according to the connection types. Only the type argument is mandatory for all connection types and ifname is mandatory for all types except bond , team , bridge and vlan . type type_name connection type. For example: nmcli c add type bond ifname interface_name interface to bind the connection to. For example: nmcli c add ifname interface_name type ethernet To modify one or more properties of a connection profile, use the following command: For example, to change the connection.id from My Connection to My favorite connection and the connection.interface-name to enp1s0 , issue the command as follows: nmcli c modify "My Connection" connection.id "My favorite connection" connection.interface-name enp1s0 Note It is preferable to use the property names . The aliases are used only for compatibility reasons. In addition, to set the ethernet MTU to 1600, modify the size as follows: nmcli c modify "My favorite connection" ethernet.mtu 1600 To apply changes after a modified connection using nmcli, activate again the connection by entering this command: For example: 3.3.6. Connecting to a Network Using nmcli To list the currently available network connections: Note that the NAME field in the output always denotes the connection ID (name). It is not the interface name even though it might look the same. In the second connection shown above, ens3 in the NAME field is the connection ID given by the user to the profile applied to the interface ens3 . In the last connection shown, the user has assigned the connection ID MyWiFi to the interface wlp61s0 . Adding an Ethernet connection means creating a configuration profile which is then assigned to a device. Before creating a new profile, review the available devices as follows: 3.3.7. 
Adding and Configuring a Dynamic Ethernet Connection with nmcli Adding a Dynamic Ethernet Connection To add an Ethernet configuration profile with dynamic IP configuration, allowing DHCP to assign the network configuration: nmcli connection add type ethernet con-name connection-name ifname interface-name For example, to create a dynamic connection profile named my-office : To open the Ethernet connection: Review the status of the devices and connections: Configuring a Dynamic Ethernet Connection To change the host name sent by a host to a DHCP server, modify the dhcp-hostname property: To change the IPv4 client ID sent by a host to a DHCP server, modify the dhcp-client-id property: There is no dhcp-client-id property for IPv6 , dhclient creates an identifier for IPv6 . See the dhclient(8) man page for details. To ignore the DNS servers sent to a host by a DHCP server, modify the ignore-auto-dns property: See the nm-settings(5) man page for more information on properties and their settings. Example 3.9. Configuring a Dynamic Ethernet Connection Using the Interactive Editor To configure a dynamic Ethernet connection using the interactive editor: The default action is to save the connection profile as persistent. If required, the profile can be held in memory only, until the restart, by means of the save temporary command. 3.3.8. Adding and Configuring a Static Ethernet Connection with nmcli Adding a Static Ethernet Connection To add an Ethernet connection with static IPv4 configuration: nmcli connection add type ethernet con-name connection-name ifname interface-name ip4 address gw4 address IPv6 address and gateway information can be added using the ip6 and gw6 options. For example, to create a static Ethernet connection with only IPv4 address and gateway: Optionally, at the same time specify IPv6 address and gateway for the device: To set two IPv4 DNS server addresses: Note that this will replace any previously set DNS servers. To set two IPv6 DNS server addresses: Note that this will replace any previously set DNS servers. Alternatively, to add additional DNS servers to any previously set, use the + prefix: To open the new Ethernet connection: Review the status of the devices and connections: To view detailed information about the newly configured connection, issue a command as follows: The use of the -p, --pretty option adds a title banner and section breaks to the output. Example 3.10. Configuring a Static Ethernet Connection Using the Interactive Editor To configure a static Ethernet connection using the interactive editor: The default action is to save the connection profile as persistent. If required, the profile can be held in memory only, until the restart, by means of the save temporary command. NetworkManager will set its internal parameter connection.autoconnect to yes . NetworkManager will also write out settings to /etc/sysconfig/network-scripts/ifcfg-my-office where the corresponding BOOTPROTO will be set to none and ONBOOT to yes . Note that manual changes to the ifcfg file will not be noticed by NetworkManager until the interface is brought up. See Section 2.7, "Using NetworkManager with sysconfig files" , Section 3.5, "Configuring IP Networking with ifcfg Files" for more information on using configuration files. 3.3.9. 
Locking a Profile to a Specific Device Using nmcli To lock a profile to a specific interface device: nmcli connection add type ethernet con-name connection-name ifname interface-name To make a profile usable for all compatible Ethernet interfaces: nmcli connection add type ethernet con-name connection-name ifname "*" Note that you have to use the ifname argument even if you do not want to set a specific interface. Use the wildcard character * to specify that the profile can be used with any compatible device. To lock a profile to a specific MAC address: nmcli connection add type ethernet con-name " connection-name " ifname "*" mac 00:00:5E:00:53:00 3.3.10. Adding a Wi-Fi Connection with nmcli To view the available Wi-Fi access points: To create a Wi-Fi connection profile with static IP configuration, but allowing automatic DNS address assignment: To set a WPA2 password, for example " caffeine " : See the Red Hat Enterprise Linux 7 Security Guide for information on password security. To change Wi-Fi state: Changing a Specific Property Using nmcli To check a specific property, for example mtu : To change the property of a setting: To verify the change: Note that NetworkManager refers to parameters such as 802-3-ethernet and 802-11-wireless as the setting, and mtu as a property of the setting. See the nm-settings(5) man page for more information on properties and their settings. 3.3.11. Configuring NetworkManager to Ignore Certain Devices By default, NetworkManager manages all devices except the lo (loopback) device. However, you can set certain devices as unmanaged to configure that NetworkManager ignores these devices. With this setting, you can manually manage these devices, for example, using a script. 3.3.11.1. Permanently Configuring a Device as Unmanaged in NetworkManager You can configure devices as unmanaged based on several criteria, such as the interface name, MAC address, or device type. This procedure describes how to permanently set the enp1s0 interface as unmanaged in NetworkManager. To temporarily configure network devices as unmanaged , see Section 3.3.11.2, "Temporarily Configuring a Device as Unmanaged in NetworkManager" . Procedure Optional: Display the list of devices to identify the device you want to set as unmanaged : Create the /etc/NetworkManager/conf.d/99-unmanaged-devices.conf file with the following content: To set multiple devices as unmanaged, separate the entries in the unmanaged-devices parameter with semicolon: Reload the NetworkManager service: Verification Steps Display the list of devices: The unmanaged state to the enp1s0 device indicates that NetworkManager does not manage this device. Additional Resources For a list of criteria you can use to configure devices as unmanaged and the corresponding syntax, see the Device List Format section in the NetworkManager.conf (5) man page. 3.3.11.2. Temporarily Configuring a Device as Unmanaged in NetworkManager You can configure devices as unmanaged based on several criteria, such as the interface name, MAC address, or device type. This procedure describes how to temporarily set the enp1s0 interface as unmanaged in NetworkManager. Use this method, for example, for testing purposes. To permanently configure network devices as unmanaged , see Section 3.3.11.1, "Permanently Configuring a Device as Unmanaged in NetworkManager" . 
Procedure Optional: Display the list of devices to identify the device you want to set as unmanaged : Set the enp1s0 device to the unmanaged state: Verification Steps Display the list of devices: The unmanaged state to the enp1s0 device indicates that NetworkManager does not manage this device. Additional Resources For a list of criteria you can use to configure devices as unmanaged and the corresponding syntax, see the Device List Format section in the NetworkManager.conf (5) man page.
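As a small scripting illustration of the terse (-t) and field (-f) options described above (device names are examples only), the following prints the name of every device that NetworkManager reports as connected:

```
# Terse output is colon-separated, which makes it easy to parse in scripts
nmcli -t -f DEVICE,STATE device | awk -F: '$2 == "connected" {print $1}'
```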
[ "nmcli -t device ens3:ethernet:connected:Profile 1 lo:loopback:unmanaged:", "~]USD nmcli -f DEVICE,TYPE device DEVICE TYPE ens3 ethernet lo loopback", "~]USD nmcli -t -f DEVICE,TYPE device ens3:ethernet lo:loopback", "nmcli -p device ===================== Status of devices ===================== DEVICE TYPE STATE CONNECTION -------------------------------------------------------------- ens3 ethernet connected Profile 1 lo loopback unmanaged --", "nmcli c help", "~]USD nmcli general status STATE CONNECTIVITY WIFI-HW WIFI WWAN-HW WWAN connected full enabled enabled enabled enabled", "~]USD nmcli -t -f STATE general connected", "~]USD nmcli general logging LEVEL DOMAINS INFO PLATFORM,RFKILL,ETHER,WIFI,BT,MB,DHCP4,DHCP6,PPP,WIFI_SCAN,IP4,IP6,A UTOIP4,DNS,VPN,SHARING,SUPPLICANT,AGENTS,SETTINGS,SUSPEND,CORE,DEVICE,OLPC, WIMAX,INFINIBAND,FIREWALL,ADSL,BOND,VLAN,BRIDGE,DBUS_PROPS,TEAM,CONCHECK,DC B,DISPATCH", "~]USD nmcli connection show NAME UUID TYPE DEVICE Profile 1 db1060e9-c164-476f-b2b5-caec62dc1b05 ethernet ens3 ens3 aaf6eb56-73e5-4746-9037-eed42caa8a65 ethernet --", "~]USD nmcli connection show --active NAME UUID TYPE DEVICE Profile 1 db1060e9-c164-476f-b2b5-caec62dc1b05 ethernet ens3", "~]USD nmcli device status DEVICE TYPE STATE CONNECTION ens3 ethernet connected Profile 1 lo loopback unmanaged --", "nmcli con up id bond0 nmcli con up id port0 nmcli dev disconnect bond0 nmcli dev disconnect ens3", "~]USD nmcli con edit", "nmcli> describe team.config", "nmcli c add {ARGUMENTS}", "nmcli c modify", "nmcli con up con-name", "nmcli con up My-favorite-connection Connection successfully activated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/16)", "~]USD nmcli con show NAME UUID TYPE DEVICE Auto Ethernet 9b7f2511-5432-40ae-b091-af2457dfd988 802-3-ethernet -- ens3 fb157a65-ad32-47ed-858c-102a48e064a2 802-3-ethernet ens3 MyWiFi 91451385-4eb8-4080-8b82-720aab8328dd 802-11-wireless wlp61s0", "~]USD nmcli device status DEVICE TYPE STATE CONNECTION ens3 ethernet disconnected -- ens9 ethernet disconnected -- lo loopback unmanaged --", "~]USD nmcli con add type ethernet con-name my-office ifname ens3 Connection 'my-office' (fb157a65-ad32-47ed-858c-102a48e064a2) successfully added.", "~]USD nmcli con up my-office Connection successfully activated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/5)", "~]USD nmcli device status DEVICE TYPE STATE CONNECTION ens3 ethernet connected my-office ens9 ethernet disconnected -- lo loopback unmanaged --", "~]USD nmcli con modify my-office my-office ipv4.dhcp-hostname host-name ipv6.dhcp-hostname host-name", "~]USD nmcli con modify my-office my-office ipv4.dhcp-client-id client-ID-string", "~]USD nmcli con modify my-office my-office ipv4.ignore-auto-dns yes ipv6.ignore-auto-dns yes", "~]USD nmcli con edit type ethernet con-name ens3 ===| nmcli interactive connection editor |=== Adding a new '802-3-ethernet' connection Type 'help' or '?' for available commands. Type 'describe [<setting>.<prop>]' for detailed property description. You may edit the following settings: connection, 802-3-ethernet (ethernet), 802-1x, ipv4, ipv6, dcb nmcli> describe ipv4.method === [method] === [NM property description] IPv4 configuration method. If 'auto' is specified then the appropriate automatic method (DHCP, PPP, etc) is used for the interface and most other properties can be left unset. If 'link-local' is specified, then a link-local address in the 169.254/16 range will be assigned to the interface. 
If 'manual' is specified, static IP addressing is used and at least one IP address must be given in the 'addresses' property. If 'shared' is specified (indicating that this connection will provide network access to other computers) then the interface is assigned an address in the 10.42.x.1/24 range and a DHCP and forwarding DNS server are started, and the interface is NAT-ed to the current default network connection. 'disabled' means IPv4 will not be used on this connection. This property must be set. nmcli> set ipv4.method auto nmcli> save Saving the connection with 'autoconnect=yes'. That might result in an immediate activation of the connection. Do you still want to save? [yes] yes Connection 'ens3' (090b61f7-540f-4dd6-bf1f-a905831fc287) successfully saved. nmcli> quit ~]USD", "~]USD nmcli con add type ethernet con-name test-lab ifname ens9 ip4 10.10.10.10/24 gw4 10.10.10.254", "~]USD nmcli con add type ethernet con-name test-lab ifname ens9 ip4 10.10.10.10/24 gw4 10.10.10.254 ip6 abbe::cafe gw6 2001:db8::1 Connection 'test-lab' (05abfd5e-324e-4461-844e-8501ba704773) successfully added.", "~]USD nmcli con mod test-lab ipv4.dns \"8.8.8.8 8.8.4.4\"", "~]USD nmcli con mod test-lab ipv6.dns \"2001:4860:4860::8888 2001:4860:4860::8844\"", "~]USD nmcli con mod test-lab +ipv4.dns \"8.8.8.8 8.8.4.4\"", "~]USD nmcli con mod test-lab +ipv6.dns \"2001:4860:4860::8888 2001:4860:4860::8844\"", "~]USD nmcli con up test-lab ifname ens9 Connection successfully activated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/6)", "~]USD nmcli device status DEVICE TYPE STATE CONNECTION ens3 ethernet connected my-office ens9 ethernet connected test-lab lo loopback unmanaged --", "~]USD nmcli -p con show test-lab =============================================================================== Connection profile details (test-lab) =============================================================================== connection.id: test-lab connection.uuid: 05abfd5e-324e-4461-844e-8501ba704773 connection.interface-name: ens9 connection.type: 802-3-ethernet connection.autoconnect: yes connection.timestamp: 1410428968 connection.read-only: no connection.permissions: connection.zone: -- connection.master: -- connection.slave-type: -- connection.secondaries: connection.gateway-ping-timeout: 0 [output truncated]", "~]USD nmcli con edit type ethernet con-name ens3 ===| nmcli interactive connection editor |=== Adding a new '802-3-ethernet' connection Type 'help' or '?' for available commands. Type 'describe [>setting<.>prop<]' for detailed property description. You may edit the following settings: connection, 802-3-ethernet (ethernet), 802-1x, ipv4, ipv6, dcb nmcli> set ipv4.addresses 192.168.122.88/24 Do you also want to set 'ipv4.method' to 'manual'? [yes]: yes nmcli> nmcli> save temporary Saving the connection with 'autoconnect=yes'. That might result in an immediate activation of the connection. Do you still want to save? [yes] no nmcli> save Saving the connection with 'autoconnect=yes'. That might result in an immediate activation of the connection. Do you still want to save? [yes] yes Connection 'ens3' (704a5666-8cbd-4d89-b5f9-fa65a3dbc916) successfully saved. 
nmcli> quit ~]USD", "~]USD nmcli dev wifi list SSID MODE CHAN RATE SIGNAL BARS SECURITY FedoraTest Infra 11 54 MB/s 98 ▂▄▆█ WPA1 Red Hat Guest Infra 6 54 MB/s 97 ▂▄▆█ WPA2 Red Hat Infra 6 54 MB/s 77 ▂▄▆_ WPA2 802.1X * Red Hat Infra 40 54 MB/s 66 ▂▄▆_ WPA2 802.1X VoIP Infra 1 54 MB/s 32 ▂▄__ WEP MyCafe Infra 11 54 MB/s 39 ▂▄__ WPA2", "~]USD nmcli con add con-name MyCafe ifname wlp61s0 type wifi ssid MyCafe ip4 192.168.100.101/24 gw4 192.168.100.1", "~]USD nmcli con modify MyCafe wifi-sec.key-mgmt wpa-psk ~]USD nmcli con modify MyCafe wifi-sec.psk caffeine", "~]USD nmcli radio wifi [ on | off ]", "~]USD nmcli connection show id ' MyCafe ' | grep mtu 802-11-wireless.mtu: auto", "~]USD nmcli connection modify id ' MyCafe ' 802-11-wireless.mtu 1350", "~]USD nmcli connection show id ' MyCafe ' | grep mtu 802-11-wireless.mtu: 1350", "nmcli device status DEVICE TYPE STATE CONNECTION enp1s0 ethernet disconnected --", "[keyfile] unmanaged-devices=interface-name: enp1s0", "[keyfile] unmanaged-devices=interface-name: interface_1 ;interface-name: interface_2 ;", "systemctl reload NetworkManager", "nmcli device status DEVICE TYPE STATE CONNECTION enp1s0 ethernet unmanaged --", "nmcli device status DEVICE TYPE STATE CONNECTION enp1s0 ethernet disconnected --", "nmcli device set enp1s0 managed no", "nmcli device status DEVICE TYPE STATE CONNECTION enp1s0 ethernet unmanaged --" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/networking_guide/sec-configuring_ip_networking_with_nmcli
18.4. Installing in Text Mode
18.4. Installing in Text Mode Text mode installation offers an interactive, non-graphical interface for installing Red Hat Enterprise Linux. This can be useful on systems with no graphical capabilities; however, you should always consider the available alternatives (an automated Kickstart installation or using the graphical user interface over VNC) before starting a text-based installation. Text mode is limited in the amount of choices you can make during the installation. Figure 18.1. Text Mode Installation Installation in text mode follows a pattern similar to the graphical installation: There is no single fixed progression; you can configure many settings in any order you want using the main status screen. Screens which have already been configured, either automatically or by you, are marked as [x] , and screens which require your attention before the installation can begin are marked with [!] . Available commands are displayed below the list of available options. Note When related background tasks are being run, certain menu items can be temporarily unavailable or display the Processing... label. To refresh to the current status of text menu items, use the r option at the text mode prompt. At the bottom of the screen in text mode, a green bar is displayed showing five menu options. These options represent different screens in the tmux terminal multiplexer; by default you start in screen 1, and you can use keyboard shortcuts to switch to other screens which contain logs and an interactive command prompt. For information about available screens and shortcuts to switch to them, see Section 18.2.1, "Accessing Consoles" . Limits of interactive text mode installation include: The installer will always use the English language and the US English keyboard layout. You can configure your language and keyboard settings, but these settings will only apply to the installed system, not to the installation. You cannot configure any advanced storage methods (LVM, software RAID, FCoE, zFCP and iSCSI). It is not possible to configure custom partitioning; you must use one of the automatic partitioning settings. You also cannot configure where the boot loader will be installed. You cannot select any package add-ons to be installed; they must be added after the installation finishes using the Yum package manager. To start a text mode installation, boot the installation with the inst.text boot option used in the parameter file ( generic.prm ). See Chapter 21, Parameter and Configuration Files on IBM Z for information about the parameter file.
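For illustration, a minimal generic.prm that forces text mode could look like the following; the repository URL is a placeholder and the remaining parameters are only an assumed baseline, so see Chapter 21, Parameter and Configuration Files on IBM Z for the parameters your system actually requires:

```
ro ramdisk_size=40000 cio_ignore=all,!condev
inst.repo=http://install.example.com/rhel7/dvd/
inst.text
```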
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/installation_guide/sect-installation-text-mode-s390
Chapter 8. Using RBAC to define and apply permissions
Chapter 8. Using RBAC to define and apply permissions 8.1. RBAC overview Role-based access control (RBAC) objects determine whether a user is allowed to perform a given action within a project. Cluster administrators can use the cluster roles and bindings to control who has various access levels to the OpenShift Container Platform platform itself and all projects. Developers can use local roles and bindings to control who has access to their projects. Note that authorization is a separate step from authentication, which is more about determining the identity of who is taking the action. Authorization is managed using: Authorization object Description Rules Sets of permitted verbs on a set of objects. For example, whether a user or service account can create pods. Roles Collections of rules. You can associate, or bind, users and groups to multiple roles. Bindings Associations between users and/or groups with a role. There are two levels of RBAC roles and bindings that control authorization: RBAC level Description Cluster RBAC Roles and bindings that are applicable across all projects. Cluster roles exist cluster-wide, and cluster role bindings can reference only cluster roles. Local RBAC Roles and bindings that are scoped to a given project. While local roles exist only in a single project, local role bindings can reference both cluster and local roles. A cluster role binding is a binding that exists at the cluster level. A role binding exists at the project level. The cluster role view must be bound to a user using a local role binding for that user to view the project. Create local roles only if a cluster role does not provide the set of permissions needed for a particular situation. This two-level hierarchy allows reuse across multiple projects through the cluster roles while allowing customization inside of individual projects through local roles. During evaluation, both the cluster role bindings and the local role bindings are used. For example: Cluster-wide "allow" rules are checked. Locally-bound "allow" rules are checked. Deny by default. 8.1.1. Default cluster roles OpenShift Container Platform includes a set of default cluster roles that you can bind to users and groups cluster-wide or locally. Important It is not recommended to manually modify the default cluster roles. Modifications to these system roles can prevent a cluster from functioning properly. Default cluster role Description admin A project manager. If used in a local binding, an admin has rights to view any resource in the project and modify any resource in the project except for quota. basic-user A user that can get basic information about projects and users. cluster-admin A super-user that can perform any action in any project. When bound to a user with a local binding, they have full control over quota and every action on every resource in the project. cluster-status A user that can get basic cluster status information. cluster-reader A user that can get or view most of the objects but cannot modify them. edit A user that can modify most objects in a project but does not have the power to view or modify roles or bindings. self-provisioner A user that can create their own projects. view A user who cannot make any modifications, but can see most objects in a project. They cannot view or modify roles or bindings. Be mindful of the difference between local and cluster bindings. 
For example, if you bind the cluster-admin role to a user by using a local role binding, it might appear that this user has the privileges of a cluster administrator. This is not the case. Binding the cluster-admin to a user in a project grants super administrator privileges for only that project to the user. That user has the permissions of the cluster role admin , plus a few additional permissions like the ability to edit rate limits, for that project. This binding can be confusing via the web console UI, which does not list cluster role bindings that are bound to true cluster administrators. However, it does list local role bindings that you can use to locally bind cluster-admin . The relationships between cluster roles, local roles, cluster role bindings, local role bindings, users, groups and service accounts are illustrated below. Warning The get pods/exec , get pods/* , and get * rules grant execution privileges when they are applied to a role. Apply the principle of least privilege and assign only the minimal RBAC rights required for users and agents. For more information, see RBAC rules allow execution privileges . 8.1.2. Evaluating authorization OpenShift Container Platform evaluates authorization by using: Identity The user name and list of groups that the user belongs to. Action The action you perform. In most cases, this consists of: Project : The project you access. A project is a Kubernetes namespace with additional annotations that allows a community of users to organize and manage their content in isolation from other communities. Verb : The action itself: get , list , create , update , delete , deletecollection , or watch . Resource name : The API endpoint that you access. Bindings The full list of bindings, the associations between users or groups with a role. OpenShift Container Platform evaluates authorization by using the following steps: The identity and the project-scoped action is used to find all bindings that apply to the user or their groups. Bindings are used to locate all the roles that apply. Roles are used to find all the rules that apply. The action is checked against each rule to find a match. If no matching rule is found, the action is then denied by default. Tip Remember that users and groups can be associated with, or bound to, multiple roles at the same time. Project administrators can use the CLI to view local roles and bindings, including a matrix of the verbs and resources each are associated with. Important The cluster role bound to the project administrator is limited in a project through a local binding. It is not bound cluster-wide like the cluster roles granted to the cluster-admin or system:admin . Cluster roles are roles defined at the cluster level but can be bound either at the cluster level or at the project level. 8.1.2.1. Cluster role aggregation The default admin, edit, view, and cluster-reader cluster roles support cluster role aggregation , where the cluster rules for each role are dynamically updated as new rules are created. This feature is relevant only if you extend the Kubernetes API by creating custom resources. 8.2. Projects and namespaces A Kubernetes namespace provides a mechanism to scope resources in a cluster. The Kubernetes documentation has more information on namespaces. Namespaces provide a unique scope for: Named resources to avoid basic naming collisions. Delegated management authority to trusted users. The ability to limit community resource consumption. 
Most objects in the system are scoped by namespace, but some are excepted and have no namespace, including nodes and users. A project is a Kubernetes namespace with additional annotations and is the central vehicle by which access to resources for regular users is managed. A project allows a community of users to organize and manage their content in isolation from other communities. Users must be given access to projects by administrators, or if allowed to create projects, automatically have access to their own projects. Projects can have a separate name , displayName , and description . The mandatory name is a unique identifier for the project and is most visible when using the CLI tools or API. The maximum name length is 63 characters. The optional displayName is how the project is displayed in the web console (defaults to name ). The optional description can be a more detailed description of the project and is also visible in the web console. Each project scopes its own set of: Object Description Objects Pods, services, replication controllers, etc. Policies Rules for which users can or cannot perform actions on objects. Constraints Quotas for each kind of object that can be limited. Service accounts Service accounts act automatically with designated access to objects in the project. Cluster administrators can create projects and delegate administrative rights for the project to any member of the user community. Cluster administrators can also allow developers to create their own projects. Developers and administrators can interact with projects by using the CLI or the web console. 8.3. Default projects OpenShift Container Platform comes with a number of default projects, and projects starting with openshift- are the most essential to users. These projects host master components that run as pods and other infrastructure components. The pods created in these namespaces that have a critical pod annotation are considered critical, and the have guaranteed admission by kubelet. Pods created for master components in these namespaces are already marked as critical. Note You cannot assign an SCC to pods created in one of the default namespaces: default , kube-system , kube-public , openshift-node , openshift-infra , and openshift . You cannot use these namespaces for running pods or services. 8.4. Viewing cluster roles and bindings You can use the oc CLI to view cluster roles and bindings by using the oc describe command. Prerequisites Install the oc CLI. Obtain permission to view the cluster roles and bindings. Users with the cluster-admin default cluster role bound cluster-wide can perform any action on any resource, including viewing cluster roles and bindings. 
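One way to confirm that the prerequisites above are met is to ask the API server directly with the oc auth can-i subcommand; this is a sketch, and joe-project is simply the example project used later in this chapter:

```
# Can the current user list cluster roles and cluster role bindings?
oc auth can-i list clusterroles.rbac.authorization.k8s.io
oc auth can-i list clusterrolebindings.rbac.authorization.k8s.io

# Can the current user list role bindings in a specific project?
oc auth can-i list rolebindings.rbac.authorization.k8s.io -n joe-project
```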
Procedure To view the cluster roles and their associated rule sets: USD oc describe clusterrole.rbac Example output Name: admin Labels: kubernetes.io/bootstrapping=rbac-defaults Annotations: rbac.authorization.kubernetes.io/autoupdate: true PolicyRule: Resources Non-Resource URLs Resource Names Verbs --------- ----------------- -------------- ----- .packages.apps.redhat.com [] [] [* create update patch delete get list watch] imagestreams [] [] [create delete deletecollection get list patch update watch create get list watch] imagestreams.image.openshift.io [] [] [create delete deletecollection get list patch update watch create get list watch] secrets [] [] [create delete deletecollection get list patch update watch get list watch create delete deletecollection patch update] buildconfigs/webhooks [] [] [create delete deletecollection get list patch update watch get list watch] buildconfigs [] [] [create delete deletecollection get list patch update watch get list watch] buildlogs [] [] [create delete deletecollection get list patch update watch get list watch] deploymentconfigs/scale [] [] [create delete deletecollection get list patch update watch get list watch] deploymentconfigs [] [] [create delete deletecollection get list patch update watch get list watch] imagestreamimages [] [] [create delete deletecollection get list patch update watch get list watch] imagestreammappings [] [] [create delete deletecollection get list patch update watch get list watch] imagestreamtags [] [] [create delete deletecollection get list patch update watch get list watch] processedtemplates [] [] [create delete deletecollection get list patch update watch get list watch] routes [] [] [create delete deletecollection get list patch update watch get list watch] templateconfigs [] [] [create delete deletecollection get list patch update watch get list watch] templateinstances [] [] [create delete deletecollection get list patch update watch get list watch] templates [] [] [create delete deletecollection get list patch update watch get list watch] deploymentconfigs.apps.openshift.io/scale [] [] [create delete deletecollection get list patch update watch get list watch] deploymentconfigs.apps.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] buildconfigs.build.openshift.io/webhooks [] [] [create delete deletecollection get list patch update watch get list watch] buildconfigs.build.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] buildlogs.build.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] imagestreamimages.image.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] imagestreammappings.image.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] imagestreamtags.image.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] routes.route.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] processedtemplates.template.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] templateconfigs.template.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] templateinstances.template.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] templates.template.openshift.io [] [] [create delete deletecollection 
get list patch update watch get list watch] serviceaccounts [] [] [create delete deletecollection get list patch update watch impersonate create delete deletecollection patch update get list watch] imagestreams/secrets [] [] [create delete deletecollection get list patch update watch] rolebindings [] [] [create delete deletecollection get list patch update watch] roles [] [] [create delete deletecollection get list patch update watch] rolebindings.authorization.openshift.io [] [] [create delete deletecollection get list patch update watch] roles.authorization.openshift.io [] [] [create delete deletecollection get list patch update watch] imagestreams.image.openshift.io/secrets [] [] [create delete deletecollection get list patch update watch] rolebindings.rbac.authorization.k8s.io [] [] [create delete deletecollection get list patch update watch] roles.rbac.authorization.k8s.io [] [] [create delete deletecollection get list patch update watch] networkpolicies.extensions [] [] [create delete deletecollection patch update create delete deletecollection get list patch update watch get list watch] networkpolicies.networking.k8s.io [] [] [create delete deletecollection patch update create delete deletecollection get list patch update watch get list watch] configmaps [] [] [create delete deletecollection patch update get list watch] endpoints [] [] [create delete deletecollection patch update get list watch] persistentvolumeclaims [] [] [create delete deletecollection patch update get list watch] pods [] [] [create delete deletecollection patch update get list watch] replicationcontrollers/scale [] [] [create delete deletecollection patch update get list watch] replicationcontrollers [] [] [create delete deletecollection patch update get list watch] services [] [] [create delete deletecollection patch update get list watch] daemonsets.apps [] [] [create delete deletecollection patch update get list watch] deployments.apps/scale [] [] [create delete deletecollection patch update get list watch] deployments.apps [] [] [create delete deletecollection patch update get list watch] replicasets.apps/scale [] [] [create delete deletecollection patch update get list watch] replicasets.apps [] [] [create delete deletecollection patch update get list watch] statefulsets.apps/scale [] [] [create delete deletecollection patch update get list watch] statefulsets.apps [] [] [create delete deletecollection patch update get list watch] horizontalpodautoscalers.autoscaling [] [] [create delete deletecollection patch update get list watch] cronjobs.batch [] [] [create delete deletecollection patch update get list watch] jobs.batch [] [] [create delete deletecollection patch update get list watch] daemonsets.extensions [] [] [create delete deletecollection patch update get list watch] deployments.extensions/scale [] [] [create delete deletecollection patch update get list watch] deployments.extensions [] [] [create delete deletecollection patch update get list watch] ingresses.extensions [] [] [create delete deletecollection patch update get list watch] replicasets.extensions/scale [] [] [create delete deletecollection patch update get list watch] replicasets.extensions [] [] [create delete deletecollection patch update get list watch] replicationcontrollers.extensions/scale [] [] [create delete deletecollection patch update get list watch] poddisruptionbudgets.policy [] [] [create delete deletecollection patch update get list watch] deployments.apps/rollback [] [] [create delete deletecollection patch update] 
deployments.extensions/rollback [] [] [create delete deletecollection patch update] catalogsources.operators.coreos.com [] [] [create update patch delete get list watch] clusterserviceversions.operators.coreos.com [] [] [create update patch delete get list watch] installplans.operators.coreos.com [] [] [create update patch delete get list watch] packagemanifests.operators.coreos.com [] [] [create update patch delete get list watch] subscriptions.operators.coreos.com [] [] [create update patch delete get list watch] buildconfigs/instantiate [] [] [create] buildconfigs/instantiatebinary [] [] [create] builds/clone [] [] [create] deploymentconfigrollbacks [] [] [create] deploymentconfigs/instantiate [] [] [create] deploymentconfigs/rollback [] [] [create] imagestreamimports [] [] [create] localresourceaccessreviews [] [] [create] localsubjectaccessreviews [] [] [create] podsecuritypolicyreviews [] [] [create] podsecuritypolicyselfsubjectreviews [] [] [create] podsecuritypolicysubjectreviews [] [] [create] resourceaccessreviews [] [] [create] routes/custom-host [] [] [create] subjectaccessreviews [] [] [create] subjectrulesreviews [] [] [create] deploymentconfigrollbacks.apps.openshift.io [] [] [create] deploymentconfigs.apps.openshift.io/instantiate [] [] [create] deploymentconfigs.apps.openshift.io/rollback [] [] [create] localsubjectaccessreviews.authorization.k8s.io [] [] [create] localresourceaccessreviews.authorization.openshift.io [] [] [create] localsubjectaccessreviews.authorization.openshift.io [] [] [create] resourceaccessreviews.authorization.openshift.io [] [] [create] subjectaccessreviews.authorization.openshift.io [] [] [create] subjectrulesreviews.authorization.openshift.io [] [] [create] buildconfigs.build.openshift.io/instantiate [] [] [create] buildconfigs.build.openshift.io/instantiatebinary [] [] [create] builds.build.openshift.io/clone [] [] [create] imagestreamimports.image.openshift.io [] [] [create] routes.route.openshift.io/custom-host [] [] [create] podsecuritypolicyreviews.security.openshift.io [] [] [create] podsecuritypolicyselfsubjectreviews.security.openshift.io [] [] [create] podsecuritypolicysubjectreviews.security.openshift.io [] [] [create] jenkins.build.openshift.io [] [] [edit view view admin edit view] builds [] [] [get create delete deletecollection get list patch update watch get list watch] builds.build.openshift.io [] [] [get create delete deletecollection get list patch update watch get list watch] projects [] [] [get delete get delete get patch update] projects.project.openshift.io [] [] [get delete get delete get patch update] namespaces [] [] [get get list watch] pods/attach [] [] [get list watch create delete deletecollection patch update] pods/exec [] [] [get list watch create delete deletecollection patch update] pods/portforward [] [] [get list watch create delete deletecollection patch update] pods/proxy [] [] [get list watch create delete deletecollection patch update] services/proxy [] [] [get list watch create delete deletecollection patch update] routes/status [] [] [get list watch update] routes.route.openshift.io/status [] [] [get list watch update] appliedclusterresourcequotas [] [] [get list watch] bindings [] [] [get list watch] builds/log [] [] [get list watch] deploymentconfigs/log [] [] [get list watch] deploymentconfigs/status [] [] [get list watch] events [] [] [get list watch] imagestreams/status [] [] [get list watch] limitranges [] [] [get list watch] namespaces/status [] [] [get list watch] pods/log [] [] [get list watch] 
pods/status [] [] [get list watch] replicationcontrollers/status [] [] [get list watch] resourcequotas/status [] [] [get list watch] resourcequotas [] [] [get list watch] resourcequotausages [] [] [get list watch] rolebindingrestrictions [] [] [get list watch] deploymentconfigs.apps.openshift.io/log [] [] [get list watch] deploymentconfigs.apps.openshift.io/status [] [] [get list watch] controllerrevisions.apps [] [] [get list watch] rolebindingrestrictions.authorization.openshift.io [] [] [get list watch] builds.build.openshift.io/log [] [] [get list watch] imagestreams.image.openshift.io/status [] [] [get list watch] appliedclusterresourcequotas.quota.openshift.io [] [] [get list watch] imagestreams/layers [] [] [get update get] imagestreams.image.openshift.io/layers [] [] [get update get] builds/details [] [] [update] builds.build.openshift.io/details [] [] [update] Name: basic-user Labels: <none> Annotations: openshift.io/description: A user that can get basic information about projects. rbac.authorization.kubernetes.io/autoupdate: true PolicyRule: Resources Non-Resource URLs Resource Names Verbs --------- ----------------- -------------- ----- selfsubjectrulesreviews [] [] [create] selfsubjectaccessreviews.authorization.k8s.io [] [] [create] selfsubjectrulesreviews.authorization.openshift.io [] [] [create] clusterroles.rbac.authorization.k8s.io [] [] [get list watch] clusterroles [] [] [get list] clusterroles.authorization.openshift.io [] [] [get list] storageclasses.storage.k8s.io [] [] [get list] users [] [~] [get] users.user.openshift.io [] [~] [get] projects [] [] [list watch] projects.project.openshift.io [] [] [list watch] projectrequests [] [] [list] projectrequests.project.openshift.io [] [] [list] Name: cluster-admin Labels: kubernetes.io/bootstrapping=rbac-defaults Annotations: rbac.authorization.kubernetes.io/autoupdate: true PolicyRule: Resources Non-Resource URLs Resource Names Verbs --------- ----------------- -------------- ----- *.* [] [] [*] [*] [] [*] ... 
To view the current set of cluster role bindings, which shows the users and groups that are bound to various roles: USD oc describe clusterrolebinding.rbac Example output Name: alertmanager-main Labels: <none> Annotations: <none> Role: Kind: ClusterRole Name: alertmanager-main Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount alertmanager-main openshift-monitoring Name: basic-users Labels: <none> Annotations: rbac.authorization.kubernetes.io/autoupdate: true Role: Kind: ClusterRole Name: basic-user Subjects: Kind Name Namespace ---- ---- --------- Group system:authenticated Name: cloud-credential-operator-rolebinding Labels: <none> Annotations: <none> Role: Kind: ClusterRole Name: cloud-credential-operator-role Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount default openshift-cloud-credential-operator Name: cluster-admin Labels: kubernetes.io/bootstrapping=rbac-defaults Annotations: rbac.authorization.kubernetes.io/autoupdate: true Role: Kind: ClusterRole Name: cluster-admin Subjects: Kind Name Namespace ---- ---- --------- Group system:masters Name: cluster-admins Labels: <none> Annotations: rbac.authorization.kubernetes.io/autoupdate: true Role: Kind: ClusterRole Name: cluster-admin Subjects: Kind Name Namespace ---- ---- --------- Group system:cluster-admins User system:admin Name: cluster-api-manager-rolebinding Labels: <none> Annotations: <none> Role: Kind: ClusterRole Name: cluster-api-manager-role Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount default openshift-machine-api ... 8.5. Viewing local roles and bindings You can use the oc CLI to view local roles and bindings by using the oc describe command. Prerequisites Install the oc CLI. Obtain permission to view the local roles and bindings: Users with the cluster-admin default cluster role bound cluster-wide can perform any action on any resource, including viewing local roles and bindings. Users with the admin default cluster role bound locally can view and manage roles and bindings in that project. Procedure To view the current set of local role bindings, which show the users and groups that are bound to various roles for the current project: USD oc describe rolebinding.rbac To view the local role bindings for a different project, add the -n flag to the command: USD oc describe rolebinding.rbac -n joe-project Example output Name: admin Labels: <none> Annotations: <none> Role: Kind: ClusterRole Name: admin Subjects: Kind Name Namespace ---- ---- --------- User kube:admin Name: system:deployers Labels: <none> Annotations: openshift.io/description: Allows deploymentconfigs in this namespace to rollout pods in this namespace. It is auto-managed by a controller; remove subjects to disa... Role: Kind: ClusterRole Name: system:deployer Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount deployer joe-project Name: system:image-builders Labels: <none> Annotations: openshift.io/description: Allows builds in this namespace to push images to this namespace. It is auto-managed by a controller; remove subjects to disable. Role: Kind: ClusterRole Name: system:image-builder Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount builder joe-project Name: system:image-pullers Labels: <none> Annotations: openshift.io/description: Allows all pods in this namespace to pull images from this namespace. It is auto-managed by a controller; remove subjects to disable. 
Role: Kind: ClusterRole Name: system:image-puller Subjects: Kind Name Namespace ---- ---- --------- Group system:serviceaccounts:joe-project 8.6. Adding roles to users You can use the oc adm administrator CLI to manage the roles and bindings. Binding, or adding, a role to users or groups gives the user or group the access that is granted by the role. You can add and remove roles to and from users and groups using oc adm policy commands. You can bind any of the default cluster roles to local users or groups in your project. Procedure Add a role to a user in a specific project: USD oc adm policy add-role-to-user <role> <user> -n <project> For example, you can add the admin role to the alice user in joe project by running: USD oc adm policy add-role-to-user admin alice -n joe Tip You can alternatively apply the following YAML to add the role to the user: apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: admin-0 namespace: joe roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: admin subjects: - apiGroup: rbac.authorization.k8s.io kind: User name: alice View the local role bindings and verify the addition in the output: USD oc describe rolebinding.rbac -n <project> For example, to view the local role bindings for the joe project: USD oc describe rolebinding.rbac -n joe Example output Name: admin Labels: <none> Annotations: <none> Role: Kind: ClusterRole Name: admin Subjects: Kind Name Namespace ---- ---- --------- User kube:admin Name: admin-0 Labels: <none> Annotations: <none> Role: Kind: ClusterRole Name: admin Subjects: Kind Name Namespace ---- ---- --------- User alice 1 Name: system:deployers Labels: <none> Annotations: openshift.io/description: Allows deploymentconfigs in this namespace to rollout pods in this namespace. It is auto-managed by a controller; remove subjects to disa... Role: Kind: ClusterRole Name: system:deployer Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount deployer joe Name: system:image-builders Labels: <none> Annotations: openshift.io/description: Allows builds in this namespace to push images to this namespace. It is auto-managed by a controller; remove subjects to disable. Role: Kind: ClusterRole Name: system:image-builder Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount builder joe Name: system:image-pullers Labels: <none> Annotations: openshift.io/description: Allows all pods in this namespace to pull images from this namespace. It is auto-managed by a controller; remove subjects to disable. Role: Kind: ClusterRole Name: system:image-puller Subjects: Kind Name Namespace ---- ---- --------- Group system:serviceaccounts:joe 1 The alice user has been added to the admins RoleBinding . 8.7. Creating a local role You can create a local role for a project and then bind it to a user. Procedure To create a local role for a project, run the following command: USD oc create role <name> --verb=<verb> --resource=<resource> -n <project> In this command, specify: <name> , the local role's name <verb> , a comma-separated list of the verbs to apply to the role <resource> , the resources that the role applies to <project> , the project name For example, to create a local role that allows a user to view pods in the blue project, run the following command: USD oc create role podview --verb=get --resource=pod -n blue To bind the new role to a user, run the following command: USD oc adm policy add-role-to-user podview user2 --role-namespace=blue -n blue 8.8. 
Creating a cluster role You can create a cluster role. Procedure To create a cluster role, run the following command: USD oc create clusterrole <name> --verb=<verb> --resource=<resource> In this command, specify: <name> , the cluster role's name <verb> , a comma-separated list of the verbs to apply to the role <resource> , the resources that the role applies to For example, to create a cluster role that allows a user to view pods, run the following command: USD oc create clusterrole podviewonly --verb=get --resource=pod 8.9. Local role binding commands When you manage a user or group's associated roles for local role bindings using the following operations, you can specify the project with the -n flag. If you do not specify a project, the current project is used. You can use the following commands for local RBAC management. Table 8.1. Local role binding operations Command Description USD oc adm policy who-can <verb> <resource> Indicates which users can perform an action on a resource. USD oc adm policy add-role-to-user <role> <username> Binds a specified role to specified users in the current project. USD oc adm policy remove-role-from-user <role> <username> Removes a given role from specified users in the current project. USD oc adm policy remove-user <username> Removes specified users and all of their roles in the current project. USD oc adm policy add-role-to-group <role> <groupname> Binds a given role to specified groups in the current project. USD oc adm policy remove-role-from-group <role> <groupname> Removes a given role from specified groups in the current project. USD oc adm policy remove-group <groupname> Removes specified groups and all of their roles in the current project. 8.10. Cluster role binding commands You can also manage cluster role bindings using the following operations. The -n flag is not used for these operations because cluster role bindings use non-namespaced resources. Table 8.2. Cluster role binding operations Command Description USD oc adm policy add-cluster-role-to-user <role> <username> Binds a given role to specified users for all projects in the cluster. USD oc adm policy remove-cluster-role-from-user <role> <username> Removes a given role from specified users for all projects in the cluster. USD oc adm policy add-cluster-role-to-group <role> <groupname> Binds a given role to specified groups for all projects in the cluster. USD oc adm policy remove-cluster-role-from-group <role> <groupname> Removes a given role from specified groups for all projects in the cluster. 8.11. Creating a cluster admin The cluster-admin role is required to perform administrator-level tasks on the OpenShift Container Platform cluster, such as modifying cluster resources. Prerequisites You must have created a user to define as the cluster admin. Procedure Define the user as a cluster admin: USD oc adm policy add-cluster-role-to-user cluster-admin <user>
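The individual oc create role and oc adm policy commands above can be combined into a short workflow. The following is a minimal sketch and not part of the product documentation: the project name blue, the role name podview, and the user name user2 are illustrative placeholders, and the --role-namespace flag is assumed to be needed whenever the bound role is a namespaced Role rather than a ClusterRole.

# Create a local role that can get and list pods in the blue project
# (project, role, and user names are hypothetical).
oc create role podview --verb=get,list --resource=pod -n blue

# Bind the local role to a user; --role-namespace marks podview as a
# namespaced Role instead of a ClusterRole.
oc adm policy add-role-to-user podview user2 --role-namespace=blue -n blue

# Check which users can now list pods in the project.
oc adm policy who-can list pod -n blue

# Inspect the resulting role binding, then remove the role again.
oc describe rolebinding.rbac -n blue
oc adm policy remove-role-from-user podview user2 --role-namespace=blue -n blue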
[ "oc describe clusterrole.rbac", "Name: admin Labels: kubernetes.io/bootstrapping=rbac-defaults Annotations: rbac.authorization.kubernetes.io/autoupdate: true PolicyRule: Resources Non-Resource URLs Resource Names Verbs --------- ----------------- -------------- ----- .packages.apps.redhat.com [] [] [* create update patch delete get list watch] imagestreams [] [] [create delete deletecollection get list patch update watch create get list watch] imagestreams.image.openshift.io [] [] [create delete deletecollection get list patch update watch create get list watch] secrets [] [] [create delete deletecollection get list patch update watch get list watch create delete deletecollection patch update] buildconfigs/webhooks [] [] [create delete deletecollection get list patch update watch get list watch] buildconfigs [] [] [create delete deletecollection get list patch update watch get list watch] buildlogs [] [] [create delete deletecollection get list patch update watch get list watch] deploymentconfigs/scale [] [] [create delete deletecollection get list patch update watch get list watch] deploymentconfigs [] [] [create delete deletecollection get list patch update watch get list watch] imagestreamimages [] [] [create delete deletecollection get list patch update watch get list watch] imagestreammappings [] [] [create delete deletecollection get list patch update watch get list watch] imagestreamtags [] [] [create delete deletecollection get list patch update watch get list watch] processedtemplates [] [] [create delete deletecollection get list patch update watch get list watch] routes [] [] [create delete deletecollection get list patch update watch get list watch] templateconfigs [] [] [create delete deletecollection get list patch update watch get list watch] templateinstances [] [] [create delete deletecollection get list patch update watch get list watch] templates [] [] [create delete deletecollection get list patch update watch get list watch] deploymentconfigs.apps.openshift.io/scale [] [] [create delete deletecollection get list patch update watch get list watch] deploymentconfigs.apps.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] buildconfigs.build.openshift.io/webhooks [] [] [create delete deletecollection get list patch update watch get list watch] buildconfigs.build.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] buildlogs.build.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] imagestreamimages.image.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] imagestreammappings.image.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] imagestreamtags.image.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] routes.route.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] processedtemplates.template.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] templateconfigs.template.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] templateinstances.template.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] templates.template.openshift.io [] [] [create delete deletecollection get list patch update watch get list watch] serviceaccounts [] [] [create delete 
deletecollection get list patch update watch impersonate create delete deletecollection patch update get list watch] imagestreams/secrets [] [] [create delete deletecollection get list patch update watch] rolebindings [] [] [create delete deletecollection get list patch update watch] roles [] [] [create delete deletecollection get list patch update watch] rolebindings.authorization.openshift.io [] [] [create delete deletecollection get list patch update watch] roles.authorization.openshift.io [] [] [create delete deletecollection get list patch update watch] imagestreams.image.openshift.io/secrets [] [] [create delete deletecollection get list patch update watch] rolebindings.rbac.authorization.k8s.io [] [] [create delete deletecollection get list patch update watch] roles.rbac.authorization.k8s.io [] [] [create delete deletecollection get list patch update watch] networkpolicies.extensions [] [] [create delete deletecollection patch update create delete deletecollection get list patch update watch get list watch] networkpolicies.networking.k8s.io [] [] [create delete deletecollection patch update create delete deletecollection get list patch update watch get list watch] configmaps [] [] [create delete deletecollection patch update get list watch] endpoints [] [] [create delete deletecollection patch update get list watch] persistentvolumeclaims [] [] [create delete deletecollection patch update get list watch] pods [] [] [create delete deletecollection patch update get list watch] replicationcontrollers/scale [] [] [create delete deletecollection patch update get list watch] replicationcontrollers [] [] [create delete deletecollection patch update get list watch] services [] [] [create delete deletecollection patch update get list watch] daemonsets.apps [] [] [create delete deletecollection patch update get list watch] deployments.apps/scale [] [] [create delete deletecollection patch update get list watch] deployments.apps [] [] [create delete deletecollection patch update get list watch] replicasets.apps/scale [] [] [create delete deletecollection patch update get list watch] replicasets.apps [] [] [create delete deletecollection patch update get list watch] statefulsets.apps/scale [] [] [create delete deletecollection patch update get list watch] statefulsets.apps [] [] [create delete deletecollection patch update get list watch] horizontalpodautoscalers.autoscaling [] [] [create delete deletecollection patch update get list watch] cronjobs.batch [] [] [create delete deletecollection patch update get list watch] jobs.batch [] [] [create delete deletecollection patch update get list watch] daemonsets.extensions [] [] [create delete deletecollection patch update get list watch] deployments.extensions/scale [] [] [create delete deletecollection patch update get list watch] deployments.extensions [] [] [create delete deletecollection patch update get list watch] ingresses.extensions [] [] [create delete deletecollection patch update get list watch] replicasets.extensions/scale [] [] [create delete deletecollection patch update get list watch] replicasets.extensions [] [] [create delete deletecollection patch update get list watch] replicationcontrollers.extensions/scale [] [] [create delete deletecollection patch update get list watch] poddisruptionbudgets.policy [] [] [create delete deletecollection patch update get list watch] deployments.apps/rollback [] [] [create delete deletecollection patch update] deployments.extensions/rollback [] [] [create delete deletecollection patch update] 
catalogsources.operators.coreos.com [] [] [create update patch delete get list watch] clusterserviceversions.operators.coreos.com [] [] [create update patch delete get list watch] installplans.operators.coreos.com [] [] [create update patch delete get list watch] packagemanifests.operators.coreos.com [] [] [create update patch delete get list watch] subscriptions.operators.coreos.com [] [] [create update patch delete get list watch] buildconfigs/instantiate [] [] [create] buildconfigs/instantiatebinary [] [] [create] builds/clone [] [] [create] deploymentconfigrollbacks [] [] [create] deploymentconfigs/instantiate [] [] [create] deploymentconfigs/rollback [] [] [create] imagestreamimports [] [] [create] localresourceaccessreviews [] [] [create] localsubjectaccessreviews [] [] [create] podsecuritypolicyreviews [] [] [create] podsecuritypolicyselfsubjectreviews [] [] [create] podsecuritypolicysubjectreviews [] [] [create] resourceaccessreviews [] [] [create] routes/custom-host [] [] [create] subjectaccessreviews [] [] [create] subjectrulesreviews [] [] [create] deploymentconfigrollbacks.apps.openshift.io [] [] [create] deploymentconfigs.apps.openshift.io/instantiate [] [] [create] deploymentconfigs.apps.openshift.io/rollback [] [] [create] localsubjectaccessreviews.authorization.k8s.io [] [] [create] localresourceaccessreviews.authorization.openshift.io [] [] [create] localsubjectaccessreviews.authorization.openshift.io [] [] [create] resourceaccessreviews.authorization.openshift.io [] [] [create] subjectaccessreviews.authorization.openshift.io [] [] [create] subjectrulesreviews.authorization.openshift.io [] [] [create] buildconfigs.build.openshift.io/instantiate [] [] [create] buildconfigs.build.openshift.io/instantiatebinary [] [] [create] builds.build.openshift.io/clone [] [] [create] imagestreamimports.image.openshift.io [] [] [create] routes.route.openshift.io/custom-host [] [] [create] podsecuritypolicyreviews.security.openshift.io [] [] [create] podsecuritypolicyselfsubjectreviews.security.openshift.io [] [] [create] podsecuritypolicysubjectreviews.security.openshift.io [] [] [create] jenkins.build.openshift.io [] [] [edit view view admin edit view] builds [] [] [get create delete deletecollection get list patch update watch get list watch] builds.build.openshift.io [] [] [get create delete deletecollection get list patch update watch get list watch] projects [] [] [get delete get delete get patch update] projects.project.openshift.io [] [] [get delete get delete get patch update] namespaces [] [] [get get list watch] pods/attach [] [] [get list watch create delete deletecollection patch update] pods/exec [] [] [get list watch create delete deletecollection patch update] pods/portforward [] [] [get list watch create delete deletecollection patch update] pods/proxy [] [] [get list watch create delete deletecollection patch update] services/proxy [] [] [get list watch create delete deletecollection patch update] routes/status [] [] [get list watch update] routes.route.openshift.io/status [] [] [get list watch update] appliedclusterresourcequotas [] [] [get list watch] bindings [] [] [get list watch] builds/log [] [] [get list watch] deploymentconfigs/log [] [] [get list watch] deploymentconfigs/status [] [] [get list watch] events [] [] [get list watch] imagestreams/status [] [] [get list watch] limitranges [] [] [get list watch] namespaces/status [] [] [get list watch] pods/log [] [] [get list watch] pods/status [] [] [get list watch] replicationcontrollers/status [] [] [get list 
watch] resourcequotas/status [] [] [get list watch] resourcequotas [] [] [get list watch] resourcequotausages [] [] [get list watch] rolebindingrestrictions [] [] [get list watch] deploymentconfigs.apps.openshift.io/log [] [] [get list watch] deploymentconfigs.apps.openshift.io/status [] [] [get list watch] controllerrevisions.apps [] [] [get list watch] rolebindingrestrictions.authorization.openshift.io [] [] [get list watch] builds.build.openshift.io/log [] [] [get list watch] imagestreams.image.openshift.io/status [] [] [get list watch] appliedclusterresourcequotas.quota.openshift.io [] [] [get list watch] imagestreams/layers [] [] [get update get] imagestreams.image.openshift.io/layers [] [] [get update get] builds/details [] [] [update] builds.build.openshift.io/details [] [] [update] Name: basic-user Labels: <none> Annotations: openshift.io/description: A user that can get basic information about projects. rbac.authorization.kubernetes.io/autoupdate: true PolicyRule: Resources Non-Resource URLs Resource Names Verbs --------- ----------------- -------------- ----- selfsubjectrulesreviews [] [] [create] selfsubjectaccessreviews.authorization.k8s.io [] [] [create] selfsubjectrulesreviews.authorization.openshift.io [] [] [create] clusterroles.rbac.authorization.k8s.io [] [] [get list watch] clusterroles [] [] [get list] clusterroles.authorization.openshift.io [] [] [get list] storageclasses.storage.k8s.io [] [] [get list] users [] [~] [get] users.user.openshift.io [] [~] [get] projects [] [] [list watch] projects.project.openshift.io [] [] [list watch] projectrequests [] [] [list] projectrequests.project.openshift.io [] [] [list] Name: cluster-admin Labels: kubernetes.io/bootstrapping=rbac-defaults Annotations: rbac.authorization.kubernetes.io/autoupdate: true PolicyRule: Resources Non-Resource URLs Resource Names Verbs --------- ----------------- -------------- ----- *.* [] [] [*] [*] [] [*]", "oc describe clusterrolebinding.rbac", "Name: alertmanager-main Labels: <none> Annotations: <none> Role: Kind: ClusterRole Name: alertmanager-main Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount alertmanager-main openshift-monitoring Name: basic-users Labels: <none> Annotations: rbac.authorization.kubernetes.io/autoupdate: true Role: Kind: ClusterRole Name: basic-user Subjects: Kind Name Namespace ---- ---- --------- Group system:authenticated Name: cloud-credential-operator-rolebinding Labels: <none> Annotations: <none> Role: Kind: ClusterRole Name: cloud-credential-operator-role Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount default openshift-cloud-credential-operator Name: cluster-admin Labels: kubernetes.io/bootstrapping=rbac-defaults Annotations: rbac.authorization.kubernetes.io/autoupdate: true Role: Kind: ClusterRole Name: cluster-admin Subjects: Kind Name Namespace ---- ---- --------- Group system:masters Name: cluster-admins Labels: <none> Annotations: rbac.authorization.kubernetes.io/autoupdate: true Role: Kind: ClusterRole Name: cluster-admin Subjects: Kind Name Namespace ---- ---- --------- Group system:cluster-admins User system:admin Name: cluster-api-manager-rolebinding Labels: <none> Annotations: <none> Role: Kind: ClusterRole Name: cluster-api-manager-role Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount default openshift-machine-api", "oc describe rolebinding.rbac", "oc describe rolebinding.rbac -n joe-project", "Name: admin Labels: <none> Annotations: <none> Role: Kind: ClusterRole Name: admin Subjects: Kind Name Namespace ---- 
---- --------- User kube:admin Name: system:deployers Labels: <none> Annotations: openshift.io/description: Allows deploymentconfigs in this namespace to rollout pods in this namespace. It is auto-managed by a controller; remove subjects to disa Role: Kind: ClusterRole Name: system:deployer Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount deployer joe-project Name: system:image-builders Labels: <none> Annotations: openshift.io/description: Allows builds in this namespace to push images to this namespace. It is auto-managed by a controller; remove subjects to disable. Role: Kind: ClusterRole Name: system:image-builder Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount builder joe-project Name: system:image-pullers Labels: <none> Annotations: openshift.io/description: Allows all pods in this namespace to pull images from this namespace. It is auto-managed by a controller; remove subjects to disable. Role: Kind: ClusterRole Name: system:image-puller Subjects: Kind Name Namespace ---- ---- --------- Group system:serviceaccounts:joe-project", "oc adm policy add-role-to-user <role> <user> -n <project>", "oc adm policy add-role-to-user admin alice -n joe", "apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: admin-0 namespace: joe roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: admin subjects: - apiGroup: rbac.authorization.k8s.io kind: User name: alice", "oc describe rolebinding.rbac -n <project>", "oc describe rolebinding.rbac -n joe", "Name: admin Labels: <none> Annotations: <none> Role: Kind: ClusterRole Name: admin Subjects: Kind Name Namespace ---- ---- --------- User kube:admin Name: admin-0 Labels: <none> Annotations: <none> Role: Kind: ClusterRole Name: admin Subjects: Kind Name Namespace ---- ---- --------- User alice 1 Name: system:deployers Labels: <none> Annotations: openshift.io/description: Allows deploymentconfigs in this namespace to rollout pods in this namespace. It is auto-managed by a controller; remove subjects to disa Role: Kind: ClusterRole Name: system:deployer Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount deployer joe Name: system:image-builders Labels: <none> Annotations: openshift.io/description: Allows builds in this namespace to push images to this namespace. It is auto-managed by a controller; remove subjects to disable. Role: Kind: ClusterRole Name: system:image-builder Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount builder joe Name: system:image-pullers Labels: <none> Annotations: openshift.io/description: Allows all pods in this namespace to pull images from this namespace. It is auto-managed by a controller; remove subjects to disable. Role: Kind: ClusterRole Name: system:image-puller Subjects: Kind Name Namespace ---- ---- --------- Group system:serviceaccounts:joe", "oc create role <name> --verb=<verb> --resource=<resource> -n <project>", "oc create role podview --verb=get --resource=pod -n blue", "oc adm policy add-role-to-user podview user2 --role-namespace=blue -n blue", "oc create clusterrole <name> --verb=<verb> --resource=<resource>", "oc create clusterrole podviewonly --verb=get --resource=pod", "oc adm policy add-cluster-role-to-user cluster-admin <user>" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/authentication_and_authorization/using-rbac
Chapter 2. Dependency management
Chapter 2. Dependency management A specific Red Hat build of Apache Camel for Quarkus release is intended to work only with a specific Quarkus release. 2.1. Quarkus tooling for starting a new project The most straightforward way to get the dependency versions right in a new project is to use one of the Quarkus tools: https://code.quarkus.io - an online project generator, Quarkus Maven plugin These tools allow you to select extensions and scaffold a new Maven project. Tip The universe of available extensions spans Quarkus Core, Camel Quarkus and several other participating third-party projects, such as Hazelcast, Cassandra, Kogito and OptaPlanner. The generated pom.xml will look similar to the following: <project> ... <properties> <quarkus.platform.artifact-id>quarkus-bom</quarkus.platform.artifact-id> <quarkus.platform.group-id>com.redhat.quarkus.platform</quarkus.platform.group-id> <quarkus.platform.version> <!-- The latest 3.15.x version from https://maven.repository.redhat.com/ga/com/redhat/quarkus/platform/quarkus-bom --> </quarkus.platform.version> ... </properties> <dependencyManagement> <dependencies> <!-- The BOMs managing the dependency versions --> <dependency> <groupId>USD{quarkus.platform.group-id}</groupId> <artifactId>quarkus-bom</artifactId> <version>USD{quarkus.platform.version}</version> <type>pom</type> <scope>import</scope> </dependency> <dependency> <groupId>USD{quarkus.platform.group-id}</groupId> <artifactId>quarkus-camel-bom</artifactId> <version>USD{quarkus.platform.version}</version> <type>pom</type> <scope>import</scope> </dependency> </dependencies> </dependencyManagement> <dependencies> <!-- The extensions you chose in the project generator tool --> <dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-sql</artifactId> <!-- No explicit version required here and below --> </dependency> ... </dependencies> ... </project> Note BOM stands for "Bill of Materials" - it is a pom.xml whose main purpose is to manage the versions of artifacts so that end users importing the BOM in their projects do not need to care which particular versions of the artifacts are supposed to work together. In other words, having a BOM imported in the <dependencyManagement> section of your pom.xml allows you to avoid specifying versions for the dependencies managed by the given BOM. Which particular BOMs end up in the pom.xml file depends on the extensions you have selected in the generator tool. The generator tools take care to select a minimal consistent set. If you choose to add an extension at a later point that is not managed by any of the BOMs in your pom.xml file, you do not need to search for the appropriate BOM manually. With the quarkus-maven-plugin you can select the extension, and the tool adds the appropriate BOM as required. You can also use the quarkus-maven-plugin to upgrade the BOM versions. The com.redhat.quarkus.platform BOMs are aligned with each other, which means that if an artifact is managed in more than one BOM, it is always managed with the same version. This has the advantage that application developers do not need to worry about the compatibility of the individual artifacts that may come from various independent projects. 2.2. Combining with other BOMs When combining camel-quarkus-bom with any other BOM, think carefully about the order in which you import them, because the order of imports defines the precedence. For example, if my-foo-bom is imported before camel-quarkus-bom, the versions defined in my-foo-bom take precedence.
Whether that is what you want depends on whether there are any overlaps between my-foo-bom and camel-quarkus-bom, and on whether the versions that take precedence work with the rest of the artifacts managed in camel-quarkus-bom .
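As a practical illustration of the BOM-driven versioning described above, the following shell sketch shows how an extension can be added to an already generated project so that its version stays managed by the imported BOMs. It is a sketch under stated assumptions: camel-quarkus-log is only a sample extension name, and the Maven wrapper ( ./mvnw ) is assumed to have been generated with the project.

# Add an extension to an existing project; the quarkus-maven-plugin adds the
# managed dependency (and any additional BOM it needs) to pom.xml without an
# explicit <version> element.
./mvnw quarkus:add-extension -Dextensions=camel-quarkus-log

# List the extensions installed in the project and those available
# from the configured platform.
./mvnw quarkus:list-extensions

# Inspect the versions that the imported BOMs actually resolve.
./mvnw dependency:tree -Dincludes=org.apache.camel.quarkus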
[ "<project> <properties> <quarkus.platform.artifact-id>quarkus-bom</quarkus.platform.artifact-id> <quarkus.platform.group-id>com.redhat.quarkus.platform</quarkus.platform.group-id> <quarkus.platform.version> <!-- The latest 3.15.x version from https://maven.repository.redhat.com/ga/com/redhat/quarkus/platform/quarkus-bom --> </quarkus.platform.version> </properties> <dependencyManagement> <dependencies> <!-- The BOMs managing the dependency versions --> <dependency> <groupId>USD{quarkus.platform.group-id}</groupId> <artifactId>quarkus-bom</artifactId> <version>USD{quarkus.platform.version}</version> <type>pom</type> <scope>import</scope> </dependency> <dependency> <groupId>USD{quarkus.platform.group-id}</groupId> <artifactId>quarkus-camel-bom</artifactId> <version>USD{quarkus.platform.version}</version> <type>pom</type> <scope>import</scope> </dependency> </dependencies> </dependencyManagement> <dependencies> <!-- The extensions you chose in the project generator tool --> <dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-sql</artifactId> <!-- No explicit version required here and below --> </dependency> </dependencies> </project>" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.8/html/developing_applications_with_red_hat_build_of_apache_camel_for_quarkus/camel-quarkus-extensions-dependency-management
14.9. Shutting Down, Rebooting, and Forcing Shutdown of a Guest Virtual Machine
14.9. Shutting Down, Rebooting, and Forcing Shutdown of a Guest Virtual Machine This section provides information about shutting down, rebooting, and forcing shutdown of a guest virtual machine. 14.9.1. Shutting Down a Guest Virtual Machine Shut down a guest virtual machine using the virsh shutdown command: You can control how the guest virtual machine behaves when it shuts down by modifying the on_shutdown parameter in the guest virtual machine's configuration file.
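The following shell sketch shows the related virsh commands side by side. The guest name guest1 is a placeholder, and the --mode value assumes that the corresponding shutdown method (ACPI, or the QEMU guest agent for agent mode) is available in the guest.

# Gracefully shut down a guest, then confirm its state.
virsh shutdown guest1
virsh list --all

# Reboot a guest, explicitly selecting the ACPI method.
virsh reboot guest1 --mode acpi

# Force off an unresponsive guest (equivalent to removing power).
virsh destroy guest1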
[ "virsh shutdown {domain-id, domain-name or domain-uuid} [--mode method ]" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/sect-managing_guest_virtual_machines_with_virsh-shutting_down_rebooting_and_force_shutdown_of_a_guest_virtual_machine
Chapter 10. Hammer cheat sheet
Chapter 10. Hammer cheat sheet Hammer is a command-line tool provided with Red Hat Satellite 6. You can use Hammer to configure and manage a Satellite Server by using either CLI commands or shell script automation. The following cheat sheet provides a condensed overview of essential Hammer commands. 10.1. General information Subcommand Description and tasks --help Display hammer commands and options, append after a subcommand to get more information org The setting is organization-specific, append --organization org_name , or set default organization with: loc The setting is location-specific, append --location loc_name , or set default location with: Note: This cheat sheet assumes saved credentials in ~/.hammer/cli_config.yml . For more information, see Chapter 3, Hammer authentication . 10.2. Organizations, locations, and repositories Subcommand Description and tasks organization Create an organization: List organizations: location See the options for organization subscription org Upload a subscription manifest: repository-set org Enable a repository: repository org Synchronize a repository: Create a custom repository: Upload content to a custom repository: 10.3. Content life cycles Subcommand Description and tasks lifecycle-environment org Create a life cycle environment: List life cycle environments: content-view org Create a content view: Add repositories to a content view: Add Puppet modules to a content view: Publishing a content view: Promoting a content view: Incremental update of a content view: 10.4. Provisioning environments Subcommand Description and tasks domain Create a domain: subnet org loc Add a subnet: compute-resource org loc Create a compute resource: medium Add an installation medium: partition-table Add a partition table: template Add a provisioning template: os Add an operating system: 10.5. Activation keys Subcommand Description and tasks activation-key org Create an activation key: Add a subscription to the activation key: 10.6. Users and permissions Subcommand Description and tasks user org Create a user: Add a role to a user: user-group Create a user group: Add a role to a user group: role Create a role: filter Create a filter and add it to a role: 10.7. Errata Subcommand Description and tasks erratum List errata: Find erratum by CVE: Inspect erratum: host List errata applicable to a host: Apply errata to a host: 10.8. Hosts Subcommand Description and tasks hostgroup org loc Create a host group: Add an activation key to a host group: host org loc Create a host (inheriting parameters from a host group): Remove the host from host group: job-template Add a job template for remote execution: job-invocation Start a remote job: Monitor the remote job: 10.9. Tasks Subcommand Description and tasks task List all tasks:
[ "hammer defaults add --param-name organization_id --param-value org_ID", "hammer defaults add --param-name location_id --param-value loc_ID", "hammer organization create --name org_name", "hammer organization list", "hammer subscription upload --file path", "hammer repository-set enable --product prod_name --basearch base_arch --releasever rel_v --name repo_name", "hammer repository synchronize --product prod_name --name repo_name", "hammer repository create --product prod_name --content-type cont_type --publish-via-http true --url repo_url --name repo_name", "hammer repository upload-content --product prod_name --id repo_id --path path_to_dir", "hammer lifecycle-environment create --name env_name --description env_desc --prior prior_env_name", "hammer lifecycle-environment list", "hammer content-view create --name cv_n --repository-ids repo_ID1,... --description cv_description", "hammer content-view add-repository --name cv_n --repository-id repo_ID", "hammer content-view puppet-module add --content-view cv_n --name module_name", "hammer content-view publish --id cv_ID", "hammer content-view version promote --content-view cv_n --to-lifecycle-environment env_name", "hammer content-view version incremental-update --content-view-version-id cv_ID --packages pkg_n1,... --lifecycle-environment-ids env_ID1,", "hammer domain create --name domain_name", "hammer subnet create --name subnet_name --organization-ids org_ID1,... --location-ids loc_ID1,... --domain-ids dom_ID1,... --boot-mode boot_mode --network network_address --mask netmask --ipam ipam", "hammer compute-resource create --name cr_name --organization-ids org_ID1,... --location-ids loc_ID1,... --provider provider_name", "hammer medium create --name med_name --path path_to_medium", "hammer partition-table create --name tab_name --path path_to_file --os-family os_family", "hammer template create --name tmp_name --file path_to_template", "hammer os create --name os_name --version version_num", "hammer activation-key create --name ak_name --content-view cv_n --lifecycle-environment lc_name", "hammer activation-key add-subscription --id ak_ID --subscription-id sub_ID", "hammer user create --login user_name --mail user_mail --auth-source-id 1 --organization-ids org_ID1,org_ID2,", "hammer user add-role --id user_id --role role_name", "hammer user-group create --name ug_name", "hammer user-group add-role --id ug_id --role role_name", "hammer role create --name role_name", "hammer filter create --role role_name --permission-ids perm_ID1,perm_ID2,", "hammer erratum list", "hammer erratum list --cve CVE", "hammer erratum info --id err_ID", "hammer host errata list --host host_name", "hammer host errata apply --host host_name --errata-ids err_ID1,err_ID2,", "hammer hostgroup create --name hg_name --puppet-environment env_name --architecture arch_name --domain domain_name --subnet subnet_name --puppet-proxy proxy_name --puppet-ca-proxy ca-proxy_name --operatingsystem os_name --partition-table table_name --medium medium_name --organization-ids org_ID1,... 
--location-ids loc_ID1,", "hammer hostgroup set-parameter --hostgroup \"hg_name\" --name \"kt_activation_keys\" --value key_name", "hammer host create --name host_name --hostgroup hg_name --interface=\"primary=true, mac= mac_addr , ip= ip_addr , provision=true\" --organization-id org_ID --location-id loc_ID --ask-root-password yes", "hammer host update --name host_name --hostgroup NIL", "hammer job-template create --file path --name template_name --provider-type SSH --job-category category_name", "hammer job-invocation create --job-template template_name --inputs key1= value,... --search-query query", "hammer job-invocation output --id job_id --host host_name", "hammer task list Monitor progress of a running task: hammer task progress --id task_ID" ]
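The cheat sheet entries above can be chained into a short provisioning session. The following sketch is illustrative only: the organization, life cycle environment, content view, and activation key names are placeholders, and it assumes saved credentials, an existing content view, and the default Library environment.

# Create an organization, then record its ID as the default so later
# commands do not need an explicit --organization option.
hammer organization create --name "ACME"
hammer defaults add --param-name organization_id --param-value 1   # use the ID reported above

# Create a life cycle environment that follows the built-in Library environment.
hammer lifecycle-environment create --name "Dev" --description "Development" --prior "Library"

# Create an activation key tied to an existing content view (cv-base is hypothetical).
hammer activation-key create --name "ak-dev" --content-view "cv-base" --lifecycle-environment "Dev"

# Verify the results.
hammer organization list
hammer lifecycle-environment list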
https://docs.redhat.com/en/documentation/red_hat_satellite/6.16/html/using_the_hammer_cli_tool/hammer-cheat-sheet
5.8.2. Changing the Default Context
5.8.2. Changing the Default Context As mentioned in Section 5.7, "The file_t and default_t Types" , on file systems that support extended attributes, when a file that lacks an SELinux context on disk is accessed, it is treated as if it had a default context as defined by SELinux policy. In common policies, this default context uses the file_t type. If it is desirable to use a different default context, mount the file system with the defcontext option. The following example mounts a newly-created file system (on /dev/sda2 ) to the newly-created /test/ directory. It assumes that there are no rules in /etc/selinux/targeted/contexts/files/ that define a context for the /test/ directory: In this example: the defcontext option defines that system_u:object_r:samba_share_t:s0 is "the default security context for unlabeled files" [9] . when mounted, the root directory ( /test/ ) of the file system is treated as if it is labeled with the context specified by defcontext (this label is not stored on disk). This affects the labeling for files created under /test/ : new files inherit the samba_share_t type, and these labels are stored on disk. files created under /test/ while the file system was mounted with a defcontext option retain their labels. [9] Morris, James. "Filesystem Labeling in SELinux". Published 1 October 2004. Accessed 14 October 2008: http://www.linuxjournal.com/article/7426 .
[ "~]# mount /dev/sda2 /test/ -o defcontext=\"system_u:object_r:samba_share_t:s0\"" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/security-enhanced_linux/sect-security-enhanced_linux-mounting_file_systems-changing_the_default_context
14.4. Retrieving ACLs
14.4. Retrieving ACLs To determine the existing ACLs for a file or directory, use the getfacl command: It returns output similar to the following: If a directory is specified and it has a default ACL, the default ACL is also displayed, as in the following example:
[ "getfacl <filename>", "file: file owner: andrius group: andrius user::rw- user:smoore:r-- group::r-- mask::r-- other::r--", "file: file owner: andrius group: andrius user::rw- user:smoore:r-- group::r-- mask::r-- other::r-- default:user::rwx default:user:andrius:rwx default:group::r-x default:mask::rwx default:other::r-x" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/Access_Control_Lists-Retrieving_ACLs
function::tcpmib_local_port
function::tcpmib_local_port Name function::tcpmib_local_port - Get the local port Synopsis Arguments sk pointer to a struct inet_sock Description Returns the source port ( sport ) from a struct inet_sock in host order.
[ "tcpmib_local_port:long(sk:long)" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-tcpmib-local-port
Chapter 2. OpenShift CLI (oc)
Chapter 2. OpenShift CLI (oc) 2.1. Getting started with the OpenShift CLI 2.1.1. About the OpenShift CLI With the OpenShift CLI ( oc ), you can create applications and manage OpenShift Container Platform projects from a terminal. The OpenShift CLI is ideal in the following situations: Working directly with project source code Scripting OpenShift Container Platform operations Managing projects while restricted by bandwidth resources and the web console is unavailable 2.1.2. Installing the OpenShift CLI You can install the OpenShift CLI ( oc ) either by downloading the binary or by using an RPM. 2.1.2.1. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.14. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.14 Linux Client entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.14 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.14 macOS Client entry and save the file. Note For macOS arm64, choose the OpenShift v4.14 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification Verify your installation by using an oc command: USD oc <command> 2.1.2.2. Installing the OpenShift CLI by using the web console You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a web console. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.14. Download and install the new version of oc . 2.1.2.2.1. 
Installing the OpenShift CLI on Linux using the web console You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure From the web console, click ? . Click Command Line Tools . Select appropriate oc binary for your Linux platform, and then click Download oc for Linux . Save the file. Unpack the archive. USD tar xvf <file> Move the oc binary to a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH After you install the OpenShift CLI, it is available using the oc command: USD oc <command> 2.1.2.2.2. Installing the OpenShift CLI on Windows using the web console You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure From the web console, click ? . Click Command Line Tools . Select the oc binary for Windows platform, and then click Download oc for Windows for x86_64 . Save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> 2.1.2.2.3. Installing the OpenShift CLI on macOS using the web console You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure From the web console, click ? . Click Command Line Tools . Select the oc binary for macOS platform, and then click Download oc for Mac for x86_64 . Note For macOS arm64, click Download oc for Mac for ARM 64 . Save the file. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH After you install the OpenShift CLI, it is available using the oc command: USD oc <command> 2.1.2.3. Installing the OpenShift CLI by using an RPM For Red Hat Enterprise Linux (RHEL), you can install the OpenShift CLI ( oc ) as an RPM if you have an active OpenShift Container Platform subscription on your Red Hat account. Important You must install oc for RHEL 9 by downloading the binary. Installing oc by using an RPM package is not supported on Red Hat Enterprise Linux (RHEL) 9. Prerequisites Must have root or sudo privileges. Procedure Register with Red Hat Subscription Manager: # subscription-manager register Pull the latest subscription data: # subscription-manager refresh List the available subscriptions: # subscription-manager list --available --matches '*OpenShift*' In the output for the command, find the pool ID for an OpenShift Container Platform subscription and attach the subscription to the registered system: # subscription-manager attach --pool=<pool_id> Enable the repositories required by OpenShift Container Platform 4.14. # subscription-manager repos --enable="rhocp-4.14-for-rhel-8-x86_64-rpms" Install the openshift-clients package: # yum install openshift-clients Verification Verify your installation by using an oc command: USD oc <command> 2.1.2.4. Installing the OpenShift CLI by using Homebrew For macOS, you can install the OpenShift CLI ( oc ) by using the Homebrew package manager. Prerequisites You must have Homebrew ( brew ) installed. Procedure Install the openshift-cli package by running the following command: USD brew install openshift-cli Verification Verify your installation by using an oc command: USD oc <command> 2.1.3. 
Logging in to the OpenShift CLI You can log in to the OpenShift CLI ( oc ) to access and manage your cluster. Prerequisites You must have access to an OpenShift Container Platform cluster. The OpenShift CLI ( oc ) is installed. Note To access a cluster that is accessible only over an HTTP proxy server, you can set the HTTP_PROXY , HTTPS_PROXY and NO_PROXY variables. These environment variables are respected by the oc CLI so that all communication with the cluster goes through the HTTP proxy. Authentication headers are sent only when using HTTPS transport. Procedure Enter the oc login command and pass in a user name: USD oc login -u user1 When prompted, enter the required information: Example output Server [https://localhost:8443]: https://openshift.example.com:6443 1 The server uses a certificate signed by an unknown authority. You can bypass the certificate check, but any data you send to the server could be intercepted by others. Use insecure connections? (y/n): y 2 Authentication required for https://openshift.example.com:6443 (openshift) Username: user1 Password: 3 Login successful. You don't have any projects. You can try to create a new project, by running oc new-project <projectname> Welcome! See 'oc help' to get started. 1 Enter the OpenShift Container Platform server URL. 2 Enter whether to use insecure connections. 3 Enter the user's password. Note If you are logged in to the web console, you can generate an oc login command that includes your token and server information. You can use the command to log in to the OpenShift Container Platform CLI without the interactive prompts. To generate the command, select Copy login command from the username drop-down menu at the top right of the web console. You can now create a project or issue other commands for managing your cluster. 2.1.4. Logging in to the OpenShift CLI using a web browser You can log in to the OpenShift CLI ( oc ) with the help of a web browser to access and manage your cluster. This allows users to avoid inserting their access token into the command line. Warning Logging in to the CLI through the web browser runs a server on localhost with HTTP, not HTTPS; use with caution on multi-user workstations. Prerequisites You must have access to an OpenShift Container Platform cluster. You must have installed the OpenShift CLI ( oc ). You must have a browser installed. Procedure Enter the oc login command with the --web flag: USD oc login <cluster_url> --web 1 1 Optionally, you can specify the server URL and callback port. For example, oc login <cluster_url> --web --callback-port 8280 localhost:8443 . The web browser opens automatically. If it does not, click the link in the command output. If you do not specify the OpenShift Container Platform server oc tries to open the web console of the cluster specified in the current oc configuration file. If no oc configuration exists, oc prompts interactively for the server URL. Example output Opening login URL in the default browser: https://openshift.example.com Opening in existing browser session. If more than one identity provider is available, select your choice from the options provided. Enter your username and password into the corresponding browser fields. After you are logged in, the browser displays the text access token received successfully; please return to your terminal . Check the CLI for a login confirmation. Example output Login successful. You don't have any projects. 
You can try to create a new project, by running oc new-project <projectname> Note The web console defaults to the profile used in the session. To switch between Administrator and Developer profiles, log out of the OpenShift Container Platform web console and clear the cache. You can now create a project or issue other commands for managing your cluster. 2.1.5. Using the OpenShift CLI Review the following sections to learn how to complete common tasks using the CLI. 2.1.5.1. Creating a project Use the oc new-project command to create a new project. USD oc new-project my-project Example output Now using project "my-project" on server "https://openshift.example.com:6443". 2.1.5.2. Creating a new app Use the oc new-app command to create a new application. USD oc new-app https://github.com/sclorg/cakephp-ex Example output --> Found image 40de956 (9 days old) in imagestream "openshift/php" under tag "7.2" for "php" ... Run 'oc status' to view your app. 2.1.5.3. Viewing pods Use the oc get pods command to view the pods for the current project. Note When you run oc inside a pod and do not specify a namespace, the namespace of the pod is used by default. USD oc get pods -o wide Example output NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE cakephp-ex-1-build 0/1 Completed 0 5m45s 10.131.0.10 ip-10-0-141-74.ec2.internal <none> cakephp-ex-1-deploy 0/1 Completed 0 3m44s 10.129.2.9 ip-10-0-147-65.ec2.internal <none> cakephp-ex-1-ktz97 1/1 Running 0 3m33s 10.128.2.11 ip-10-0-168-105.ec2.internal <none> 2.1.5.4. Viewing pod logs Use the oc logs command to view logs for a particular pod. USD oc logs cakephp-ex-1-deploy Example output --> Scaling cakephp-ex-1 to 1 --> Success 2.1.5.5. Viewing the current project Use the oc project command to view the current project. USD oc project Example output Using project "my-project" on server "https://openshift.example.com:6443". 2.1.5.6. Viewing the status for the current project Use the oc status command to view information about the current project, such as services, deployments, and build configs. USD oc status Example output In project my-project on server https://openshift.example.com:6443 svc/cakephp-ex - 172.30.236.80 ports 8080, 8443 dc/cakephp-ex deploys istag/cakephp-ex:latest <- bc/cakephp-ex source builds https://github.com/sclorg/cakephp-ex on openshift/php:7.2 deployment #1 deployed 2 minutes ago - 1 pod 3 infos identified, use 'oc status --suggest' to see details. 2.1.5.7. Listing supported API resources Use the oc api-resources command to view the list of supported API resources on the server. USD oc api-resources Example output NAME SHORTNAMES APIGROUP NAMESPACED KIND bindings true Binding componentstatuses cs false ComponentStatus configmaps cm true ConfigMap ... 2.1.6. Getting help You can get help with CLI commands and OpenShift Container Platform resources in the following ways: Use oc help to get a list and description of all available CLI commands: Example: Get general help for the CLI USD oc help Example output OpenShift Client This client helps you develop, build, deploy, and run your applications on any OpenShift or Kubernetes compatible platform. It also includes the administrative commands for managing a cluster under the 'adm' subcommand. Usage: oc [flags] Basic Commands: login Log in to a server new-project Request a new project new-app Create a new application ... 
Use the --help flag to get help about a specific CLI command: Example: Get help for the oc create command USD oc create --help Example output Create a resource by filename or stdin JSON and YAML formats are accepted. Usage: oc create -f FILENAME [flags] ... Use the oc explain command to view the description and fields for a particular resource: Example: View documentation for the Pod resource USD oc explain pods Example output KIND: Pod VERSION: v1 DESCRIPTION: Pod is a collection of containers that can run on a host. This resource is created by clients and scheduled onto hosts. FIELDS: apiVersion <string> APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#resources ... 2.1.7. Logging out of the OpenShift CLI You can log out of the OpenShift CLI to end your current session. Use the oc logout command. USD oc logout Example output Logged "user1" out on "https://openshift.example.com" This deletes the saved authentication token from the server and removes it from your configuration file. 2.2. Configuring the OpenShift CLI 2.2.1. Enabling tab completion You can enable tab completion for the Bash or Zsh shells. 2.2.1.1. Enabling tab completion for Bash After you install the OpenShift CLI ( oc ), you can enable tab completion to automatically complete oc commands or suggest options when you press Tab. The following procedure enables tab completion for the Bash shell. Prerequisites You must have the OpenShift CLI ( oc ) installed. You must have the package bash-completion installed. Procedure Save the Bash completion code to a file: USD oc completion bash > oc_bash_completion Copy the file to /etc/bash_completion.d/ : USD sudo cp oc_bash_completion /etc/bash_completion.d/ You can also save the file to a local directory and source it from your .bashrc file instead. Tab completion is enabled when you open a new terminal. 2.2.1.2. Enabling tab completion for Zsh After you install the OpenShift CLI ( oc ), you can enable tab completion to automatically complete oc commands or suggest options when you press Tab. The following procedure enables tab completion for the Zsh shell. Prerequisites You must have the OpenShift CLI ( oc ) installed. Procedure To add tab completion for oc to your .zshrc file, run the following command: USD cat >>~/.zshrc<<EOF autoload -Uz compinit compinit if [ USDcommands[oc] ]; then source <(oc completion zsh) compdef _oc oc fi EOF Tab completion is enabled when you open a new terminal. 2.3. Usage of oc and kubectl commands The Kubernetes command-line interface (CLI), kubectl , can be used to run commands against a Kubernetes cluster. Because OpenShift Container Platform is a certified Kubernetes distribution, you can use the supported kubectl binaries that ship with OpenShift Container Platform , or you can gain extended functionality by using the oc binary. 2.3.1. The oc binary The oc binary offers the same capabilities as the kubectl binary, but it is further extended to natively support additional OpenShift Container Platform features, including: Full support for OpenShift Container Platform resources Resources such as DeploymentConfig , BuildConfig , Route , ImageStream , and ImageStreamTag objects are specific to OpenShift Container Platform distributions, and build upon standard Kubernetes primitives.
Authentication The oc binary offers a built-in login command for authentication and lets you work with projects, which map Kubernetes namespaces to authenticated users. Read Understanding authentication for more information. Additional commands The additional command oc new-app , for example, makes it easier to get new applications started using existing source code or pre-built images. Similarly, the additional command oc new-project makes it easier to start a project that you can switch to as your default. Important If you installed an earlier version of the oc binary, you cannot use it to complete all of the commands in OpenShift Container Platform 4.14 . If you want the latest features, you must download and install the latest version of the oc binary corresponding to your OpenShift Container Platform server version. Non-security API changes will involve, at minimum, two minor releases (4.1 to 4.2 to 4.3, for example) to allow older oc binaries to update. Using new capabilities might require newer oc binaries. A 4.3 server might have additional capabilities that a 4.2 oc binary cannot use and a 4.3 oc binary might have additional capabilities that are unsupported by a 4.2 server. Table 2.1. Compatibility Matrix X.Y ( oc Client) X.Y+N ( oc Client) X.Y (Server) X.Y+N (Server), where N is a number greater than or equal to 1. Fully compatible. oc client might not be able to access server features. oc client might provide options and features that might not be compatible with the accessed server. 2.3.2. The kubectl binary The kubectl binary is provided as a means to support existing workflows and scripts for new OpenShift Container Platform users coming from a standard Kubernetes environment, or for those who prefer to use the kubectl CLI. Existing users of kubectl can continue to use the binary to interact with Kubernetes primitives, with no changes required to the OpenShift Container Platform cluster. You can install the supported kubectl binary by following the steps to Install the OpenShift CLI . The kubectl binary is included in the archive if you download the binary, or is installed when you install the CLI by using an RPM. For more information, see the kubectl documentation . 2.4. Managing CLI profiles A CLI configuration file allows you to configure different profiles, or contexts, for use with the CLI tools overview . A context consists of user authentication and OpenShift Container Platform server information associated with a nickname. 2.4.1. About switching between CLI profiles Contexts allow you to easily switch between multiple users across multiple OpenShift Container Platform servers, or clusters, when using CLI operations. Nicknames make managing CLI configurations easier by providing short-hand references to contexts, user credentials, and cluster details. After a user logs in with the oc CLI for the first time, OpenShift Container Platform creates a ~/.kube/config file if one does not already exist.
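For example, you can list the contexts that have been recorded in this file with the oc config get-contexts command. This is only an illustrative sketch; the context names shown here match the sample configuration file that follows, and your own output depends on the clusters and users you have logged in with:
USD oc config get-contexts
Example output
CURRENT   NAME                                               CLUSTER                       AUTHINFO                            NAMESPACE
          alice-project/openshift1.example.com:8443/alice   openshift1.example.com:8443   alice/openshift1.example.com:8443   alice-project
*         joe-project/openshift1.example.com:8443/alice     openshift1.example.com:8443   alice/openshift1.example.com:8443   joe-project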
As more authentication and connection details are provided to the CLI, either automatically during an oc login operation or by manually configuring CLI profiles, the updated information is stored in the configuration file: CLI config file apiVersion: v1 clusters: 1 - cluster: insecure-skip-tls-verify: true server: https://openshift1.example.com:8443 name: openshift1.example.com:8443 - cluster: insecure-skip-tls-verify: true server: https://openshift2.example.com:8443 name: openshift2.example.com:8443 contexts: 2 - context: cluster: openshift1.example.com:8443 namespace: alice-project user: alice/openshift1.example.com:8443 name: alice-project/openshift1.example.com:8443/alice - context: cluster: openshift1.example.com:8443 namespace: joe-project user: alice/openshift1.example.com:8443 name: joe-project/openshift1/alice current-context: joe-project/openshift1.example.com:8443/alice 3 kind: Config preferences: {} users: 4 - name: alice/openshift1.example.com:8443 user: token: xZHd2piv5_9vQrg-SKXRJ2Dsl9SceNJdhNTljEKTb8k 1 The clusters section defines connection details for OpenShift Container Platform clusters, including the address for their master server. In this example, one cluster is nicknamed openshift1.example.com:8443 and another is nicknamed openshift2.example.com:8443 . 2 This contexts section defines two contexts: one nicknamed alice-project/openshift1.example.com:8443/alice , using the alice-project project, openshift1.example.com:8443 cluster, and alice user, and another nicknamed joe-project/openshift1.example.com:8443/alice , using the joe-project project, openshift1.example.com:8443 cluster and alice user. 3 The current-context parameter shows that the joe-project/openshift1.example.com:8443/alice context is currently in use, allowing the alice user to work in the joe-project project on the openshift1.example.com:8443 cluster. 4 The users section defines user credentials. In this example, the user nickname alice/openshift1.example.com:8443 uses an access token. The CLI can support multiple configuration files which are loaded at runtime and merged together along with any override options specified from the command line. After you are logged in, you can use the oc status or oc project command to verify your current working environment: Verify the current working environment USD oc status Example output oc status In project Joe's Project (joe-project) service database (172.30.43.12:5434 -> 3306) database deploys docker.io/openshift/mysql-55-centos7:latest #1 deployed 25 minutes ago - 1 pod service frontend (172.30.159.137:5432 -> 8080) frontend deploys origin-ruby-sample:latest <- builds https://github.com/openshift/ruby-hello-world with joe-project/ruby-20-centos7:latest #1 deployed 22 minutes ago - 2 pods To see more information about a service or deployment, use 'oc describe service <name>' or 'oc describe dc <name>'. You can use 'oc get all' to see lists of each of the types described in this example. List the current project USD oc project Example output Using project "joe-project" from context named "joe-project/openshift1.example.com:8443/alice" on server "https://openshift1.example.com:8443". You can run the oc login command again and supply the required information during the interactive process, to log in using any other combination of user credentials and cluster details. A context is constructed based on the supplied information if one does not already exist. 
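For example, logging in to the second cluster defined in the sample configuration creates an additional context. This is a hedged sketch: the server address is taken from the sample above, and the name of the new context depends on the project you land in, following the <project>/<cluster>/<user> pattern:
USD oc login https://openshift2.example.com:8443 -u alice
After a successful login, a context named something like default/openshift2.example.com:8443/alice is added to the CLI config file and becomes the current context.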
If you are already logged in and want to switch to another project the current user already has access to, use the oc project command and enter the name of the project: USD oc project alice-project Example output Now using project "alice-project" on server "https://openshift1.example.com:8443". At any time, you can use the oc config view command to view your current CLI configuration, as seen in the output. Additional CLI configuration commands are also available for more advanced usage. Note If you have access to administrator credentials but are no longer logged in as the default system user system:admin , you can log back in as this user at any time as long as the credentials are still present in your CLI config file. The following command logs in and switches to the default project: USD oc login -u system:admin -n default 2.4.2. Manual configuration of CLI profiles Note This section covers more advanced usage of CLI configurations. In most situations, you can use the oc login and oc project commands to log in and switch between contexts and projects. If you want to manually configure your CLI config files, you can use the oc config command instead of directly modifying the files. The oc config command includes a number of helpful sub-commands for this purpose: Table 2.2. CLI configuration subcommands Subcommand Usage set-cluster Sets a cluster entry in the CLI config file. If the referenced cluster nickname already exists, the specified information is merged in. USD oc config set-cluster <cluster_nickname> [--server=<master_ip_or_fqdn>] [--certificate-authority=<path/to/certificate/authority>] [--api-version=<apiversion>] [--insecure-skip-tls-verify=true] set-context Sets a context entry in the CLI config file. If the referenced context nickname already exists, the specified information is merged in. USD oc config set-context <context_nickname> [--cluster=<cluster_nickname>] [--user=<user_nickname>] [--namespace=<namespace>] use-context Sets the current context using the specified context nickname. USD oc config use-context <context_nickname> set Sets an individual value in the CLI config file. USD oc config set <property_name> <property_value> The <property_name> is a dot-delimited name where each token represents either an attribute name or a map key. The <property_value> is the new value being set. unset Unsets individual values in the CLI config file. USD oc config unset <property_name> The <property_name> is a dot-delimited name where each token represents either an attribute name or a map key. view Displays the merged CLI configuration currently in use. USD oc config view Displays the result of the specified CLI config file. USD oc config view --config=<specific_filename> Example usage Log in as a user that uses an access token. 
This token is used by the alice user: USD oc login https://openshift1.example.com --token=ns7yVhuRNpDM9cgzfhhxQ7bM5s7N2ZVrkZepSRf4LC0 View the cluster entry automatically created: USD oc config view Example output apiVersion: v1 clusters: - cluster: insecure-skip-tls-verify: true server: https://openshift1.example.com name: openshift1-example-com contexts: - context: cluster: openshift1-example-com namespace: default user: alice/openshift1-example-com name: default/openshift1-example-com/alice current-context: default/openshift1-example-com/alice kind: Config preferences: {} users: - name: alice/openshift1.example.com user: token: ns7yVhuRNpDM9cgzfhhxQ7bM5s7N2ZVrkZepSRf4LC0 Update the current context to have users log in to the desired namespace: USD oc config set-context `oc config current-context` --namespace=<project_name> Examine the current context to confirm that the changes are implemented: USD oc whoami -c All subsequent CLI operations use the new context, unless otherwise specified by overriding CLI options or until the context is switched. 2.4.3. Load and merge rules The following rules describe the loading and merging order for the CLI configuration when you issue CLI operations: CLI config files are retrieved from your workstation, using the following hierarchy and merge rules: If the --config option is set, then only that file is loaded. The flag is set once and no merging takes place. If the USDKUBECONFIG environment variable is set, then it is used. The variable can be a list of paths, and if so the paths are merged together. When a value is modified, it is modified in the file that defines the stanza. When a value is created, it is created in the first file that exists. If no files in the chain exist, then it creates the last file in the list. Otherwise, the ~/.kube/config file is used and no merging takes place. The context to use is determined based on the first match in the following flow: The value of the --context option. The current-context value from the CLI config file. An empty value is allowed at this stage. The user and cluster to use are determined. At this point, you may or may not have a context; they are built based on the first match in the following flow, which is run once for the user and once for the cluster: The value of the --user option for the user name and the --cluster option for the cluster name. If the --context option is present, then use the context's value. An empty value is allowed at this stage. The actual cluster information to use is determined. At this point, you may or may not have cluster information. Each piece of the cluster information is built based on the first match in the following flow: The values of any of the following command line options: --server , --api-version , --certificate-authority , --insecure-skip-tls-verify If cluster information and a value for the attribute are present, then use it. If you do not have a server location, then there is an error. The actual user information to use is determined. Users are built using the same rules as clusters, except that you can only have one authentication technique per user; conflicting techniques cause the operation to fail. Command line options take precedence over config file values. Valid command line options are: --auth-path , --client-certificate , --client-key , --token For any information that is still missing, default values are used and prompts are given for additional information.
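For example, a minimal sketch of how the USDKUBECONFIG variable is handled when it lists more than one path; the second file name here, ~/.kube/dev-config , is hypothetical:
USD KUBECONFIG=~/.kube/config:~/.kube/dev-config oc config view
Both files are merged into the displayed configuration, and when the same entry is defined in more than one file, the value from the first file that sets it is used. 2.5.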
Extending the OpenShift CLI with plugins You can write and install plugins to build on the default oc commands, allowing you to perform new and more complex tasks with the OpenShift Container Platform CLI. 2.5.1. Writing CLI plugins You can write a plugin for the OpenShift Container Platform CLI in any programming language or script that allows you to write command-line commands. Note that you can not use a plugin to overwrite an existing oc command. Procedure This procedure creates a simple Bash plugin that prints a message to the terminal when the oc foo command is issued. Create a file called oc-foo . When naming your plugin file, keep the following in mind: The file must begin with oc- or kubectl- to be recognized as a plugin. The file name determines the command that invokes the plugin. For example, a plugin with the file name oc-foo-bar can be invoked by a command of oc foo bar . You can also use underscores if you want the command to contain dashes. For example, a plugin with the file name oc-foo_bar can be invoked by a command of oc foo-bar . Add the following contents to the file. #!/bin/bash # optional argument handling if [[ "USD1" == "version" ]] then echo "1.0.0" exit 0 fi # optional argument handling if [[ "USD1" == "config" ]] then echo USDKUBECONFIG exit 0 fi echo "I am a plugin named kubectl-foo" After you install this plugin for the OpenShift Container Platform CLI, it can be invoked using the oc foo command. Additional resources Review the Sample plugin repository for an example of a plugin written in Go. Review the CLI runtime repository for a set of utilities to assist in writing plugins in Go. 2.5.2. Installing and using CLI plugins After you write a custom plugin for the OpenShift Container Platform CLI, you must install the plugin before use. Prerequisites You must have the oc CLI tool installed. You must have a CLI plugin file that begins with oc- or kubectl- . Procedure If necessary, update the plugin file to be executable. USD chmod +x <plugin_file> Place the file anywhere in your PATH , such as /usr/local/bin/ . USD sudo mv <plugin_file> /usr/local/bin/. Run oc plugin list to make sure that the plugin is listed. USD oc plugin list Example output The following compatible plugins are available: /usr/local/bin/<plugin_file> If your plugin is not listed here, verify that the file begins with oc- or kubectl- , is executable, and is on your PATH . Invoke the new command or option introduced by the plugin. For example, if you built and installed the kubectl-ns plugin from the Sample plugin repository , you can use the following command to view the current namespace. USD oc ns Note that the command to invoke the plugin depends on the plugin file name. For example, a plugin with the file name of oc-foo-bar is invoked by the oc foo bar command. 2.6. Managing CLI plugins with Krew You can use Krew to install and manage plugins for the OpenShift CLI ( oc ). Important Using Krew to install and manage plugins for the OpenShift CLI is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 2.6.1. 
Installing a CLI plugin with Krew You can install a plugin for the OpenShift CLI ( oc ) with Krew. Prerequisites You have installed Krew by following the installation procedure in the Krew documentation. Procedure To list all available plugins, run the following command: USD oc krew search To get information about a plugin, run the following command: USD oc krew info <plugin_name> To install a plugin, run the following command: USD oc krew install <plugin_name> To list all plugins that were installed by Krew, run the following command: USD oc krew list 2.6.2. Updating a CLI plugin with Krew You can update a plugin that was installed for the OpenShift CLI ( oc ) with Krew. Prerequisites You have installed Krew by following the installation procedure in the Krew documentation. You have installed a plugin for the OpenShift CLI with Krew. Procedure To update a single plugin, run the following command: USD oc krew upgrade <plugin_name> To update all plugins that were installed by Krew, run the following command: USD oc krew upgrade 2.6.3. Uninstalling a CLI plugin with Krew You can uninstall a plugin that was installed for the OpenShift CLI ( oc ) with Krew. Prerequisites You have installed Krew by following the installation procedure in the Krew documentation. You have installed a plugin for the OpenShift CLI with Krew. Procedure To uninstall a plugin, run the following command: USD oc krew uninstall <plugin_name> 2.6.4. Additional resources Krew Extending the OpenShift CLI with plugins 2.7. OpenShift CLI developer command reference This reference provides descriptions and example commands for OpenShift CLI ( oc ) developer commands. For administrator commands, see the OpenShift CLI administrator command reference . Run oc help to list all commands or run oc <command> --help to get additional details for a specific command. 2.7.1. OpenShift CLI (oc) developer commands 2.7.1.1. oc annotate Update the annotations on a resource Example usage # Update pod 'foo' with the annotation 'description' and the value 'my frontend' # If the same annotation is set multiple times, only the last value will be applied oc annotate pods foo description='my frontend' # Update a pod identified by type and name in "pod.json" oc annotate -f pod.json description='my frontend' # Update pod 'foo' with the annotation 'description' and the value 'my frontend running nginx', overwriting any existing value oc annotate --overwrite pods foo description='my frontend running nginx' # Update all pods in the namespace oc annotate pods --all description='my frontend running nginx' # Update pod 'foo' only if the resource is unchanged from version 1 oc annotate pods foo description='my frontend running nginx' --resource-version=1 # Update pod 'foo' by removing an annotation named 'description' if it exists # Does not require the --overwrite flag oc annotate pods foo description- 2.7.1.2. oc api-resources Print the supported API resources on the server Example usage # Print the supported API resources oc api-resources # Print the supported API resources with more information oc api-resources -o wide # Print the supported API resources sorted by a column oc api-resources --sort-by=name # Print the supported namespaced resources oc api-resources --namespaced=true # Print the supported non-namespaced resources oc api-resources --namespaced=false # Print the supported API resources with a specific APIGroup oc api-resources --api-group=rbac.authorization.k8s.io 2.7.1.3. 
oc api-versions Print the supported API versions on the server, in the form of "group/version" Example usage # Print the supported API versions oc api-versions 2.7.1.4. oc apply Apply a configuration to a resource by file name or stdin Example usage # Apply the configuration in pod.json to a pod oc apply -f ./pod.json # Apply resources from a directory containing kustomization.yaml - e.g. dir/kustomization.yaml oc apply -k dir/ # Apply the JSON passed into stdin to a pod cat pod.json | oc apply -f - # Apply the configuration from all files that end with '.json' - i.e. expand wildcard characters in file names oc apply -f '*.json' # Note: --prune is still in Alpha # Apply the configuration in manifest.yaml that matches label app=nginx and delete all other resources that are not in the file and match label app=nginx oc apply --prune -f manifest.yaml -l app=nginx # Apply the configuration in manifest.yaml and delete all the other config maps that are not in the file oc apply --prune -f manifest.yaml --all --prune-allowlist=core/v1/ConfigMap 2.7.1.5. oc apply edit-last-applied Edit latest last-applied-configuration annotations of a resource/object Example usage # Edit the last-applied-configuration annotations by type/name in YAML oc apply edit-last-applied deployment/nginx # Edit the last-applied-configuration annotations by file in JSON oc apply edit-last-applied -f deploy.yaml -o json 2.7.1.6. oc apply set-last-applied Set the last-applied-configuration annotation on a live object to match the contents of a file Example usage # Set the last-applied-configuration of a resource to match the contents of a file oc apply set-last-applied -f deploy.yaml # Execute set-last-applied against each configuration file in a directory oc apply set-last-applied -f path/ # Set the last-applied-configuration of a resource to match the contents of a file; will create the annotation if it does not already exist oc apply set-last-applied -f deploy.yaml --create-annotation=true 2.7.1.7. oc apply view-last-applied View the latest last-applied-configuration annotations of a resource/object Example usage # View the last-applied-configuration annotations by type/name in YAML oc apply view-last-applied deployment/nginx # View the last-applied-configuration annotations by file in JSON oc apply view-last-applied -f deploy.yaml -o json 2.7.1.8. oc attach Attach to a running container Example usage # Get output from running pod mypod; use the 'oc.kubernetes.io/default-container' annotation # for selecting the container to be attached or the first container in the pod will be chosen oc attach mypod # Get output from ruby-container from pod mypod oc attach mypod -c ruby-container # Switch to raw terminal mode; sends stdin to 'bash' in ruby-container from pod mypod # and sends stdout/stderr from 'bash' back to the client oc attach mypod -c ruby-container -i -t # Get output from the first pod of a replica set named nginx oc attach rs/nginx 2.7.1.9. oc auth can-i Check whether an action is allowed Example usage # Check to see if I can create pods in any namespace oc auth can-i create pods --all-namespaces # Check to see if I can list deployments in my current namespace oc auth can-i list deployments.apps # Check to see if service account "foo" of namespace "dev" can list pods # in the namespace "prod". # You must be allowed to use impersonation for the global option "--as". 
oc auth can-i list pods --as=system:serviceaccount:dev:foo -n prod # Check to see if I can do everything in my current namespace ("*" means all) oc auth can-i '*' '*' # Check to see if I can get the job named "bar" in namespace "foo" oc auth can-i list jobs.batch/bar -n foo # Check to see if I can read pod logs oc auth can-i get pods --subresource=log # Check to see if I can access the URL /logs/ oc auth can-i get /logs/ # List all allowed actions in namespace "foo" oc auth can-i --list --namespace=foo 2.7.1.10. oc auth reconcile Reconciles rules for RBAC role, role binding, cluster role, and cluster role binding objects Example usage # Reconcile RBAC resources from a file oc auth reconcile -f my-rbac-rules.yaml 2.7.1.11. oc auth whoami Experimental: Check self subject attributes Example usage # Get your subject attributes. oc auth whoami # Get your subject attributes in JSON format. oc auth whoami -o json 2.7.1.12. oc autoscale Autoscale a deployment config, deployment, replica set, stateful set, or replication controller Example usage # Auto scale a deployment "foo", with the number of pods between 2 and 10, no target CPU utilization specified so a default autoscaling policy will be used oc autoscale deployment foo --min=2 --max=10 # Auto scale a replication controller "foo", with the number of pods between 1 and 5, target CPU utilization at 80% oc autoscale rc foo --max=5 --cpu-percent=80 2.7.1.13. oc cancel-build Cancel running, pending, or new builds Example usage # Cancel the build with the given name oc cancel-build ruby-build-2 # Cancel the named build and print the build logs oc cancel-build ruby-build-2 --dump-logs # Cancel the named build and create a new one with the same parameters oc cancel-build ruby-build-2 --restart # Cancel multiple builds oc cancel-build ruby-build-1 ruby-build-2 ruby-build-3 # Cancel all builds created from the 'ruby-build' build config that are in the 'new' state oc cancel-build bc/ruby-build --state=new 2.7.1.14. oc cluster-info Display cluster information Example usage # Print the address of the control plane and cluster services oc cluster-info 2.7.1.15. oc cluster-info dump Dump relevant information for debugging and diagnosis Example usage # Dump current cluster state to stdout oc cluster-info dump # Dump current cluster state to /path/to/cluster-state oc cluster-info dump --output-directory=/path/to/cluster-state # Dump all namespaces to stdout oc cluster-info dump --all-namespaces # Dump a set of namespaces to /path/to/cluster-state oc cluster-info dump --namespaces default,kube-system --output-directory=/path/to/cluster-state 2.7.1.16. oc completion Output shell completion code for the specified shell (bash, zsh, fish, or powershell) Example usage # Installing bash completion on macOS using homebrew ## If running Bash 3.2 included with macOS brew install bash-completion ## or, if running Bash 4.1+ brew install bash-completion@2 ## If oc is installed via homebrew, this should start working immediately ## If you've installed via other means, you may need add the completion to your completion directory oc completion bash > USD(brew --prefix)/etc/bash_completion.d/oc # Installing bash completion on Linux ## If bash-completion is not installed on Linux, install the 'bash-completion' package ## via your distribution's package manager. 
## Load the oc completion code for bash into the current shell source <(oc completion bash) ## Write bash completion code to a file and source it from .bash_profile oc completion bash > ~/.kube/completion.bash.inc printf " # Kubectl shell completion source 'USDHOME/.kube/completion.bash.inc' " >> USDHOME/.bash_profile source USDHOME/.bash_profile # Load the oc completion code for zsh[1] into the current shell source <(oc completion zsh) # Set the oc completion code for zsh[1] to autoload on startup oc completion zsh > "USD{fpath[1]}/_oc" # Load the oc completion code for fish[2] into the current shell oc completion fish | source # To load completions for each session, execute once: oc completion fish > ~/.config/fish/completions/oc.fish # Load the oc completion code for powershell into the current shell oc completion powershell | Out-String | Invoke-Expression # Set oc completion code for powershell to run on startup ## Save completion code to a script and execute in the profile oc completion powershell > USDHOME\.kube\completion.ps1 Add-Content USDPROFILE "USDHOME\.kube\completion.ps1" ## Execute completion code in the profile Add-Content USDPROFILE "if (Get-Command oc -ErrorAction SilentlyContinue) { oc completion powershell | Out-String | Invoke-Expression }" ## Add completion code directly to the USDPROFILE script oc completion powershell >> USDPROFILE 2.7.1.17. oc config current-context Display the current-context Example usage # Display the current-context oc config current-context 2.7.1.18. oc config delete-cluster Delete the specified cluster from the kubeconfig Example usage # Delete the minikube cluster oc config delete-cluster minikube 2.7.1.19. oc config delete-context Delete the specified context from the kubeconfig Example usage # Delete the context for the minikube cluster oc config delete-context minikube 2.7.1.20. oc config delete-user Delete the specified user from the kubeconfig Example usage # Delete the minikube user oc config delete-user minikube 2.7.1.21. oc config get-clusters Display clusters defined in the kubeconfig Example usage # List the clusters that oc knows about oc config get-clusters 2.7.1.22. oc config get-contexts Describe one or many contexts Example usage # List all the contexts in your kubeconfig file oc config get-contexts # Describe one context in your kubeconfig file oc config get-contexts my-context 2.7.1.23. oc config get-users Display users defined in the kubeconfig Example usage # List the users that oc knows about oc config get-users 2.7.1.24. oc config new-admin-kubeconfig Generate, make the server trust, and display a new admin.kubeconfig. Example usage # Generate a new admin kubeconfig oc config new-admin-kubeconfig 2.7.1.25. oc config new-kubelet-bootstrap-kubeconfig Generate, make the server trust, and display a new kubelet /etc/kubernetes/kubeconfig. Example usage # Generate a new kubelet bootstrap kubeconfig oc config new-kubelet-bootstrap-kubeconfig 2.7.1.26. oc config refresh-ca-bundle Update the OpenShift CA bundle by contacting the apiserver. Example usage # Refresh the CA bundle for the current context's cluster oc config refresh-ca-bundle # Refresh the CA bundle for the cluster named e2e in your kubeconfig oc config refresh-ca-bundle e2e # Print the CA bundle from the current OpenShift cluster's apiserver. oc config refresh-ca-bundle --dry-run 2.7.1.27. 
oc config rename-context Rename a context from the kubeconfig file Example usage # Rename the context 'old-name' to 'new-name' in your kubeconfig file oc config rename-context old-name new-name 2.7.1.28. oc config set Set an individual value in a kubeconfig file Example usage # Set the server field on the my-cluster cluster to https://1.2.3.4 oc config set clusters.my-cluster.server https://1.2.3.4 # Set the certificate-authority-data field on the my-cluster cluster oc config set clusters.my-cluster.certificate-authority-data USD(echo "cert_data_here" | base64 -i -) # Set the cluster field in the my-context context to my-cluster oc config set contexts.my-context.cluster my-cluster # Set the client-key-data field in the cluster-admin user using --set-raw-bytes option oc config set users.cluster-admin.client-key-data cert_data_here --set-raw-bytes=true 2.7.1.29. oc config set-cluster Set a cluster entry in kubeconfig Example usage # Set only the server field on the e2e cluster entry without touching other values oc config set-cluster e2e --server=https://1.2.3.4 # Embed certificate authority data for the e2e cluster entry oc config set-cluster e2e --embed-certs --certificate-authority=~/.kube/e2e/kubernetes.ca.crt # Disable cert checking for the e2e cluster entry oc config set-cluster e2e --insecure-skip-tls-verify=true # Set custom TLS server name to use for validation for the e2e cluster entry oc config set-cluster e2e --tls-server-name=my-cluster-name # Set proxy url for the e2e cluster entry oc config set-cluster e2e --proxy-url=https://1.2.3.4 2.7.1.30. oc config set-context Set a context entry in kubeconfig Example usage # Set the user field on the gce context entry without touching other values oc config set-context gce --user=cluster-admin 2.7.1.31. oc config set-credentials Set a user entry in kubeconfig Example usage # Set only the "client-key" field on the "cluster-admin" # entry, without touching other values oc config set-credentials cluster-admin --client-key=~/.kube/admin.key # Set basic auth for the "cluster-admin" entry oc config set-credentials cluster-admin --username=admin --password=uXFGweU9l35qcif # Embed client certificate data in the "cluster-admin" entry oc config set-credentials cluster-admin --client-certificate=~/.kube/admin.crt --embed-certs=true # Enable the Google Compute Platform auth provider for the "cluster-admin" entry oc config set-credentials cluster-admin --auth-provider=gcp # Enable the OpenID Connect auth provider for the "cluster-admin" entry with additional args oc config set-credentials cluster-admin --auth-provider=oidc --auth-provider-arg=client-id=foo --auth-provider-arg=client-secret=bar # Remove the "client-secret" config value for the OpenID Connect auth provider for the "cluster-admin" entry oc config set-credentials cluster-admin --auth-provider=oidc --auth-provider-arg=client-secret- # Enable new exec auth plugin for the "cluster-admin" entry oc config set-credentials cluster-admin --exec-command=/path/to/the/executable --exec-api-version=client.authentication.k8s.io/v1beta1 # Define new exec auth plugin args for the "cluster-admin" entry oc config set-credentials cluster-admin --exec-arg=arg1 --exec-arg=arg2 # Create or update exec auth plugin environment variables for the "cluster-admin" entry oc config set-credentials cluster-admin --exec-env=key1=val1 --exec-env=key2=val2 # Remove exec auth plugin environment variables for the "cluster-admin" entry oc config set-credentials cluster-admin --exec-env=var-to-remove- 2.7.1.32. 
oc config unset Unset an individual value in a kubeconfig file Example usage # Unset the current-context oc config unset current-context # Unset namespace in foo context oc config unset contexts.foo.namespace 2.7.1.33. oc config use-context Set the current-context in a kubeconfig file Example usage # Use the context for the minikube cluster oc config use-context minikube 2.7.1.34. oc config view Display merged kubeconfig settings or a specified kubeconfig file Example usage # Show merged kubeconfig settings oc config view # Show merged kubeconfig settings and raw certificate data and exposed secrets oc config view --raw # Get the password for the e2e user oc config view -o jsonpath='{.users[?(@.name == "e2e")].user.password}' 2.7.1.35. oc cp Copy files and directories to and from containers Example usage # !!!Important Note!!! # Requires that the 'tar' binary is present in your container # image. If 'tar' is not present, 'oc cp' will fail. # # For advanced use cases, such as symlinks, wildcard expansion or # file mode preservation, consider using 'oc exec'. # Copy /tmp/foo local file to /tmp/bar in a remote pod in namespace <some-namespace> tar cf - /tmp/foo | oc exec -i -n <some-namespace> <some-pod> -- tar xf - -C /tmp/bar # Copy /tmp/foo from a remote pod to /tmp/bar locally oc exec -n <some-namespace> <some-pod> -- tar cf - /tmp/foo | tar xf - -C /tmp/bar # Copy /tmp/foo_dir local directory to /tmp/bar_dir in a remote pod in the default namespace oc cp /tmp/foo_dir <some-pod>:/tmp/bar_dir # Copy /tmp/foo local file to /tmp/bar in a remote pod in a specific container oc cp /tmp/foo <some-pod>:/tmp/bar -c <specific-container> # Copy /tmp/foo local file to /tmp/bar in a remote pod in namespace <some-namespace> oc cp /tmp/foo <some-namespace>/<some-pod>:/tmp/bar # Copy /tmp/foo from a remote pod to /tmp/bar locally oc cp <some-namespace>/<some-pod>:/tmp/foo /tmp/bar 2.7.1.36. oc create Create a resource from a file or from stdin Example usage # Create a pod using the data in pod.json oc create -f ./pod.json # Create a pod based on the JSON passed into stdin cat pod.json | oc create -f - # Edit the data in registry.yaml in JSON then create the resource using the edited data oc create -f registry.yaml --edit -o json 2.7.1.37. oc create build Create a new build Example usage # Create a new build oc create build myapp 2.7.1.38. oc create clusterresourcequota Create a cluster resource quota Example usage # Create a cluster resource quota limited to 10 pods oc create clusterresourcequota limit-bob --project-annotation-selector=openshift.io/requester=user-bob --hard=pods=10 2.7.1.39. 
oc create clusterrole Create a cluster role Example usage # Create a cluster role named "pod-reader" that allows user to perform "get", "watch" and "list" on pods oc create clusterrole pod-reader --verb=get,list,watch --resource=pods # Create a cluster role named "pod-reader" with ResourceName specified oc create clusterrole pod-reader --verb=get --resource=pods --resource-name=readablepod --resource-name=anotherpod # Create a cluster role named "foo" with API Group specified oc create clusterrole foo --verb=get,list,watch --resource=rs.apps # Create a cluster role named "foo" with SubResource specified oc create clusterrole foo --verb=get,list,watch --resource=pods,pods/status # Create a cluster role name "foo" with NonResourceURL specified oc create clusterrole "foo" --verb=get --non-resource-url=/logs/* # Create a cluster role name "monitoring" with AggregationRule specified oc create clusterrole monitoring --aggregation-rule="rbac.example.com/aggregate-to-monitoring=true" 2.7.1.40. oc create clusterrolebinding Create a cluster role binding for a particular cluster role Example usage # Create a cluster role binding for user1, user2, and group1 using the cluster-admin cluster role oc create clusterrolebinding cluster-admin --clusterrole=cluster-admin --user=user1 --user=user2 --group=group1 2.7.1.41. oc create configmap Create a config map from a local file, directory or literal value Example usage # Create a new config map named my-config based on folder bar oc create configmap my-config --from-file=path/to/bar # Create a new config map named my-config with specified keys instead of file basenames on disk oc create configmap my-config --from-file=key1=/path/to/bar/file1.txt --from-file=key2=/path/to/bar/file2.txt # Create a new config map named my-config with key1=config1 and key2=config2 oc create configmap my-config --from-literal=key1=config1 --from-literal=key2=config2 # Create a new config map named my-config from the key=value pairs in the file oc create configmap my-config --from-file=path/to/bar # Create a new config map named my-config from an env file oc create configmap my-config --from-env-file=path/to/foo.env --from-env-file=path/to/bar.env 2.7.1.42. oc create cronjob Create a cron job with the specified name Example usage # Create a cron job oc create cronjob my-job --image=busybox --schedule="*/1 * * * *" # Create a cron job with a command oc create cronjob my-job --image=busybox --schedule="*/1 * * * *" -- date 2.7.1.43. oc create deployment Create a deployment with the specified name Example usage # Create a deployment named my-dep that runs the busybox image oc create deployment my-dep --image=busybox # Create a deployment with a command oc create deployment my-dep --image=busybox -- date # Create a deployment named my-dep that runs the nginx image with 3 replicas oc create deployment my-dep --image=nginx --replicas=3 # Create a deployment named my-dep that runs the busybox image and expose port 5701 oc create deployment my-dep --image=busybox --port=5701 2.7.1.44. oc create deploymentconfig Create a deployment config with default options that uses a given image Example usage # Create an nginx deployment config named my-nginx oc create deploymentconfig my-nginx --image=nginx 2.7.1.45. oc create identity Manually create an identity (only needed if automatic creation is disabled) Example usage # Create an identity with identity provider "acme_ldap" and the identity provider username "adamjones" oc create identity acme_ldap:adamjones 2.7.1.46. 
oc create imagestream Create a new empty image stream Example usage # Create a new image stream oc create imagestream mysql 2.7.1.47. oc create imagestreamtag Create a new image stream tag Example usage # Create a new image stream tag based on an image in a remote registry oc create imagestreamtag mysql:latest --from-image=myregistry.local/mysql/mysql:5.0 2.7.1.48. oc create ingress Create an ingress with the specified name Example usage # Create a single ingress called 'simple' that directs requests to foo.com/bar to svc # svc1:8080 with a tls secret "my-cert" oc create ingress simple --rule="foo.com/bar=svc1:8080,tls=my-cert" # Create a catch all ingress of "/path" pointing to service svc:port and Ingress Class as "otheringress" oc create ingress catch-all --class=otheringress --rule="/path=svc:port" # Create an ingress with two annotations: ingress.annotation1 and ingress.annotations2 oc create ingress annotated --class=default --rule="foo.com/bar=svc:port" \ --annotation ingress.annotation1=foo \ --annotation ingress.annotation2=bla # Create an ingress with the same host and multiple paths oc create ingress multipath --class=default \ --rule="foo.com/=svc:port" \ --rule="foo.com/admin/=svcadmin:portadmin" # Create an ingress with multiple hosts and the pathType as Prefix oc create ingress ingress1 --class=default \ --rule="foo.com/path*=svc:8080" \ --rule="bar.com/admin*=svc2:http" # Create an ingress with TLS enabled using the default ingress certificate and different path types oc create ingress ingtls --class=default \ --rule="foo.com/=svc:https,tls" \ --rule="foo.com/path/subpath*=othersvc:8080" # Create an ingress with TLS enabled using a specific secret and pathType as Prefix oc create ingress ingsecret --class=default \ --rule="foo.com/*=svc:8080,tls=secret1" # Create an ingress with a default backend oc create ingress ingdefault --class=default \ --default-backend=defaultsvc:http \ --rule="foo.com/*=svc:8080,tls=secret1" 2.7.1.49. oc create job Create a job with the specified name Example usage # Create a job oc create job my-job --image=busybox # Create a job with a command oc create job my-job --image=busybox -- date # Create a job from a cron job named "a-cronjob" oc create job test-job --from=cronjob/a-cronjob 2.7.1.50. oc create namespace Create a namespace with the specified name Example usage # Create a new namespace named my-namespace oc create namespace my-namespace 2.7.1.51. oc create poddisruptionbudget Create a pod disruption budget with the specified name Example usage # Create a pod disruption budget named my-pdb that will select all pods with the app=rails label # and require at least one of them being available at any point in time oc create poddisruptionbudget my-pdb --selector=app=rails --min-available=1 # Create a pod disruption budget named my-pdb that will select all pods with the app=nginx label # and require at least half of the pods selected to be available at any point in time oc create pdb my-pdb --selector=app=nginx --min-available=50% 2.7.1.52. 
oc create priorityclass Create a priority class with the specified name Example usage # Create a priority class named high-priority oc create priorityclass high-priority --value=1000 --description="high priority" # Create a priority class named default-priority that is considered as the global default priority oc create priorityclass default-priority --value=1000 --global-default=true --description="default priority" # Create a priority class named high-priority that cannot preempt pods with lower priority oc create priorityclass high-priority --value=1000 --description="high priority" --preemption-policy="Never" 2.7.1.53. oc create quota Create a quota with the specified name Example usage # Create a new resource quota named my-quota oc create quota my-quota --hard=cpu=1,memory=1G,pods=2,services=3,replicationcontrollers=2,resourcequotas=1,secrets=5,persistentvolumeclaims=10 # Create a new resource quota named best-effort oc create quota best-effort --hard=pods=100 --scopes=BestEffort 2.7.1.54. oc create role Create a role with single rule Example usage # Create a role named "pod-reader" that allows user to perform "get", "watch" and "list" on pods oc create role pod-reader --verb=get --verb=list --verb=watch --resource=pods # Create a role named "pod-reader" with ResourceName specified oc create role pod-reader --verb=get --resource=pods --resource-name=readablepod --resource-name=anotherpod # Create a role named "foo" with API Group specified oc create role foo --verb=get,list,watch --resource=rs.apps # Create a role named "foo" with SubResource specified oc create role foo --verb=get,list,watch --resource=pods,pods/status 2.7.1.55. oc create rolebinding Create a role binding for a particular role or cluster role Example usage # Create a role binding for user1, user2, and group1 using the admin cluster role oc create rolebinding admin --clusterrole=admin --user=user1 --user=user2 --group=group1 # Create a role binding for serviceaccount monitoring:sa-dev using the admin role oc create rolebinding admin-binding --role=admin --serviceaccount=monitoring:sa-dev 2.7.1.56. oc create route edge Create a route that uses edge TLS termination Example usage # Create an edge route named "my-route" that exposes the frontend service oc create route edge my-route --service=frontend # Create an edge route that exposes the frontend service and specify a path # If the route name is omitted, the service name will be used oc create route edge --service=frontend --path /assets 2.7.1.57. oc create route passthrough Create a route that uses passthrough TLS termination Example usage # Create a passthrough route named "my-route" that exposes the frontend service oc create route passthrough my-route --service=frontend # Create a passthrough route that exposes the frontend service and specify # a host name. If the route name is omitted, the service name will be used oc create route passthrough --service=frontend --hostname=www.example.com 2.7.1.58. oc create route reencrypt Create a route that uses reencrypt TLS termination Example usage # Create a route named "my-route" that exposes the frontend service oc create route reencrypt my-route --service=frontend --dest-ca-cert cert.cert # Create a reencrypt route that exposes the frontend service, letting the # route name default to the service name and the destination CA certificate # default to the service CA oc create route reencrypt --service=frontend 2.7.1.59. 
oc create secret docker-registry Create a secret for use with a Docker registry Example usage # If you don't already have a .dockercfg file, you can create a dockercfg secret directly by using: oc create secret docker-registry my-secret --docker-server=DOCKER_REGISTRY_SERVER --docker-username=DOCKER_USER --docker-password=DOCKER_PASSWORD --docker-email=DOCKER_EMAIL # Create a new secret named my-secret from ~/.docker/config.json oc create secret docker-registry my-secret --from-file=.dockerconfigjson=path/to/.docker/config.json 2.7.1.60. oc create secret generic Create a secret from a local file, directory, or literal value Example usage # Create a new secret named my-secret with keys for each file in folder bar oc create secret generic my-secret --from-file=path/to/bar # Create a new secret named my-secret with specified keys instead of names on disk oc create secret generic my-secret --from-file=ssh-privatekey=path/to/id_rsa --from-file=ssh-publickey=path/to/id_rsa.pub # Create a new secret named my-secret with key1=supersecret and key2=topsecret oc create secret generic my-secret --from-literal=key1=supersecret --from-literal=key2=topsecret # Create a new secret named my-secret using a combination of a file and a literal oc create secret generic my-secret --from-file=ssh-privatekey=path/to/id_rsa --from-literal=passphrase=topsecret # Create a new secret named my-secret from env files oc create secret generic my-secret --from-env-file=path/to/foo.env --from-env-file=path/to/bar.env 2.7.1.61. oc create secret tls Create a TLS secret Example usage # Create a new TLS secret named tls-secret with the given key pair oc create secret tls tls-secret --cert=path/to/tls.cert --key=path/to/tls.key 2.7.1.62. oc create service clusterip Create a ClusterIP service Example usage # Create a new ClusterIP service named my-cs oc create service clusterip my-cs --tcp=5678:8080 # Create a new ClusterIP service named my-cs (in headless mode) oc create service clusterip my-cs --clusterip="None" 2.7.1.63. oc create service externalname Create an ExternalName service Example usage # Create a new ExternalName service named my-ns oc create service externalname my-ns --external-name bar.com 2.7.1.64. oc create service loadbalancer Create a LoadBalancer service Example usage # Create a new LoadBalancer service named my-lbs oc create service loadbalancer my-lbs --tcp=5678:8080 2.7.1.65. oc create service nodeport Create a NodePort service Example usage # Create a new NodePort service named my-ns oc create service nodeport my-ns --tcp=5678:8080 2.7.1.66. oc create serviceaccount Create a service account with the specified name Example usage # Create a new service account named my-service-account oc create serviceaccount my-service-account 2.7.1.67. 
oc create token Request a service account token Example usage # Request a token to authenticate to the kube-apiserver as the service account "myapp" in the current namespace oc create token myapp # Request a token for a service account in a custom namespace oc create token myapp --namespace myns # Request a token with a custom expiration oc create token myapp --duration 10m # Request a token with a custom audience oc create token myapp --audience https://example.com # Request a token bound to an instance of a Secret object oc create token myapp --bound-object-kind Secret --bound-object-name mysecret # Request a token bound to an instance of a Secret object with a specific uid oc create token myapp --bound-object-kind Secret --bound-object-name mysecret --bound-object-uid 0d4691ed-659b-4935-a832-355f77ee47cc 2.7.1.68. oc create user Manually create a user (only needed if automatic creation is disabled) Example usage # Create a user with the username "ajones" and the display name "Adam Jones" oc create user ajones --full-name="Adam Jones" 2.7.1.69. oc create useridentitymapping Manually map an identity to a user Example usage # Map the identity "acme_ldap:adamjones" to the user "ajones" oc create useridentitymapping acme_ldap:adamjones ajones 2.7.1.70. oc debug Launch a new instance of a pod for debugging Example usage # Start a shell session into a pod using the OpenShift tools image oc debug # Debug a currently running deployment by creating a new pod oc debug deploy/test # Debug a node as an administrator oc debug node/master-1 # Launch a shell in a pod using the provided image stream tag oc debug istag/mysql:latest -n openshift # Test running a job as a non-root user oc debug job/test --as-user=1000000 # Debug a specific failing container by running the env command in the 'second' container oc debug daemonset/test -c second -- /bin/env # See the pod that would be created to debug oc debug mypod-9xbc -o yaml # Debug a resource but launch the debug pod in another namespace # Note: Not all resources can be debugged using --to-namespace without modification. For example, # volumes and service accounts are namespace-dependent. Add '-o yaml' to output the debug pod definition # to disk. If necessary, edit the definition then run 'oc debug -f -' or run without --to-namespace oc debug mypod-9xbc --to-namespace testns 2.7.1.71. oc delete Delete resources by file names, stdin, resources and names, or by resources and label selector Example usage # Delete a pod using the type and name specified in pod.json oc delete -f ./pod.json # Delete resources from a directory containing kustomization.yaml - e.g. dir/kustomization.yaml oc delete -k dir # Delete resources from all files that end with '.json' - i.e. expand wildcard characters in file names oc delete -f '*.json' # Delete a pod based on the type and name in the JSON passed into stdin cat pod.json | oc delete -f - # Delete pods and services with same names "baz" and "foo" oc delete pod,service baz foo # Delete pods and services with label name=myLabel oc delete pods,services -l name=myLabel # Delete a pod with minimal delay oc delete pod foo --now # Force delete a pod on a dead node oc delete pod foo --force # Delete all pods oc delete pods --all 2.7.1.72. 
oc describe Show details of a specific resource or group of resources Example usage # Describe a node oc describe nodes kubernetes-node-emt8.c.myproject.internal # Describe a pod oc describe pods/nginx # Describe a pod identified by type and name in "pod.json" oc describe -f pod.json # Describe all pods oc describe pods # Describe pods by label name=myLabel oc describe po -l name=myLabel # Describe all pods managed by the 'frontend' replication controller # (rc-created pods get the name of the rc as a prefix in the pod name) oc describe pods frontend 2.7.1.73. oc diff Diff the live version against a would-be applied version Example usage # Diff resources included in pod.json oc diff -f pod.json # Diff file read from stdin cat service.yaml | oc diff -f - 2.7.1.74. oc edit Edit a resource on the server Example usage # Edit the service named 'registry' oc edit svc/registry # Use an alternative editor KUBE_EDITOR="nano" oc edit svc/registry # Edit the job 'myjob' in JSON using the v1 API format oc edit job.v1.batch/myjob -o json # Edit the deployment 'mydeployment' in YAML and save the modified config in its annotation oc edit deployment/mydeployment -o yaml --save-config # Edit the deployment/mydeployment's status subresource oc edit deployment mydeployment --subresource='status' 2.7.1.75. oc events List events Example usage # List recent events in the default namespace. oc events # List recent events in all namespaces. oc events --all-namespaces # List recent events for the specified pod, then wait for more events and list them as they arrive. oc events --for pod/web-pod-13je7 --watch # List recent events in given format. Supported ones, apart from default, are json and yaml. oc events -oyaml # List recent only events in given event types oc events --types=Warning,Normal 2.7.1.76. oc exec Execute a command in a container Example usage # Get output from running the 'date' command from pod mypod, using the first container by default oc exec mypod -- date # Get output from running the 'date' command in ruby-container from pod mypod oc exec mypod -c ruby-container -- date # Switch to raw terminal mode; sends stdin to 'bash' in ruby-container from pod mypod # and sends stdout/stderr from 'bash' back to the client oc exec mypod -c ruby-container -i -t -- bash -il # List contents of /usr from the first container of pod mypod and sort by modification time # If the command you want to execute in the pod has any flags in common (e.g. -i), # you must use two dashes (--) to separate your command's flags/arguments # Also note, do not surround your command and its flags/arguments with quotes # unless that is how you would execute it normally (i.e., do ls -t /usr, not "ls -t /usr") oc exec mypod -i -t -- ls -t /usr # Get output from running 'date' command from the first pod of the deployment mydeployment, using the first container by default oc exec deploy/mydeployment -- date # Get output from running 'date' command from the first pod of the service myservice, using the first container by default oc exec svc/myservice -- date 2.7.1.77. oc explain Get documentation for a resource Example usage # Get the documentation of the resource and its fields oc explain pods # Get the documentation of a specific field of a resource oc explain pods.spec.containers 2.7.1.78. oc expose Expose a replicated application as a service or route Example usage # Create a route based on service nginx. 
The new route will reuse nginx's labels oc expose service nginx # Create a route and specify your own label and route name oc expose service nginx -l name=myroute --name=fromdowntown # Create a route and specify a host name oc expose service nginx --hostname=www.example.com # Create a route with a wildcard oc expose service nginx --hostname=x.example.com --wildcard-policy=Subdomain # This would be equivalent to *.example.com. NOTE: only hosts are matched by the wildcard; subdomains would not be included # Expose a deployment configuration as a service and use the specified port oc expose dc ruby-hello-world --port=8080 # Expose a service as a route in the specified path oc expose service nginx --path=/nginx 2.7.1.79. oc extract Extract secrets or config maps to disk Example usage # Extract the secret "test" to the current directory oc extract secret/test # Extract the config map "nginx" to the /tmp directory oc extract configmap/nginx --to=/tmp # Extract the config map "nginx" to STDOUT oc extract configmap/nginx --to=- # Extract only the key "nginx.conf" from config map "nginx" to the /tmp directory oc extract configmap/nginx --to=/tmp --keys=nginx.conf 2.7.1.80. oc get Display one or many resources Example usage # List all pods in ps output format oc get pods # List all pods in ps output format with more information (such as node name) oc get pods -o wide # List a single replication controller with specified NAME in ps output format oc get replicationcontroller web # List deployments in JSON output format, in the "v1" version of the "apps" API group oc get deployments.v1.apps -o json # List a single pod in JSON output format oc get -o json pod web-pod-13je7 # List a pod identified by type and name specified in "pod.yaml" in JSON output format oc get -f pod.yaml -o json # List resources from a directory with kustomization.yaml - e.g. dir/kustomization.yaml oc get -k dir/ # Return only the phase value of the specified pod oc get -o template pod/web-pod-13je7 --template={{.status.phase}} # List resource information in custom columns oc get pod test-pod -o custom-columns=CONTAINER:.spec.containers[0].name,IMAGE:.spec.containers[0].image # List all replication controllers and services together in ps output format oc get rc,services # List one or more resources by their type and names oc get rc/web service/frontend pods/web-pod-13je7 # List status subresource for a single pod. oc get pod web-pod-13je7 --subresource status 2.7.1.81. oc idle Idle scalable resources Example usage # Idle the scalable controllers associated with the services listed in to-idle.txt $ oc idle --resource-names-file to-idle.txt
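The example above only shows the --resource-names-file form; oc idle also accepts one or more service names directly. A minimal sketch, assuming a service named 'frontend' exists in the current project (the name is illustrative, not from the source):
# Idle the scalable controllers associated with the service "frontend"
oc idle frontend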
2.7.1.82. oc image append Add layers to images and push them to a registry Example usage # Remove the entrypoint on the mysql:latest image oc image append --from mysql:latest --to myregistry.com/myimage:latest --image '{"Entrypoint":null}' # Add a new layer to the image oc image append --from mysql:latest --to myregistry.com/myimage:latest layer.tar.gz # Add a new layer to the image and store the result on disk # This results in $(pwd)/v2/mysql/blobs,manifests oc image append --from mysql:latest --to file://mysql:local layer.tar.gz # Add a new layer to the image and store the result on disk in a designated directory # This will result in $(pwd)/mysql-local/v2/mysql/blobs,manifests oc image append --from mysql:latest --to file://mysql:local --dir mysql-local layer.tar.gz # Add a new layer to an image that is stored on disk (~/mysql-local/v2/image exists) oc image append --from-dir ~/mysql-local --to myregistry.com/myimage:latest layer.tar.gz # Add a new layer to an image that was mirrored to the current directory on disk ($(pwd)/v2/image exists) oc image append --from-dir v2 --to myregistry.com/myimage:latest layer.tar.gz # Add a new layer to a multi-architecture image for an os/arch that is different from the system's os/arch # Note: The first image in the manifest list that matches the filter will be returned when --keep-manifest-list is not specified oc image append --from docker.io/library/busybox:latest --filter-by-os=linux/s390x --to myregistry.com/myimage:latest layer.tar.gz # Add a new layer to a multi-architecture image for all the os/arch manifests when keep-manifest-list is specified oc image append --from docker.io/library/busybox:latest --keep-manifest-list --to myregistry.com/myimage:latest layer.tar.gz # Add a new layer to a multi-architecture image for all the os/arch manifests that is specified by the filter, while preserving the manifestlist oc image append --from docker.io/library/busybox:latest --filter-by-os=linux/s390x --keep-manifest-list --to myregistry.com/myimage:latest layer.tar.gz 2.7.1.83. oc image extract Copy files from an image to the file system Example usage # Extract the busybox image into the current directory oc image extract docker.io/library/busybox:latest # Extract the busybox image into a designated directory (must exist) oc image extract docker.io/library/busybox:latest --path /:/tmp/busybox # Extract the busybox image into the current directory for linux/s390x platform # Note: Wildcard filter is not supported with extract; pass a single os/arch to extract oc image extract docker.io/library/busybox:latest --filter-by-os=linux/s390x # Extract a single file from the image into the current directory oc image extract docker.io/library/centos:7 --path /bin/bash:. # Extract all .repo files from the image's /etc/yum.repos.d/ folder into the current directory oc image extract docker.io/library/centos:7 --path /etc/yum.repos.d/*.repo:.
# Extract all .repo files from the image's /etc/yum.repos.d/ folder into a designated directory (must exist) # This results in /tmp/yum.repos.d/*.repo on local system oc image extract docker.io/library/centos:7 --path /etc/yum.repos.d/*.repo:/tmp/yum.repos.d # Extract an image stored on disk into the current directory ($(pwd)/v2/busybox/blobs,manifests exists) # --confirm is required because the current directory is not empty oc image extract file://busybox:local --confirm # Extract an image stored on disk in a directory other than $(pwd)/v2 into the current directory # --confirm is required because the current directory is not empty ($(pwd)/busybox-mirror-dir/v2/busybox exists) oc image extract file://busybox:local --dir busybox-mirror-dir --confirm # Extract an image stored on disk in a directory other than $(pwd)/v2 into a designated directory (must exist) oc image extract file://busybox:local --dir busybox-mirror-dir --path /:/tmp/busybox # Extract the last layer in the image oc image extract docker.io/library/centos:7[-1] # Extract the first three layers of the image oc image extract docker.io/library/centos:7[:3] # Extract the last three layers of the image oc image extract docker.io/library/centos:7[-3:] 2.7.1.84. oc image info Display information about an image Example usage # Show information about an image oc image info quay.io/openshift/cli:latest # Show information about images matching a wildcard oc image info quay.io/openshift/cli:4.* # Show information about a file mirrored to disk under DIR oc image info --dir=DIR file://library/busybox:latest # Select which image from a multi-OS image to show oc image info library/busybox:latest --filter-by-os=linux/arm64 2.7.1.85. oc image mirror Mirror images from one repository to another Example usage # Copy image to another tag oc image mirror myregistry.com/myimage:latest myregistry.com/myimage:stable # Copy image to another registry oc image mirror myregistry.com/myimage:latest docker.io/myrepository/myimage:stable # Copy all tags starting with mysql to the destination repository oc image mirror myregistry.com/myimage:mysql* docker.io/myrepository/myimage # Copy image to disk, creating a directory structure that can be served as a registry oc image mirror myregistry.com/myimage:latest file://myrepository/myimage:latest # Copy image to S3 (pull from <bucket>.s3.amazonaws.com/image:latest) oc image mirror myregistry.com/myimage:latest s3://s3.amazonaws.com/<region>/<bucket>/image:latest # Copy image to S3 without setting a tag (pull via @<digest>) oc image mirror myregistry.com/myimage:latest s3://s3.amazonaws.com/<region>/<bucket>/image # Copy image to multiple locations oc image mirror myregistry.com/myimage:latest docker.io/myrepository/myimage:stable \ docker.io/myrepository/myimage:dev # Copy multiple images oc image mirror myregistry.com/myimage:latest=myregistry.com/other:test \ myregistry.com/myimage:new=myregistry.com/other:target # Copy manifest list of a multi-architecture image, even if only a single image is found oc image mirror myregistry.com/myimage:latest=myregistry.com/other:test \ --keep-manifest-list=true # Copy specific os/arch manifest of a multi-architecture image # Run 'oc image info myregistry.com/myimage:latest' to see available os/arch for multi-arch images # Note that with multi-arch images, this results in a new manifest list digest that includes only # the filtered manifests oc image mirror myregistry.com/myimage:latest=myregistry.com/other:test \ --filter-by-os=os/arch # Copy all os/arch
manifests of a multi-architecture image # Run 'oc image info myregistry.com/myimage:latest' to see list of os/arch manifests that will be mirrored oc image mirror myregistry.com/myimage:latest=myregistry.com/other:test \ --keep-manifest-list=true # Note the above command is equivalent to oc image mirror myregistry.com/myimage:latest=myregistry.com/other:test \ --filter-by-os=.* # Copy specific os/arch manifest of a multi-architecture image # Run 'oc image info myregistry.com/myimage:latest' to see available os/arch for multi-arch images # Note that the target registry may reject a manifest list if the platform specific images do not all # exist. You must use a registry with sparse registry support enabled. oc image mirror myregistry.com/myimage:latest=myregistry.com/other:test \ --filter-by-os=os/arch \ --keep-manifest-list=true 2.7.1.86. oc import-image Import images from a container image registry Example usage # Import tag latest into a new image stream oc import-image mystream --from=registry.io/repo/image:latest --confirm # Update imported data for tag latest in an already existing image stream oc import-image mystream # Update imported data for tag stable in an already existing image stream oc import-image mystream:stable # Update imported data for all tags in an existing image stream oc import-image mystream --all # Update imported data for a tag that points to a manifest list to include the full manifest list oc import-image mystream --import-mode=PreserveOriginal # Import all tags into a new image stream oc import-image mystream --from=registry.io/repo/image --all --confirm # Import all tags into a new image stream using a custom timeout oc --request-timeout=5m import-image mystream --from=registry.io/repo/image --all --confirm 2.7.1.87. oc kustomize Build a kustomization target from a directory or URL Example usage # Build the current working directory oc kustomize # Build some shared configuration directory oc kustomize /home/config/production # Build from github oc kustomize https://github.com/kubernetes-sigs/kustomize.git/examples/helloWorld?ref=v1.0.6 2.7.1.88. oc label Update the labels on a resource Example usage # Update pod 'foo' with the label 'unhealthy' and the value 'true' oc label pods foo unhealthy=true # Update pod 'foo' with the label 'status' and the value 'unhealthy', overwriting any existing value oc label --overwrite pods foo status=unhealthy # Update all pods in the namespace oc label pods --all status=unhealthy # Update a pod identified by the type and name in "pod.json" oc label -f pod.json status=unhealthy # Update pod 'foo' only if the resource is unchanged from version 1 oc label pods foo status=unhealthy --resource-version=1 # Update pod 'foo' by removing a label named 'bar' if it exists # Does not require the --overwrite flag oc label pods foo bar- 2.7.1.89. oc login Log in to a server Example usage # Log in interactively oc login --username=myuser # Log in to the given server with the given certificate authority file oc login localhost:8443 --certificate-authority=/path/to/cert.crt # Log in to the given server with the given credentials (will not prompt interactively) oc login localhost:8443 --username=myuser --password=mypass # Log in to the given server through a browser oc login localhost:8443 --web --callback-port 8280 2.7.1.90. oc logout End the current server session Example usage # Log out oc logout 2.7.1.91. 
oc logs Print the logs for a container in a pod Example usage # Start streaming the logs of the most recent build of the openldap build config oc logs -f bc/openldap # Start streaming the logs of the latest deployment of the mysql deployment config oc logs -f dc/mysql # Get the logs of the first deployment for the mysql deployment config. Note that logs # from older deployments may not exist either because the deployment was successful # or due to deployment pruning or manual deletion of the deployment oc logs --version=1 dc/mysql # Return a snapshot of ruby-container logs from pod backend oc logs backend -c ruby-container # Start streaming of ruby-container logs from pod backend oc logs -f pod/backend -c ruby-container 2.7.1.92. oc new-app Create a new application Example usage # List all local templates and image streams that can be used to create an app oc new-app --list # Create an application based on the source code in the current git repository (with a public remote) and a container image oc new-app . --image=registry/repo/langimage # Create an application myapp with Docker based build strategy expecting binary input oc new-app --strategy=docker --binary --name myapp # Create a Ruby application based on the provided [image]~[source code] combination oc new-app centos/ruby-25-centos7~https://github.com/sclorg/ruby-ex.git # Use the public container registry MySQL image to create an app. Generated artifacts will be labeled with db=mysql oc new-app mysql MYSQL_USER=user MYSQL_PASSWORD=pass MYSQL_DATABASE=testdb -l db=mysql # Use a MySQL image in a private registry to create an app and override application artifacts' names oc new-app --image=myregistry.com/mycompany/mysql --name=private # Use an image with the full manifest list to create an app and override application artifacts' names oc new-app --image=myregistry.com/mycompany/image --name=private --import-mode=PreserveOriginal # Create an application from a remote repository using its beta4 branch oc new-app https://github.com/openshift/ruby-hello-world#beta4 # Create an application based on a stored template, explicitly setting a parameter value oc new-app --template=ruby-helloworld-sample --param=MYSQL_USER=admin # Create an application from a remote repository and specify a context directory oc new-app https://github.com/youruser/yourgitrepo --context-dir=src/build # Create an application from a remote private repository and specify which existing secret to use oc new-app https://github.com/youruser/yourgitrepo --source-secret=yoursecret # Create an application based on a template file, explicitly setting a parameter value oc new-app --file=./example/myapp/template.json --param=MYSQL_USER=admin # Search all templates, image streams, and container images for the ones that match "ruby" oc new-app --search ruby # Search for "ruby", but only in stored templates (--template, --image-stream and --image # can be used to filter search results) oc new-app --search --template=ruby # Search for "ruby" in stored templates and print the output as YAML oc new-app --search --template=ruby --output=yaml 2.7.1.93. oc new-build Create a new build configuration Example usage # Create a build config based on the source code in the current git repository (with a public # remote) and a container image oc new-build . 
--image=repo/langimage # Create a NodeJS build config based on the provided [image]~[source code] combination oc new-build centos/nodejs-8-centos7~https://github.com/sclorg/nodejs-ex.git # Create a build config from a remote repository using its beta2 branch oc new-build https://github.com/openshift/ruby-hello-world#beta2 # Create a build config using a Dockerfile specified as an argument oc new-build -D $'FROM centos:7\nRUN yum install -y httpd' # Create a build config from a remote repository and add custom environment variables oc new-build https://github.com/openshift/ruby-hello-world -e RACK_ENV=development # Create a build config from a remote private repository and specify which existing secret to use oc new-build https://github.com/youruser/yourgitrepo --source-secret=yoursecret # Create a build config using an image with the full manifest list to create an app and override application artifacts' names oc new-build --image=myregistry.com/mycompany/image --name=private --import-mode=PreserveOriginal # Create a build config from a remote repository and inject the npmrc into a build oc new-build https://github.com/openshift/ruby-hello-world --build-secret npmrc:.npmrc # Create a build config from a remote repository and inject environment data into a build oc new-build https://github.com/openshift/ruby-hello-world --build-config-map env:config # Create a build config that gets its input from a remote repository and another container image oc new-build https://github.com/openshift/ruby-hello-world --source-image=openshift/jenkins-1-centos7 --source-image-path=/var/lib/jenkins:tmp 2.7.1.94. oc new-project Request a new project Example usage # Create a new project with minimal information oc new-project web-team-dev # Create a new project with a display name and description oc new-project web-team-dev --display-name="Web Team Development" --description="Development project for the web team." 2.7.1.95. oc observe Observe changes to resources and react to them (experimental) Example usage # Observe changes to services oc observe services # Observe changes to services, including the clusterIP and invoke a script for each oc observe services --template '{ .spec.clusterIP }' -- register_dns.sh # Observe changes to services filtered by a label selector oc observe services -l regist-dns=true --template '{ .spec.clusterIP }' -- register_dns.sh 2.7.1.96. oc patch Update fields of a resource Example usage # Partially update a node using a strategic merge patch, specifying the patch as JSON oc patch node k8s-node-1 -p '{"spec":{"unschedulable":true}}' # Partially update a node using a strategic merge patch, specifying the patch as YAML oc patch node k8s-node-1 -p $'spec:\n unschedulable: true' # Partially update a node identified by the type and name specified in "node.json" using strategic merge patch oc patch -f node.json -p '{"spec":{"unschedulable":true}}' # Update a container's image; spec.containers[*].name is required because it's a merge key oc patch pod valid-pod -p '{"spec":{"containers":[{"name":"kubernetes-serve-hostname","image":"new image"}]}}' # Update a container's image using a JSON patch with positional arrays oc patch pod valid-pod --type='json' -p='[{"op": "replace", "path": "/spec/containers/0/image", "value":"new image"}]' # Update a deployment's replicas through the scale subresource using a merge patch. oc patch deployment nginx-deployment --subresource='scale' --type='merge' -p '{"spec":{"replicas":2}}' 2.7.1.97.
oc plugin list List all visible plugin executables on a user's PATH Example usage # List all available plugins oc plugin list 2.7.1.98. oc policy add-role-to-user Add a role to users or service accounts for the current project Example usage # Add the 'view' role to user1 for the current project oc policy add-role-to-user view user1 # Add the 'edit' role to serviceaccount1 for the current project oc policy add-role-to-user edit -z serviceaccount1 2.7.1.99. oc policy scc-review Check which service account can create a pod Example usage # Check whether service accounts sa1 and sa2 can admit a pod with a template pod spec specified in my_resource.yaml # Service Account specified in myresource.yaml file is ignored oc policy scc-review -z sa1,sa2 -f my_resource.yaml # Check whether service accounts system:serviceaccount:bob:default can admit a pod with a template pod spec specified in my_resource.yaml oc policy scc-review -z system:serviceaccount:bob:default -f my_resource.yaml # Check whether the service account specified in my_resource_with_sa.yaml can admit the pod oc policy scc-review -f my_resource_with_sa.yaml # Check whether the default service account can admit the pod; default is taken since no service account is defined in myresource_with_no_sa.yaml oc policy scc-review -f myresource_with_no_sa.yaml 2.7.1.100. oc policy scc-subject-review Check whether a user or a service account can create a pod Example usage # Check whether user bob can create a pod specified in myresource.yaml oc policy scc-subject-review -u bob -f myresource.yaml # Check whether user bob who belongs to projectAdmin group can create a pod specified in myresource.yaml oc policy scc-subject-review -u bob -g projectAdmin -f myresource.yaml # Check whether a service account specified in the pod template spec in myresourcewithsa.yaml can create the pod oc policy scc-subject-review -f myresourcewithsa.yaml 2.7.1.101. oc port-forward Forward one or more local ports to a pod Example usage # Listen on ports 5000 and 6000 locally, forwarding data to/from ports 5000 and 6000 in the pod oc port-forward pod/mypod 5000 6000 # Listen on ports 5000 and 6000 locally, forwarding data to/from ports 5000 and 6000 in a pod selected by the deployment oc port-forward deployment/mydeployment 5000 6000 # Listen on port 8443 locally, forwarding to the targetPort of the service's port named "https" in a pod selected by the service oc port-forward service/myservice 8443:https # Listen on port 8888 locally, forwarding to 5000 in the pod oc port-forward pod/mypod 8888:5000 # Listen on port 8888 on all addresses, forwarding to 5000 in the pod oc port-forward --address 0.0.0.0 pod/mypod 8888:5000 # Listen on port 8888 on localhost and selected IP, forwarding to 5000 in the pod oc port-forward --address localhost,10.19.21.23 pod/mypod 8888:5000 # Listen on a random port locally, forwarding to 5000 in the pod oc port-forward pod/mypod :5000 2.7.1.102. 
oc process Process a template into list of resources Example usage # Convert the template.json file into a resource list and pass to create oc process -f template.json | oc create -f - # Process a file locally instead of contacting the server oc process -f template.json --local -o yaml # Process template while passing a user-defined label oc process -f template.json -l name=mytemplate # Convert a stored template into a resource list oc process foo # Convert a stored template into a resource list by setting/overriding parameter values oc process foo PARM1=VALUE1 PARM2=VALUE2 # Convert a template stored in different namespace into a resource list oc process openshift//foo # Convert template.json into a resource list cat template.json | oc process -f - 2.7.1.103. oc project Switch to another project Example usage # Switch to the 'myapp' project oc project myapp # Display the project currently in use oc project 2.7.1.104. oc projects Display existing projects Example usage # List all projects oc projects 2.7.1.105. oc proxy Run a proxy to the Kubernetes API server Example usage # To proxy all of the Kubernetes API and nothing else oc proxy --api-prefix=/ # To proxy only part of the Kubernetes API and also some static files # You can get pods info with 'curl localhost:8001/api/v1/pods' oc proxy --www=/my/files --www-prefix=/static/ --api-prefix=/api/ # To proxy the entire Kubernetes API at a different root # You can get pods info with 'curl localhost:8001/custom/api/v1/pods' oc proxy --api-prefix=/custom/ # Run a proxy to the Kubernetes API server on port 8011, serving static content from ./local/www/ oc proxy --port=8011 --www=./local/www/ # Run a proxy to the Kubernetes API server on an arbitrary local port # The chosen port for the server will be output to stdout oc proxy --port=0 # Run a proxy to the Kubernetes API server, changing the API prefix to k8s-api # This makes e.g. the pods API available at localhost:8001/k8s-api/v1/pods/ oc proxy --api-prefix=/k8s-api 2.7.1.106. oc registry info Print information about the integrated registry Example usage # Display information about the integrated registry oc registry info 2.7.1.107. oc registry login Log in to the integrated registry Example usage # Log in to the integrated registry oc registry login # Log in to different registry using BASIC auth credentials oc registry login --registry quay.io/myregistry --auth-basic=USER:PASS 2.7.1.108. oc replace Replace a resource by file name or stdin Example usage # Replace a pod using the data in pod.json oc replace -f ./pod.json # Replace a pod based on the JSON passed into stdin cat pod.json | oc replace -f - # Update a single-container pod's image version (tag) to v4 oc get pod mypod -o yaml | sed 's/\(image: myimage\):.*$/\1:v4/' | oc replace -f - # Force replace, delete and then re-create the resource oc replace --force -f ./pod.json 2.7.1.109. oc rollback Revert part of an application back to a deployment Example usage # Perform a rollback to the last successfully completed deployment for a deployment config oc rollback frontend # See what a rollback to version 3 will look like, but do not perform the rollback oc rollback frontend --to-version=3 --dry-run # Perform a rollback to a specific deployment oc rollback frontend-2 # Perform the rollback manually by piping the JSON of the new config back to oc oc rollback frontend -o json | oc replace dc/frontend -f - # Print the updated deployment configuration in JSON format instead of performing the rollback oc rollback frontend -o json 2.7.1.110.
oc rollout cancel Cancel the in-progress deployment Example usage # Cancel the in-progress deployment based on 'nginx' oc rollout cancel dc/nginx 2.7.1.111. oc rollout history View rollout history Example usage # View the rollout history of a deployment oc rollout history dc/nginx # View the details of deployment revision 3 oc rollout history dc/nginx --revision=3 2.7.1.112. oc rollout latest Start a new rollout for a deployment config with the latest state from its triggers Example usage # Start a new rollout based on the latest images defined in the image change triggers oc rollout latest dc/nginx # Print the rolled out deployment config oc rollout latest dc/nginx -o json 2.7.1.113. oc rollout pause Mark the provided resource as paused Example usage # Mark the nginx deployment as paused. Any current state of # the deployment will continue its function, new updates to the deployment will not # have an effect as long as the deployment is paused oc rollout pause dc/nginx 2.7.1.114. oc rollout restart Restart a resource Example usage # Restart a deployment oc rollout restart deployment/nginx # Restart a daemon set oc rollout restart daemonset/abc # Restart deployments with the app=nginx label oc rollout restart deployment --selector=app=nginx 2.7.1.115. oc rollout resume Resume a paused resource Example usage # Resume an already paused deployment oc rollout resume dc/nginx 2.7.1.116. oc rollout retry Retry the latest failed rollout Example usage # Retry the latest failed deployment based on 'frontend' # The deployer pod and any hook pods are deleted for the latest failed deployment oc rollout retry dc/frontend 2.7.1.117. oc rollout status Show the status of the rollout Example usage # Watch the status of the latest rollout oc rollout status dc/nginx 2.7.1.118. oc rollout undo Undo a rollout Example usage # Roll back to the deployment oc rollout undo dc/nginx # Roll back to deployment revision 3. The replication controller for that version must exist oc rollout undo dc/nginx --to-revision=3 2.7.1.119. oc rsh Start a shell session in a container Example usage # Open a shell session on the first container in pod 'foo' oc rsh foo # Open a shell session on the first container in pod 'foo' and namespace 'bar' # (Note that oc client specific arguments must come before the resource name and its arguments) oc rsh -n bar foo # Run the command 'cat /etc/resolv.conf' inside pod 'foo' oc rsh foo cat /etc/resolv.conf # See the configuration of your internal registry oc rsh dc/docker-registry cat config.yml # Open a shell session on the container named 'index' inside a pod of your job oc rsh -c index job/scheduled 2.7.1.120. oc rsync Copy files between a local file system and a pod Example usage # Synchronize a local directory with a pod directory oc rsync ./local/dir/ POD:/remote/dir # Synchronize a pod directory with a local directory oc rsync POD:/remote/dir/ ./local/dir 2.7.1.121. 
oc run Run a particular image on the cluster Example usage # Start a nginx pod oc run nginx --image=nginx # Start a hazelcast pod and let the container expose port 5701 oc run hazelcast --image=hazelcast/hazelcast --port=5701 # Start a hazelcast pod and set environment variables "DNS_DOMAIN=cluster" and "POD_NAMESPACE=default" in the container oc run hazelcast --image=hazelcast/hazelcast --env="DNS_DOMAIN=cluster" --env="POD_NAMESPACE=default" # Start a hazelcast pod and set labels "app=hazelcast" and "env=prod" in the container oc run hazelcast --image=hazelcast/hazelcast --labels="app=hazelcast,env=prod" # Dry run; print the corresponding API objects without creating them oc run nginx --image=nginx --dry-run=client # Start a nginx pod, but overload the spec with a partial set of values parsed from JSON oc run nginx --image=nginx --overrides='{ "apiVersion": "v1", "spec": { ... } }' # Start a busybox pod and keep it in the foreground, don't restart it if it exits oc run -i -t busybox --image=busybox --restart=Never # Start the nginx pod using the default command, but use custom arguments (arg1 .. argN) for that command oc run nginx --image=nginx -- <arg1> <arg2> ... <argN> # Start the nginx pod using a different command and custom arguments oc run nginx --image=nginx --command -- <cmd> <arg1> ... <argN> 2.7.1.122. oc scale Set a new size for a deployment, replica set, or replication controller Example usage # Scale a replica set named 'foo' to 3 oc scale --replicas=3 rs/foo # Scale a resource identified by type and name specified in "foo.yaml" to 3 oc scale --replicas=3 -f foo.yaml # If the deployment named mysql's current size is 2, scale mysql to 3 oc scale --current-replicas=2 --replicas=3 deployment/mysql # Scale multiple replication controllers oc scale --replicas=5 rc/foo rc/bar rc/baz # Scale stateful set named 'web' to 3 oc scale --replicas=3 statefulset/web 2.7.1.123. oc secrets link Link secrets to a service account Example usage # Add an image pull secret to a service account to automatically use it for pulling pod images oc secrets link serviceaccount-name pull-secret --for=pull # Add an image pull secret to a service account to automatically use it for both pulling and pushing build images oc secrets link builder builder-image-secret --for=pull,mount 2.7.1.124. oc secrets unlink Detach secrets from a service account Example usage # Unlink a secret currently associated with a service account oc secrets unlink serviceaccount-name secret-name another-secret-name ... 2.7.1.125. oc set build-hook Update a build hook on a build config Example usage # Clear post-commit hook on a build config oc set build-hook bc/mybuild --post-commit --remove # Set the post-commit hook to execute a test suite using a new entrypoint oc set build-hook bc/mybuild --post-commit --command -- /bin/bash -c /var/lib/test-image.sh # Set the post-commit hook to execute a shell script oc set build-hook bc/mybuild --post-commit --script="/var/lib/test-image.sh param1 param2 && /var/lib/done.sh" 2.7.1.126. oc set build-secret Update a build secret on a build config Example usage # Clear the push secret on a build config oc set build-secret --push --remove bc/mybuild # Set the pull secret on a build config oc set build-secret --pull bc/mybuild mysecret # Set the push and pull secret on a build config oc set build-secret --push --pull bc/mybuild mysecret # Set the source secret on a set of build configs matching a selector oc set build-secret --source -l app=myapp gitsecret 2.7.1.127. 
oc set data Update the data within a config map or secret Example usage # Set the 'password' key of a secret oc set data secret/foo password=this_is_secret # Remove the 'password' key from a secret oc set data secret/foo password- # Update the 'haproxy.conf' key of a config map from a file on disk oc set data configmap/bar --from-file=../haproxy.conf # Update a secret with the contents of a directory, one key per file oc set data secret/foo --from-file=secret-dir 2.7.1.128. oc set deployment-hook Update a deployment hook on a deployment config Example usage # Clear pre and post hooks on a deployment config oc set deployment-hook dc/myapp --remove --pre --post # Set the pre deployment hook to execute a db migration command for an application # using the data volume from the application oc set deployment-hook dc/myapp --pre --volumes=data -- /var/lib/migrate-db.sh # Set a mid deployment hook along with additional environment variables oc set deployment-hook dc/myapp --mid --volumes=data -e VAR1=value1 -e VAR2=value2 -- /var/lib/prepare-deploy.sh 2.7.1.129. oc set env Update environment variables on a pod template Example usage # Update deployment config 'myapp' with a new environment variable oc set env dc/myapp STORAGE_DIR=/local # List the environment variables defined on a build config 'sample-build' oc set env bc/sample-build --list # List the environment variables defined on all pods oc set env pods --all --list # Output modified build config in YAML oc set env bc/sample-build STORAGE_DIR=/data -o yaml # Update all containers in all replication controllers in the project to have ENV=prod oc set env rc --all ENV=prod # Import environment from a secret oc set env --from=secret/mysecret dc/myapp # Import environment from a config map with a prefix oc set env --from=configmap/myconfigmap --prefix=MYSQL_ dc/myapp # Remove the environment variable ENV from container 'c1' in all deployment configs oc set env dc --all --containers="c1" ENV- # Remove the environment variable ENV from a deployment config definition on disk and # update the deployment config on the server oc set env -f dc.json ENV- # Set some of the local shell environment into a deployment config on the server env | grep RAILS_ | oc set env -e - dc/myapp 2.7.1.130. oc set image Update the image of a pod template Example usage # Set a deployment config's nginx container image to 'nginx:1.9.1', and its busybox container image to 'busybox'. oc set image dc/nginx busybox=busybox nginx=nginx:1.9.1 # Set a deployment config's app container image to the image referenced by the imagestream tag 'openshift/ruby:2.3'. oc set image dc/myapp app=openshift/ruby:2.3 --source=imagestreamtag # Update all deployments' and rc's nginx container's image to 'nginx:1.9.1' oc set image deployments,rc nginx=nginx:1.9.1 --all # Update image of all containers of daemonset abc to 'nginx:1.9.1' oc set image daemonset abc *=nginx:1.9.1 # Print result (in YAML format) of updating nginx container image from local file, without hitting the server oc set image -f path/to/file.yaml nginx=nginx:1.9.1 --local -o yaml
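The oc set image examples above target dc, deployments, rc, and daemonset resources in bulk; the same container=image syntax also applies to a single deployment addressed by name. A minimal sketch, assuming a hypothetical deployment 'frontend' with a container named 'www' (names and image are illustrative, not from the source):
# Update the 'www' container of the 'frontend' deployment to a new image tag
oc set image deployment/frontend www=registry.example.com/web:v2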
2.7.1.131. oc set image-lookup Change how images are resolved when deploying applications Example usage # Print all of the image streams and whether they resolve local names oc set image-lookup # Use local name lookup on image stream mysql oc set image-lookup mysql # Force a deployment to use local name lookup oc set image-lookup deploy/mysql # Show the current status of the deployment lookup oc set image-lookup deploy/mysql --list # Disable local name lookup on image stream mysql oc set image-lookup mysql --enabled=false # Set local name lookup on all image streams oc set image-lookup --all 2.7.1.132. oc set probe Update a probe on a pod template Example usage # Clear both readiness and liveness probes off all containers oc set probe dc/myapp --remove --readiness --liveness # Set an exec action as a liveness probe to run 'echo ok' oc set probe dc/myapp --liveness -- echo ok # Set a readiness probe to try to open a TCP socket on 3306 oc set probe rc/mysql --readiness --open-tcp=3306 # Set an HTTP startup probe for port 8080 and path /healthz over HTTP on the pod IP oc set probe dc/webapp --startup --get-url=http://:8080/healthz # Set an HTTP readiness probe for port 8080 and path /healthz over HTTP on the pod IP oc set probe dc/webapp --readiness --get-url=http://:8080/healthz # Set an HTTP readiness probe over HTTPS on 127.0.0.1 for a hostNetwork pod oc set probe dc/router --readiness --get-url=https://127.0.0.1:1936/stats # Set only the initial-delay-seconds field on all deployments oc set probe dc --all --readiness --initial-delay-seconds=30 2.7.1.133. oc set resources Update resource requests/limits on objects with pod templates Example usage # Set a deployment's nginx container CPU limits to "200m" and memory to "512Mi" oc set resources deployment nginx -c=nginx --limits=cpu=200m,memory=512Mi # Set the resource request and limits for all containers in nginx oc set resources deployment nginx --limits=cpu=200m,memory=512Mi --requests=cpu=100m,memory=256Mi # Remove the resource requests for resources on containers in nginx oc set resources deployment nginx --limits=cpu=0,memory=0 --requests=cpu=0,memory=0 # Print the result (in YAML format) of updating nginx container limits locally, without hitting the server oc set resources -f path/to/file.yaml --limits=cpu=200m,memory=512Mi --local -o yaml 2.7.1.134. oc set route-backends Update the backends for a route Example usage # Print the backends on the route 'web' oc set route-backends web # Set two backend services on route 'web' with 2/3rds of traffic going to 'a' oc set route-backends web a=2 b=1 # Increase the traffic percentage going to b by 10%% relative to a oc set route-backends web --adjust b=+10%% # Set traffic percentage going to b to 10%% of the traffic going to a oc set route-backends web --adjust b=10%% # Set weight of b to 10 oc set route-backends web --adjust b=10 # Set the weight to all backends to zero oc set route-backends web --zero 2.7.1.135. oc set selector Set the selector on a resource Example usage # Set the labels and selector before creating a deployment/service pair. oc create service clusterip my-svc --clusterip="None" -o yaml --dry-run | oc set selector --local -f - 'environment=qa' -o yaml | oc create -f - oc create deployment my-dep -o yaml --dry-run | oc label --local -f - environment=qa -o yaml | oc create -f - 2.7.1.136.
oc set serviceaccount Update the service account of a resource Example usage # Set deployment nginx-deployment's service account to serviceaccount1 oc set serviceaccount deployment nginx-deployment serviceaccount1 # Print the result (in YAML format) of updated nginx deployment with service account from a local file, without hitting the API server oc set sa -f nginx-deployment.yaml serviceaccount1 --local --dry-run -o yaml 2.7.1.137. oc set subject Update the user, group, or service account in a role binding or cluster role binding Example usage # Update a cluster role binding for serviceaccount1 oc set subject clusterrolebinding admin --serviceaccount=namespace:serviceaccount1 # Update a role binding for user1, user2, and group1 oc set subject rolebinding admin --user=user1 --user=user2 --group=group1 # Print the result (in YAML format) of updating role binding subjects locally, without hitting the server oc create rolebinding admin --role=admin --user=admin -o yaml --dry-run | oc set subject --local -f - --user=foo -o yaml 2.7.1.138. oc set triggers Update the triggers on one or more objects Example usage # Print the triggers on the deployment config 'myapp' oc set triggers dc/myapp # Set all triggers to manual oc set triggers dc/myapp --manual # Enable all automatic triggers oc set triggers dc/myapp --auto # Reset the GitHub webhook on a build to a new, generated secret oc set triggers bc/webapp --from-github oc set triggers bc/webapp --from-webhook # Remove all triggers oc set triggers bc/webapp --remove-all # Stop triggering on config change oc set triggers dc/myapp --from-config --remove # Add an image trigger to a build config oc set triggers bc/webapp --from-image=namespace1/image:latest # Add an image trigger to a stateful set on the main container oc set triggers statefulset/db --from-image=namespace1/image:latest -c main 2.7.1.139. oc set volumes Update volumes on a pod template Example usage # List volumes defined on all deployment configs in the current project oc set volume dc --all # Add a new empty dir volume to deployment config (dc) 'myapp' mounted under # /var/lib/myapp oc set volume dc/myapp --add --mount-path=/var/lib/myapp # Use an existing persistent volume claim (PVC) to overwrite an existing volume 'v1' oc set volume dc/myapp --add --name=v1 -t pvc --claim-name=pvc1 --overwrite # Remove volume 'v1' from deployment config 'myapp' oc set volume dc/myapp --remove --name=v1 # Create a new persistent volume claim that overwrites an existing volume 'v1' oc set volume dc/myapp --add --name=v1 -t pvc --claim-size=1G --overwrite # Change the mount point for volume 'v1' to /data oc set volume dc/myapp --add --name=v1 -m /data --overwrite # Modify the deployment config by removing volume mount "v1" from container "c1" # (and by removing the volume "v1" if no other containers have volume mounts that reference it) oc set volume dc/myapp --remove --name=v1 --containers=c1 # Add new volume based on a more complex volume source (AWS EBS, GCE PD, # Ceph, Gluster, NFS, ISCSI, ...) oc set volume dc/myapp --add -m /data --source=<json-string> 2.7.1.140. 
oc start-build Start a new build Example usage # Starts build from build config "hello-world" oc start-build hello-world # Starts build from a build "hello-world-1" oc start-build --from-build=hello-world-1 # Use the contents of a directory as build input oc start-build hello-world --from-dir=src/ # Send the contents of a Git repository to the server from tag 'v2' oc start-build hello-world --from-repo=../hello-world --commit=v2 # Start a new build for build config "hello-world" and watch the logs until the build # completes or fails oc start-build hello-world --follow # Start a new build for build config "hello-world" and wait until the build completes. It # exits with a non-zero return code if the build fails oc start-build hello-world --wait 2.7.1.141. oc status Show an overview of the current project Example usage # See an overview of the current project oc status # Export the overview of the current project in an svg file oc status -o dot | dot -T svg -o project.svg # See an overview of the current project including details for any identified issues oc status --suggest 2.7.1.142. oc tag Tag existing images into image streams Example usage # Tag the current image for the image stream 'openshift/ruby' and tag '2.0' into the image stream 'yourproject/ruby' with tag 'tip' oc tag openshift/ruby:2.0 yourproject/ruby:tip # Tag a specific image oc tag openshift/ruby@sha256:6b646fa6bf5e5e4c7fa41056c27910e679c03ebe7f93e361e6515a9da7e258cc yourproject/ruby:tip # Tag an external container image oc tag --source=docker openshift/origin-control-plane:latest yourproject/ruby:tip # Tag an external container image and request pullthrough for it oc tag --source=docker openshift/origin-control-plane:latest yourproject/ruby:tip --reference-policy=local # Tag an external container image and include the full manifest list oc tag --source=docker openshift/origin-control-plane:latest yourproject/ruby:tip --import-mode=PreserveOriginal # Remove the specified spec tag from an image stream oc tag openshift/origin-control-plane:latest -d 2.7.1.143. oc version Print the client and server version information Example usage # Print the OpenShift client, kube-apiserver, and openshift-apiserver version information for the current context oc version # Print the OpenShift client, kube-apiserver, and openshift-apiserver version numbers for the current context oc version --short # Print the OpenShift client version information for the current context oc version --client 2.7.1.144. oc wait Experimental: Wait for a specific condition on one or many resources Example usage # Wait for the pod "busybox1" to contain the status condition of type "Ready" oc wait --for=condition=Ready pod/busybox1 # The default value of status condition is true; you can wait for other targets after an equal delimiter (compared after Unicode simple case folding, which is a more general form of case-insensitivity): oc wait --for=condition=Ready=false pod/busybox1 # Wait for the pod "busybox1" to contain the status phase to be "Running". oc wait --for=jsonpath='{.status.phase}'=Running pod/busybox1 # Wait for the pod "busybox1" to be deleted, with a timeout of 60s, after having issued the "delete" command oc delete pod/busybox1 oc wait --for=delete pod/busybox1 --timeout=60s 2.7.1.145. oc whoami Return information about the current session Example usage # Display the currently authenticated user oc whoami 2.7.2. Additional resources OpenShift CLI administrator command reference 2.8.
OpenShift CLI administrator command reference This reference provides descriptions and example commands for OpenShift CLI ( oc ) administrator commands. You must have cluster-admin or equivalent permissions to use these commands. For developer commands, see the OpenShift CLI developer command reference . Run oc adm -h to list all administrator commands or run oc <command> --help to get additional details for a specific command. 2.8.1. OpenShift CLI (oc) administrator commands 2.8.1.1. oc adm build-chain Output the inputs and dependencies of your builds Example usage # Build the dependency tree for the 'latest' tag in <image-stream> oc adm build-chain <image-stream> # Build the dependency tree for the 'v2' tag in dot format and visualize it via the dot utility oc adm build-chain <image-stream>:v2 -o dot | dot -T svg -o deps.svg # Build the dependency tree across all namespaces for the specified image stream tag found in the 'test' namespace oc adm build-chain <image-stream> -n test --all 2.8.1.2. oc adm catalog mirror Mirror an operator-registry catalog Example usage # Mirror an operator-registry image and its contents to a registry oc adm catalog mirror quay.io/my/image:latest myregistry.com # Mirror an operator-registry image and its contents to a particular namespace in a registry oc adm catalog mirror quay.io/my/image:latest myregistry.com/my-namespace # Mirror to an airgapped registry by first mirroring to files oc adm catalog mirror quay.io/my/image:latest file:///local/index oc adm catalog mirror file:///local/index/my/image:latest my-airgapped-registry.com # Configure a cluster to use a mirrored registry oc apply -f manifests/imageDigestMirrorSet.yaml # Edit the mirroring mappings and mirror with "oc image mirror" manually oc adm catalog mirror --manifests-only quay.io/my/image:latest myregistry.com oc image mirror -f manifests/mapping.txt # Delete all ImageDigestMirrorSets generated by oc adm catalog mirror oc delete imagedigestmirrorset -l operators.openshift.org/catalog=true 2.8.1.3. oc adm certificate approve Approve a certificate signing request Example usage # Approve CSR 'csr-sqgzp' oc adm certificate approve csr-sqgzp 2.8.1.4. oc adm certificate deny Deny a certificate signing request Example usage # Deny CSR 'csr-sqgzp' oc adm certificate deny csr-sqgzp 2.8.1.5. oc adm copy-to-node Copies specified files to the node. 2.8.1.6. oc adm cordon Mark node as unschedulable Example usage # Mark node "foo" as unschedulable oc adm cordon foo 2.8.1.7. oc adm create-bootstrap-project-template Create a bootstrap project template Example usage # Output a bootstrap project template in YAML format to stdout oc adm create-bootstrap-project-template -o yaml 2.8.1.8. oc adm create-error-template Create an error page template Example usage # Output a template for the error page to stdout oc adm create-error-template 2.8.1.9. oc adm create-login-template Create a login template Example usage # Output a template for the login page to stdout oc adm create-login-template 2.8.1.10. oc adm create-provider-selection-template Create a provider selection template Example usage # Output a template for the provider selection page to stdout oc adm create-provider-selection-template 2.8.1.11. 
oc adm drain Drain node in preparation for maintenance Example usage # Drain node "foo", even if there are pods not managed by a replication controller, replica set, job, daemon set or stateful set on it oc adm drain foo --force # As above, but abort if there are pods not managed by a replication controller, replica set, job, daemon set or stateful set, and use a grace period of 15 minutes oc adm drain foo --grace-period=900 2.8.1.12. oc adm groups add-users Add users to a group Example usage # Add user1 and user2 to my-group oc adm groups add-users my-group user1 user2 2.8.1.13. oc adm groups new Create a new group Example usage # Add a group with no users oc adm groups new my-group # Add a group with two users oc adm groups new my-group user1 user2 # Add a group with one user and shorter output oc adm groups new my-group user1 -o name 2.8.1.14. oc adm groups prune Remove old OpenShift groups referencing missing records from an external provider Example usage # Prune all orphaned groups oc adm groups prune --sync-config=/path/to/ldap-sync-config.yaml --confirm # Prune all orphaned groups except the ones from the denylist file oc adm groups prune --blacklist=/path/to/denylist.txt --sync-config=/path/to/ldap-sync-config.yaml --confirm # Prune all orphaned groups from a list of specific groups specified in an allowlist file oc adm groups prune --whitelist=/path/to/allowlist.txt --sync-config=/path/to/ldap-sync-config.yaml --confirm # Prune all orphaned groups from a list of specific groups specified in a list oc adm groups prune groups/group_name groups/other_name --sync-config=/path/to/ldap-sync-config.yaml --confirm 2.8.1.15. oc adm groups remove-users Remove users from a group Example usage # Remove user1 and user2 from my-group oc adm groups remove-users my-group user1 user2 2.8.1.16. oc adm groups sync Sync OpenShift groups with records from an external provider Example usage # Sync all groups with an LDAP server oc adm groups sync --sync-config=/path/to/ldap-sync-config.yaml --confirm # Sync all groups except the ones from the blacklist file with an LDAP server oc adm groups sync --blacklist=/path/to/blacklist.txt --sync-config=/path/to/ldap-sync-config.yaml --confirm # Sync specific groups specified in an allowlist file with an LDAP server oc adm groups sync --whitelist=/path/to/allowlist.txt --sync-config=/path/to/sync-config.yaml --confirm # Sync all OpenShift groups that have been synced previously with an LDAP server oc adm groups sync --type=openshift --sync-config=/path/to/ldap-sync-config.yaml --confirm # Sync specific OpenShift groups if they have been synced previously with an LDAP server oc adm groups sync groups/group1 groups/group2 groups/group3 --sync-config=/path/to/sync-config.yaml --confirm 2.8.1.17. oc adm inspect Collect debugging data for a given resource Example usage # Collect debugging data for the "openshift-apiserver" clusteroperator oc adm inspect clusteroperator/openshift-apiserver # Collect debugging data for the "openshift-apiserver" and "kube-apiserver" clusteroperators oc adm inspect clusteroperator/openshift-apiserver clusteroperator/kube-apiserver # Collect debugging data for all clusteroperators oc adm inspect clusteroperator # Collect debugging data for all clusteroperators and clusterversions oc adm inspect clusteroperators,clusterversions 2.8.1.18. 
oc adm migrate icsp Update imagecontentsourcepolicy file(s) to imagedigestmirrorset file(s) Example usage # Update the imagecontentsourcepolicy.yaml file to a new imagedigestmirrorset file under the mydir directory oc adm migrate icsp imagecontentsourcepolicy.yaml --dest-dir mydir 2.8.1.19. oc adm migrate template-instances Update template instances to point to the latest group-version-kinds Example usage # Perform a dry-run of updating all objects oc adm migrate template-instances # To actually perform the update, the confirm flag must be appended oc adm migrate template-instances --confirm 2.8.1.20. oc adm must-gather Launch a new instance of a pod for gathering debug information Example usage # Gather information using the default plug-in image and command, writing into ./must-gather.local.<rand> oc adm must-gather # Gather information with a specific local folder to copy to oc adm must-gather --dest-dir=/local/directory # Gather audit information oc adm must-gather -- /usr/bin/gather_audit_logs # Gather information using multiple plug-in images oc adm must-gather --image=quay.io/kubevirt/must-gather --image=quay.io/openshift/origin-must-gather # Gather information using a specific image stream plug-in oc adm must-gather --image-stream=openshift/must-gather:latest # Gather information using a specific image, command, and pod directory oc adm must-gather --image=my/image:tag --source-dir=/pod/directory -- myspecial-command.sh 2.8.1.21. oc adm new-project Create a new project Example usage # Create a new project using a node selector oc adm new-project myproject --node-selector='type=user-node,region=east' 2.8.1.22. oc adm node-logs Display and filter node logs Example usage # Show kubelet logs from all masters oc adm node-logs --role master -u kubelet # See what logs are available in masters in /var/log oc adm node-logs --role master --path=/ # Display cron log file from all masters oc adm node-logs --role master --path=cron 2.8.1.23. oc adm ocp-certificates monitor-certificates Watch platform certificates. Example usage # Watch platform certificates. oc adm ocp-certificates monitor-certificates 2.8.1.24. oc adm ocp-certificates regenerate-leaf Regenerate client and serving certificates of an OpenShift cluster 2.8.1.25. oc adm ocp-certificates regenerate-machine-config-server-serving-cert Regenerate the machine config operator certificates in an OpenShift cluster 2.8.1.26. oc adm ocp-certificates regenerate-top-level Regenerate the top level certificates in an OpenShift cluster 2.8.1.27. oc adm ocp-certificates remove-old-trust Remove old CAs from ConfigMaps representing platform trust bundles in an OpenShift cluster Example usage # Remove only CA certificates created before a certain date from all trust bundles oc adm ocp-certificates remove-old-trust configmaps -A --all --created-before 2023-06-05T14:44:06Z 2.8.1.28. oc adm ocp-certificates update-ignition-ca-bundle-for-machine-config-server Update user-data secrets in an OpenShift cluster to use updated MCO certs Example usage # Regenerate the MCO certs without modifying user-data secrets oc adm certificates regenerate-machine-config-server-serving-cert --update-ignition=false # Update the user-data secrets to use new MCS certs oc adm certificates update-ignition-ca-bundle-for-machine-config-server 2.8.1.29.
oc adm pod-network isolate-projects Isolate project network Example usage # Provide isolation for project p1 oc adm pod-network isolate-projects <p1> # Allow all projects with label name=top-secret to have their own isolated project network oc adm pod-network isolate-projects --selector='name=top-secret' 2.8.1.30. oc adm pod-network join-projects Join project network Example usage # Allow project p2 to use project p1 network oc adm pod-network join-projects --to=<p1> <p2> # Allow all projects with label name=top-secret to use project p1 network oc adm pod-network join-projects --to=<p1> --selector='name=top-secret' 2.8.1.31. oc adm pod-network make-projects-global Make project network global Example usage # Allow project p1 to access all pods in the cluster and vice versa oc adm pod-network make-projects-global <p1> # Allow all projects with label name=share to access all pods in the cluster and vice versa oc adm pod-network make-projects-global --selector='name=share' 2.8.1.32. oc adm policy add-role-to-user Add a role to users or service accounts for the current project Example usage # Add the 'view' role to user1 for the current project oc adm policy add-role-to-user view user1 # Add the 'edit' role to serviceaccount1 for the current project oc adm policy add-role-to-user edit -z serviceaccount1 2.8.1.33. oc adm policy add-scc-to-group Add a security context constraint to groups Example usage # Add the 'restricted' security context constraint to group1 and group2 oc adm policy add-scc-to-group restricted group1 group2 2.8.1.34. oc adm policy add-scc-to-user Add a security context constraint to users or a service account Example usage # Add the 'restricted' security context constraint to user1 and user2 oc adm policy add-scc-to-user restricted user1 user2 # Add the 'privileged' security context constraint to serviceaccount1 in the current namespace oc adm policy add-scc-to-user privileged -z serviceaccount1 2.8.1.35. oc adm policy scc-review Check which service account can create a pod Example usage # Check whether service accounts sa1 and sa2 can admit a pod with a template pod spec specified in my_resource.yaml # Service Account specified in myresource.yaml file is ignored oc adm policy scc-review -z sa1,sa2 -f my_resource.yaml # Check whether service accounts system:serviceaccount:bob:default can admit a pod with a template pod spec specified in my_resource.yaml oc adm policy scc-review -z system:serviceaccount:bob:default -f my_resource.yaml # Check whether the service account specified in my_resource_with_sa.yaml can admit the pod oc adm policy scc-review -f my_resource_with_sa.yaml # Check whether the default service account can admit the pod; default is taken since no service account is defined in myresource_with_no_sa.yaml oc adm policy scc-review -f myresource_with_no_sa.yaml 2.8.1.36. oc adm policy scc-subject-review Check whether a user or a service account can create a pod Example usage # Check whether user bob can create a pod specified in myresource.yaml oc adm policy scc-subject-review -u bob -f myresource.yaml # Check whether user bob who belongs to projectAdmin group can create a pod specified in myresource.yaml oc adm policy scc-subject-review -u bob -g projectAdmin -f myresource.yaml # Check whether a service account specified in the pod template spec in myresourcewithsa.yaml can create the pod oc adm policy scc-subject-review -f myresourcewithsa.yaml 2.8.1.37. 
oc adm prune builds Remove old completed and failed builds Example usage # Dry run deleting older completed and failed builds and also including # all builds whose associated build config no longer exists oc adm prune builds --orphans # To actually perform the prune operation, the confirm flag must be appended oc adm prune builds --orphans --confirm 2.8.1.38. oc adm prune deployments Remove old completed and failed deployment configs Example usage # Dry run deleting all but the last complete deployment for every deployment config oc adm prune deployments --keep-complete=1 # To actually perform the prune operation, the confirm flag must be appended oc adm prune deployments --keep-complete=1 --confirm 2.8.1.39. oc adm prune groups Remove old OpenShift groups referencing missing records from an external provider Example usage # Prune all orphaned groups oc adm prune groups --sync-config=/path/to/ldap-sync-config.yaml --confirm # Prune all orphaned groups except the ones from the denylist file oc adm prune groups --blacklist=/path/to/denylist.txt --sync-config=/path/to/ldap-sync-config.yaml --confirm # Prune all orphaned groups from a list of specific groups specified in an allowlist file oc adm prune groups --whitelist=/path/to/allowlist.txt --sync-config=/path/to/ldap-sync-config.yaml --confirm # Prune all orphaned groups from a list of specific groups specified in a list oc adm prune groups groups/group_name groups/other_name --sync-config=/path/to/ldap-sync-config.yaml --confirm 2.8.1.40. oc adm prune images Remove unreferenced images Example usage # See what the prune command would delete if only images and their referrers were more than an hour old # and obsoleted by 3 newer revisions under the same tag were considered oc adm prune images --keep-tag-revisions=3 --keep-younger-than=60m # To actually perform the prune operation, the confirm flag must be appended oc adm prune images --keep-tag-revisions=3 --keep-younger-than=60m --confirm # See what the prune command would delete if we are interested in removing images # exceeding currently set limit ranges ('openshift.io/Image') oc adm prune images --prune-over-size-limit # To actually perform the prune operation, the confirm flag must be appended oc adm prune images --prune-over-size-limit --confirm # Force the insecure HTTP protocol with the particular registry host name oc adm prune images --registry-url=http://registry.example.org --confirm # Force a secure connection with a custom certificate authority to the particular registry host name oc adm prune images --registry-url=registry.example.org --certificate-authority=/path/to/custom/ca.crt --confirm 2.8.1.41. oc adm reboot-machine-config-pool Initiate reboot of the specified MachineConfigPool. Example usage # Reboot all MachineConfigPools oc adm reboot-machine-config-pool mcp/worker mcp/master # Reboot all MachineConfigPools that inherit from worker. This includes all custom MachineConfigPools and infra. oc adm reboot-machine-config-pool mcp/worker # Reboot masters oc adm reboot-machine-config-pool mcp/master 2.8.1.42. 
oc adm release extract Extract the contents of an update payload to disk Example usage # Use git to check out the source code for the current cluster release to DIR oc adm release extract --git=DIR # Extract cloud credential requests for AWS oc adm release extract --credentials-requests --cloud=aws # Use git to check out the source code for the current cluster release to DIR from the linux/s390x image # Note: Wildcard filter is not supported; pass a single os/arch to extract oc adm release extract --git=DIR quay.io/openshift-release-dev/ocp-release:4.11.2 --filter-by-os=linux/s390x 2.8.1.43. oc adm release info Display information about a release Example usage # Show information about the cluster's current release oc adm release info # Show the source code that comprises a release oc adm release info 4.11.2 --commit-urls # Show the source code difference between two releases oc adm release info 4.11.0 4.11.2 --commits # Show where the images referenced by the release are located oc adm release info quay.io/openshift-release-dev/ocp-release:4.11.2 --pullspecs # Show information about the linux/s390x image # Note: Wildcard filter is not supported; pass a single os/arch to extract oc adm release info quay.io/openshift-release-dev/ocp-release:4.11.2 --filter-by-os=linux/s390x 2.8.1.44. oc adm release mirror Mirror a release to a different image registry location Example usage # Perform a dry run showing what would be mirrored, including the mirror objects oc adm release mirror 4.11.0 --to myregistry.local/openshift/release \ --release-image-signature-to-dir /tmp/releases --dry-run # Mirror a release into the current directory oc adm release mirror 4.11.0 --to file://openshift/release \ --release-image-signature-to-dir /tmp/releases # Mirror a release to another directory in the default location oc adm release mirror 4.11.0 --to-dir /tmp/releases # Upload a release from the current directory to another server oc adm release mirror --from file://openshift/release --to myregistry.com/openshift/release \ --release-image-signature-to-dir /tmp/releases # Mirror the 4.11.0 release to repository registry.example.com and apply signatures to connected cluster oc adm release mirror --from=quay.io/openshift-release-dev/ocp-release:4.11.0-x86_64 \ --to=registry.example.com/your/repository --apply-release-image-signature 2.8.1.45. oc adm release new Create a new OpenShift release Example usage # Create a release from the latest origin images and push to a DockerHub repository oc adm release new --from-image-stream=4.11 -n origin --to-image docker.io/mycompany/myrepo:latest # Create a new release with updated metadata from a release oc adm release new --from-release registry.ci.openshift.org/origin/release:v4.11 --name 4.11.1 \ --previous 4.11.0 --metadata ... --to-image docker.io/mycompany/myrepo:latest # Create a new release and override a single image oc adm release new --from-release registry.ci.openshift.org/origin/release:v4.11 \ cli=docker.io/mycompany/cli:latest --to-image docker.io/mycompany/myrepo:latest # Run a verification pass to ensure the release can be reproduced oc adm release new --from-release registry.ci.openshift.org/origin/release:v4.11 2.8.1.46. oc adm restart-kubelet Restarts kubelet on the specified nodes 2.8.1.47. 
oc adm taint Update the taints on one or more nodes Example usage # Update node 'foo' with a taint with key 'dedicated' and value 'special-user' and effect 'NoSchedule' # If a taint with that key and effect already exists, its value is replaced as specified oc adm taint nodes foo dedicated=special-user:NoSchedule # Remove from node 'foo' the taint with key 'dedicated' and effect 'NoSchedule' if one exists oc adm taint nodes foo dedicated:NoSchedule- # Remove from node 'foo' all the taints with key 'dedicated' oc adm taint nodes foo dedicated- # Add a taint with key 'dedicated' on nodes having label mylabel=X oc adm taint node -l myLabel=X dedicated=foo:PreferNoSchedule # Add to node 'foo' a taint with key 'bar' and no value oc adm taint nodes foo bar:NoSchedule 2.8.1.48. oc adm top images Show usage statistics for images Example usage # Show usage statistics for images oc adm top images 2.8.1.49. oc adm top imagestreams Show usage statistics for image streams Example usage # Show usage statistics for image streams oc adm top imagestreams 2.8.1.50. oc adm top node Display resource (CPU/memory) usage of nodes Example usage # Show metrics for all nodes oc adm top node # Show metrics for a given node oc adm top node NODE_NAME 2.8.1.51. oc adm top pod Display resource (CPU/memory) usage of pods Example usage # Show metrics for all pods in the default namespace oc adm top pod # Show metrics for all pods in the given namespace oc adm top pod --namespace=NAMESPACE # Show metrics for a given pod and its containers oc adm top pod POD_NAME --containers # Show metrics for the pods defined by label name=myLabel oc adm top pod -l name=myLabel 2.8.1.52. oc adm uncordon Mark node as schedulable Example usage # Mark node "foo" as schedulable oc adm uncordon foo 2.8.1.53. oc adm upgrade Upgrade a cluster or adjust the upgrade channel Example usage # View the update status and available cluster updates oc adm upgrade # Update to the latest version oc adm upgrade --to-latest=true 2.8.1.54. oc adm verify-image-signature Verify the image identity contained in the image signature Example usage # Verify the image signature and identity using the local GPG keychain oc adm verify-image-signature sha256:c841e9b64e4579bd56c794bdd7c36e1c257110fd2404bebbb8b613e4935228c4 \ --expected-identity=registry.local:5000/foo/bar:v1 # Verify the image signature and identity using the local GPG keychain and save the status oc adm verify-image-signature sha256:c841e9b64e4579bd56c794bdd7c36e1c257110fd2404bebbb8b613e4935228c4 \ --expected-identity=registry.local:5000/foo/bar:v1 --save # Verify the image signature and identity via exposed registry route oc adm verify-image-signature sha256:c841e9b64e4579bd56c794bdd7c36e1c257110fd2404bebbb8b613e4935228c4 \ --expected-identity=registry.local:5000/foo/bar:v1 \ --registry-url=docker-registry.foo.com # Remove all signature verifications from the image oc adm verify-image-signature sha256:c841e9b64e4579bd56c794bdd7c36e1c257110fd2404bebbb8b613e4935228c4 --remove-all 2.8.1.55. 
oc adm wait-for-node-reboot Wait for nodes to reboot after running oc adm reboot-machine-config-pool Example usage # Wait for all nodes to complete a requested reboot from 'oc adm reboot-machine-config-pool mcp/worker mcp/master' oc adm wait-for-node-reboot nodes --all # Wait for masters to complete a requested reboot from 'oc adm reboot-machine-config-pool mcp/master' oc adm wait-for-node-reboot nodes -l node-role.kubernetes.io/master # Wait for masters to complete a specific reboot oc adm wait-for-node-reboot nodes -l node-role.kubernetes.io/master --reboot-number=4 2.8.1.56. oc adm wait-for-stable-cluster Wait for the platform operators to become stable 2.8.2. Additional resources OpenShift CLI developer command reference
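Note: oc adm restart-kubelet and oc adm wait-for-stable-cluster are listed above without an Example usage block. The invocations below are an illustrative sketch only: the commands themselves are documented above, but the node-selection arguments and the --minimum-stable-period flag are assumptions inferred from the patterns used by oc adm wait-for-node-reboot and oc adm reboot-machine-config-pool, and should be confirmed with oc adm <subcommand> --help before use.

Example usage (illustrative sketch; arguments and flags are assumptions)

# Restart the kubelet on all nodes (assumed to follow the same node-selection pattern as 'oc adm wait-for-node-reboot')
oc adm restart-kubelet nodes --all

# Restart the kubelet only on control plane nodes (assumed selector usage)
oc adm restart-kubelet nodes -l node-role.kubernetes.io/master

# Wait until the platform operators report a stable cluster
oc adm wait-for-stable-cluster

# Require the cluster to stay stable for a minimum period before returning (the --minimum-stable-period flag is an assumption)
oc adm wait-for-stable-cluster --minimum-stable-period=5m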
[ "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "subscription-manager register", "subscription-manager refresh", "subscription-manager list --available --matches '*OpenShift*'", "subscription-manager attach --pool=<pool_id>", "subscription-manager repos --enable=\"rhocp-4.14-for-rhel-8-x86_64-rpms\"", "yum install openshift-clients", "oc <command>", "brew install openshift-cli", "oc <command>", "oc login -u user1", "Server [https://localhost:8443]: https://openshift.example.com:6443 1 The server uses a certificate signed by an unknown authority. You can bypass the certificate check, but any data you send to the server could be intercepted by others. Use insecure connections? (y/n): y 2 Authentication required for https://openshift.example.com:6443 (openshift) Username: user1 Password: 3 Login successful. You don't have any projects. You can try to create a new project, by running oc new-project <projectname> Welcome! See 'oc help' to get started.", "oc login <cluster_url> --web 1", "Opening login URL in the default browser: https://openshift.example.com Opening in existing browser session.", "Login successful. You don't have any projects. You can try to create a new project, by running oc new-project <projectname>", "oc new-project my-project", "Now using project \"my-project\" on server \"https://openshift.example.com:6443\".", "oc new-app https://github.com/sclorg/cakephp-ex", "--> Found image 40de956 (9 days old) in imagestream \"openshift/php\" under tag \"7.2\" for \"php\" Run 'oc status' to view your app.", "oc get pods -o wide", "NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE cakephp-ex-1-build 0/1 Completed 0 5m45s 10.131.0.10 ip-10-0-141-74.ec2.internal <none> cakephp-ex-1-deploy 0/1 Completed 0 3m44s 10.129.2.9 ip-10-0-147-65.ec2.internal <none> cakephp-ex-1-ktz97 1/1 Running 0 3m33s 10.128.2.11 ip-10-0-168-105.ec2.internal <none>", "oc logs cakephp-ex-1-deploy", "--> Scaling cakephp-ex-1 to 1 --> Success", "oc project", "Using project \"my-project\" on server \"https://openshift.example.com:6443\".", "oc status", "In project my-project on server https://openshift.example.com:6443 svc/cakephp-ex - 172.30.236.80 ports 8080, 8443 dc/cakephp-ex deploys istag/cakephp-ex:latest <- bc/cakephp-ex source builds https://github.com/sclorg/cakephp-ex on openshift/php:7.2 deployment #1 deployed 2 minutes ago - 1 pod 3 infos identified, use 'oc status --suggest' to see details.", "oc api-resources", "NAME SHORTNAMES APIGROUP NAMESPACED KIND bindings true Binding componentstatuses cs false ComponentStatus configmaps cm true ConfigMap", "oc help", "OpenShift Client This client helps you develop, build, deploy, and run your applications on any OpenShift or Kubernetes compatible platform. It also includes the administrative commands for managing a cluster under the 'adm' subcommand. Usage: oc [flags] Basic Commands: login Log in to a server new-project Request a new project new-app Create a new application", "oc create --help", "Create a resource by filename or stdin JSON and YAML formats are accepted. Usage: oc create -f FILENAME [flags]", "oc explain pods", "KIND: Pod VERSION: v1 DESCRIPTION: Pod is a collection of containers that can run on a host. This resource is created by clients and scheduled onto hosts. 
FIELDS: apiVersion <string> APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#resources", "oc logout", "Logged \"user1\" out on \"https://openshift.example.com\"", "oc completion bash > oc_bash_completion", "sudo cp oc_bash_completion /etc/bash_completion.d/", "cat >>~/.zshrc<<EOF autoload -Uz compinit compinit if [ USDcommands[oc] ]; then source <(oc completion zsh) compdef _oc oc fi EOF", "apiVersion: v1 clusters: 1 - cluster: insecure-skip-tls-verify: true server: https://openshift1.example.com:8443 name: openshift1.example.com:8443 - cluster: insecure-skip-tls-verify: true server: https://openshift2.example.com:8443 name: openshift2.example.com:8443 contexts: 2 - context: cluster: openshift1.example.com:8443 namespace: alice-project user: alice/openshift1.example.com:8443 name: alice-project/openshift1.example.com:8443/alice - context: cluster: openshift1.example.com:8443 namespace: joe-project user: alice/openshift1.example.com:8443 name: joe-project/openshift1/alice current-context: joe-project/openshift1.example.com:8443/alice 3 kind: Config preferences: {} users: 4 - name: alice/openshift1.example.com:8443 user: token: xZHd2piv5_9vQrg-SKXRJ2Dsl9SceNJdhNTljEKTb8k", "oc status", "status In project Joe's Project (joe-project) service database (172.30.43.12:5434 -> 3306) database deploys docker.io/openshift/mysql-55-centos7:latest #1 deployed 25 minutes ago - 1 pod service frontend (172.30.159.137:5432 -> 8080) frontend deploys origin-ruby-sample:latest <- builds https://github.com/openshift/ruby-hello-world with joe-project/ruby-20-centos7:latest #1 deployed 22 minutes ago - 2 pods To see more information about a service or deployment, use 'oc describe service <name>' or 'oc describe dc <name>'. 
You can use 'oc get all' to see lists of each of the types described in this example.", "oc project", "Using project \"joe-project\" from context named \"joe-project/openshift1.example.com:8443/alice\" on server \"https://openshift1.example.com:8443\".", "oc project alice-project", "Now using project \"alice-project\" on server \"https://openshift1.example.com:8443\".", "oc login -u system:admin -n default", "oc config set-cluster <cluster_nickname> [--server=<master_ip_or_fqdn>] [--certificate-authority=<path/to/certificate/authority>] [--api-version=<apiversion>] [--insecure-skip-tls-verify=true]", "oc config set-context <context_nickname> [--cluster=<cluster_nickname>] [--user=<user_nickname>] [--namespace=<namespace>]", "oc config use-context <context_nickname>", "oc config set <property_name> <property_value>", "oc config unset <property_name>", "oc config view", "oc config view --config=<specific_filename>", "oc login https://openshift1.example.com --token=ns7yVhuRNpDM9cgzfhhxQ7bM5s7N2ZVrkZepSRf4LC0", "oc config view", "apiVersion: v1 clusters: - cluster: insecure-skip-tls-verify: true server: https://openshift1.example.com name: openshift1-example-com contexts: - context: cluster: openshift1-example-com namespace: default user: alice/openshift1-example-com name: default/openshift1-example-com/alice current-context: default/openshift1-example-com/alice kind: Config preferences: {} users: - name: alice/openshift1.example.com user: token: ns7yVhuRNpDM9cgzfhhxQ7bM5s7N2ZVrkZepSRf4LC0", "oc config set-context `oc config current-context` --namespace=<project_name>", "oc whoami -c", "#!/bin/bash optional argument handling if [[ \"USD1\" == \"version\" ]] then echo \"1.0.0\" exit 0 fi optional argument handling if [[ \"USD1\" == \"config\" ]] then echo USDKUBECONFIG exit 0 fi echo \"I am a plugin named kubectl-foo\"", "chmod +x <plugin_file>", "sudo mv <plugin_file> /usr/local/bin/.", "oc plugin list", "The following compatible plugins are available: /usr/local/bin/<plugin_file>", "oc ns", "oc krew search", "oc krew info <plugin_name>", "oc krew install <plugin_name>", "oc krew list", "oc krew upgrade <plugin_name>", "oc krew upgrade", "oc krew uninstall <plugin_name>", "Update pod 'foo' with the annotation 'description' and the value 'my frontend' # If the same annotation is set multiple times, only the last value will be applied oc annotate pods foo description='my frontend' # Update a pod identified by type and name in \"pod.json\" oc annotate -f pod.json description='my frontend' # Update pod 'foo' with the annotation 'description' and the value 'my frontend running nginx', overwriting any existing value oc annotate --overwrite pods foo description='my frontend running nginx' # Update all pods in the namespace oc annotate pods --all description='my frontend running nginx' # Update pod 'foo' only if the resource is unchanged from version 1 oc annotate pods foo description='my frontend running nginx' --resource-version=1 # Update pod 'foo' by removing an annotation named 'description' if it exists # Does not require the --overwrite flag oc annotate pods foo description-", "Print the supported API resources oc api-resources # Print the supported API resources with more information oc api-resources -o wide # Print the supported API resources sorted by a column oc api-resources --sort-by=name # Print the supported namespaced resources oc api-resources --namespaced=true # Print the supported non-namespaced resources oc api-resources --namespaced=false # Print the supported API resources with a 
specific APIGroup oc api-resources --api-group=rbac.authorization.k8s.io", "Print the supported API versions oc api-versions", "Apply the configuration in pod.json to a pod oc apply -f ./pod.json # Apply resources from a directory containing kustomization.yaml - e.g. dir/kustomization.yaml oc apply -k dir/ # Apply the JSON passed into stdin to a pod cat pod.json | oc apply -f - # Apply the configuration from all files that end with '.json' - i.e. expand wildcard characters in file names oc apply -f '*.json' # Note: --prune is still in Alpha # Apply the configuration in manifest.yaml that matches label app=nginx and delete all other resources that are not in the file and match label app=nginx oc apply --prune -f manifest.yaml -l app=nginx # Apply the configuration in manifest.yaml and delete all the other config maps that are not in the file oc apply --prune -f manifest.yaml --all --prune-allowlist=core/v1/ConfigMap", "Edit the last-applied-configuration annotations by type/name in YAML oc apply edit-last-applied deployment/nginx # Edit the last-applied-configuration annotations by file in JSON oc apply edit-last-applied -f deploy.yaml -o json", "Set the last-applied-configuration of a resource to match the contents of a file oc apply set-last-applied -f deploy.yaml # Execute set-last-applied against each configuration file in a directory oc apply set-last-applied -f path/ # Set the last-applied-configuration of a resource to match the contents of a file; will create the annotation if it does not already exist oc apply set-last-applied -f deploy.yaml --create-annotation=true", "View the last-applied-configuration annotations by type/name in YAML oc apply view-last-applied deployment/nginx # View the last-applied-configuration annotations by file in JSON oc apply view-last-applied -f deploy.yaml -o json", "Get output from running pod mypod; use the 'oc.kubernetes.io/default-container' annotation # for selecting the container to be attached or the first container in the pod will be chosen oc attach mypod # Get output from ruby-container from pod mypod oc attach mypod -c ruby-container # Switch to raw terminal mode; sends stdin to 'bash' in ruby-container from pod mypod # and sends stdout/stderr from 'bash' back to the client oc attach mypod -c ruby-container -i -t # Get output from the first pod of a replica set named nginx oc attach rs/nginx", "Check to see if I can create pods in any namespace oc auth can-i create pods --all-namespaces # Check to see if I can list deployments in my current namespace oc auth can-i list deployments.apps # Check to see if service account \"foo\" of namespace \"dev\" can list pods # in the namespace \"prod\". # You must be allowed to use impersonation for the global option \"--as\". oc auth can-i list pods --as=system:serviceaccount:dev:foo -n prod # Check to see if I can do everything in my current namespace (\"*\" means all) oc auth can-i '*' '*' # Check to see if I can get the job named \"bar\" in namespace \"foo\" oc auth can-i list jobs.batch/bar -n foo # Check to see if I can read pod logs oc auth can-i get pods --subresource=log # Check to see if I can access the URL /logs/ oc auth can-i get /logs/ # List all allowed actions in namespace \"foo\" oc auth can-i --list --namespace=foo", "Reconcile RBAC resources from a file oc auth reconcile -f my-rbac-rules.yaml", "Get your subject attributes. oc auth whoami # Get your subject attributes in JSON format. 
oc auth whoami -o json", "Auto scale a deployment \"foo\", with the number of pods between 2 and 10, no target CPU utilization specified so a default autoscaling policy will be used oc autoscale deployment foo --min=2 --max=10 # Auto scale a replication controller \"foo\", with the number of pods between 1 and 5, target CPU utilization at 80% oc autoscale rc foo --max=5 --cpu-percent=80", "Cancel the build with the given name oc cancel-build ruby-build-2 # Cancel the named build and print the build logs oc cancel-build ruby-build-2 --dump-logs # Cancel the named build and create a new one with the same parameters oc cancel-build ruby-build-2 --restart # Cancel multiple builds oc cancel-build ruby-build-1 ruby-build-2 ruby-build-3 # Cancel all builds created from the 'ruby-build' build config that are in the 'new' state oc cancel-build bc/ruby-build --state=new", "Print the address of the control plane and cluster services oc cluster-info", "Dump current cluster state to stdout oc cluster-info dump # Dump current cluster state to /path/to/cluster-state oc cluster-info dump --output-directory=/path/to/cluster-state # Dump all namespaces to stdout oc cluster-info dump --all-namespaces # Dump a set of namespaces to /path/to/cluster-state oc cluster-info dump --namespaces default,kube-system --output-directory=/path/to/cluster-state", "Installing bash completion on macOS using homebrew ## If running Bash 3.2 included with macOS brew install bash-completion ## or, if running Bash 4.1+ brew install bash-completion@2 ## If oc is installed via homebrew, this should start working immediately ## If you've installed via other means, you may need add the completion to your completion directory oc completion bash > USD(brew --prefix)/etc/bash_completion.d/oc # Installing bash completion on Linux ## If bash-completion is not installed on Linux, install the 'bash-completion' package ## via your distribution's package manager. 
## Load the oc completion code for bash into the current shell source <(oc completion bash) ## Write bash completion code to a file and source it from .bash_profile oc completion bash > ~/.kube/completion.bash.inc printf \" # Kubectl shell completion source 'USDHOME/.kube/completion.bash.inc' \" >> USDHOME/.bash_profile source USDHOME/.bash_profile # Load the oc completion code for zsh[1] into the current shell source <(oc completion zsh) # Set the oc completion code for zsh[1] to autoload on startup oc completion zsh > \"USD{fpath[1]}/_oc\" # Load the oc completion code for fish[2] into the current shell oc completion fish | source # To load completions for each session, execute once: oc completion fish > ~/.config/fish/completions/oc.fish # Load the oc completion code for powershell into the current shell oc completion powershell | Out-String | Invoke-Expression # Set oc completion code for powershell to run on startup ## Save completion code to a script and execute in the profile oc completion powershell > USDHOME\\.kube\\completion.ps1 Add-Content USDPROFILE \"USDHOME\\.kube\\completion.ps1\" ## Execute completion code in the profile Add-Content USDPROFILE \"if (Get-Command oc -ErrorAction SilentlyContinue) { oc completion powershell | Out-String | Invoke-Expression }\" ## Add completion code directly to the USDPROFILE script oc completion powershell >> USDPROFILE", "Display the current-context oc config current-context", "Delete the minikube cluster oc config delete-cluster minikube", "Delete the context for the minikube cluster oc config delete-context minikube", "Delete the minikube user oc config delete-user minikube", "List the clusters that oc knows about oc config get-clusters", "List all the contexts in your kubeconfig file oc config get-contexts # Describe one context in your kubeconfig file oc config get-contexts my-context", "List the users that oc knows about oc config get-users", "Generate a new admin kubeconfig oc config new-admin-kubeconfig", "Generate a new kubelet bootstrap kubeconfig oc config new-kubelet-bootstrap-kubeconfig", "Refresh the CA bundle for the current context's cluster oc config refresh-ca-bundle # Refresh the CA bundle for the cluster named e2e in your kubeconfig oc config refresh-ca-bundle e2e # Print the CA bundle from the current OpenShift cluster's apiserver. 
oc config refresh-ca-bundle --dry-run", "Rename the context 'old-name' to 'new-name' in your kubeconfig file oc config rename-context old-name new-name", "Set the server field on the my-cluster cluster to https://1.2.3.4 oc config set clusters.my-cluster.server https://1.2.3.4 # Set the certificate-authority-data field on the my-cluster cluster oc config set clusters.my-cluster.certificate-authority-data USD(echo \"cert_data_here\" | base64 -i -) # Set the cluster field in the my-context context to my-cluster oc config set contexts.my-context.cluster my-cluster # Set the client-key-data field in the cluster-admin user using --set-raw-bytes option oc config set users.cluster-admin.client-key-data cert_data_here --set-raw-bytes=true", "Set only the server field on the e2e cluster entry without touching other values oc config set-cluster e2e --server=https://1.2.3.4 # Embed certificate authority data for the e2e cluster entry oc config set-cluster e2e --embed-certs --certificate-authority=~/.kube/e2e/kubernetes.ca.crt # Disable cert checking for the e2e cluster entry oc config set-cluster e2e --insecure-skip-tls-verify=true # Set custom TLS server name to use for validation for the e2e cluster entry oc config set-cluster e2e --tls-server-name=my-cluster-name # Set proxy url for the e2e cluster entry oc config set-cluster e2e --proxy-url=https://1.2.3.4", "Set the user field on the gce context entry without touching other values oc config set-context gce --user=cluster-admin", "Set only the \"client-key\" field on the \"cluster-admin\" # entry, without touching other values oc config set-credentials cluster-admin --client-key=~/.kube/admin.key # Set basic auth for the \"cluster-admin\" entry oc config set-credentials cluster-admin --username=admin --password=uXFGweU9l35qcif # Embed client certificate data in the \"cluster-admin\" entry oc config set-credentials cluster-admin --client-certificate=~/.kube/admin.crt --embed-certs=true # Enable the Google Compute Platform auth provider for the \"cluster-admin\" entry oc config set-credentials cluster-admin --auth-provider=gcp # Enable the OpenID Connect auth provider for the \"cluster-admin\" entry with additional args oc config set-credentials cluster-admin --auth-provider=oidc --auth-provider-arg=client-id=foo --auth-provider-arg=client-secret=bar # Remove the \"client-secret\" config value for the OpenID Connect auth provider for the \"cluster-admin\" entry oc config set-credentials cluster-admin --auth-provider=oidc --auth-provider-arg=client-secret- # Enable new exec auth plugin for the \"cluster-admin\" entry oc config set-credentials cluster-admin --exec-command=/path/to/the/executable --exec-api-version=client.authentication.k8s.io/v1beta1 # Define new exec auth plugin args for the \"cluster-admin\" entry oc config set-credentials cluster-admin --exec-arg=arg1 --exec-arg=arg2 # Create or update exec auth plugin environment variables for the \"cluster-admin\" entry oc config set-credentials cluster-admin --exec-env=key1=val1 --exec-env=key2=val2 # Remove exec auth plugin environment variables for the \"cluster-admin\" entry oc config set-credentials cluster-admin --exec-env=var-to-remove-", "Unset the current-context oc config unset current-context # Unset namespace in foo context oc config unset contexts.foo.namespace", "Use the context for the minikube cluster oc config use-context minikube", "Show merged kubeconfig settings oc config view # Show merged kubeconfig settings and raw certificate data and exposed secrets oc config view --raw 
# Get the password for the e2e user oc config view -o jsonpath='{.users[?(@.name == \"e2e\")].user.password}'", "!!!Important Note!!! # Requires that the 'tar' binary is present in your container # image. If 'tar' is not present, 'oc cp' will fail. # # For advanced use cases, such as symlinks, wildcard expansion or # file mode preservation, consider using 'oc exec'. # Copy /tmp/foo local file to /tmp/bar in a remote pod in namespace <some-namespace> tar cf - /tmp/foo | oc exec -i -n <some-namespace> <some-pod> -- tar xf - -C /tmp/bar # Copy /tmp/foo from a remote pod to /tmp/bar locally oc exec -n <some-namespace> <some-pod> -- tar cf - /tmp/foo | tar xf - -C /tmp/bar # Copy /tmp/foo_dir local directory to /tmp/bar_dir in a remote pod in the default namespace oc cp /tmp/foo_dir <some-pod>:/tmp/bar_dir # Copy /tmp/foo local file to /tmp/bar in a remote pod in a specific container oc cp /tmp/foo <some-pod>:/tmp/bar -c <specific-container> # Copy /tmp/foo local file to /tmp/bar in a remote pod in namespace <some-namespace> oc cp /tmp/foo <some-namespace>/<some-pod>:/tmp/bar # Copy /tmp/foo from a remote pod to /tmp/bar locally oc cp <some-namespace>/<some-pod>:/tmp/foo /tmp/bar", "Create a pod using the data in pod.json oc create -f ./pod.json # Create a pod based on the JSON passed into stdin cat pod.json | oc create -f - # Edit the data in registry.yaml in JSON then create the resource using the edited data oc create -f registry.yaml --edit -o json", "Create a new build oc create build myapp", "Create a cluster resource quota limited to 10 pods oc create clusterresourcequota limit-bob --project-annotation-selector=openshift.io/requester=user-bob --hard=pods=10", "Create a cluster role named \"pod-reader\" that allows user to perform \"get\", \"watch\" and \"list\" on pods oc create clusterrole pod-reader --verb=get,list,watch --resource=pods # Create a cluster role named \"pod-reader\" with ResourceName specified oc create clusterrole pod-reader --verb=get --resource=pods --resource-name=readablepod --resource-name=anotherpod # Create a cluster role named \"foo\" with API Group specified oc create clusterrole foo --verb=get,list,watch --resource=rs.apps # Create a cluster role named \"foo\" with SubResource specified oc create clusterrole foo --verb=get,list,watch --resource=pods,pods/status # Create a cluster role name \"foo\" with NonResourceURL specified oc create clusterrole \"foo\" --verb=get --non-resource-url=/logs/* # Create a cluster role name \"monitoring\" with AggregationRule specified oc create clusterrole monitoring --aggregation-rule=\"rbac.example.com/aggregate-to-monitoring=true\"", "Create a cluster role binding for user1, user2, and group1 using the cluster-admin cluster role oc create clusterrolebinding cluster-admin --clusterrole=cluster-admin --user=user1 --user=user2 --group=group1", "Create a new config map named my-config based on folder bar oc create configmap my-config --from-file=path/to/bar # Create a new config map named my-config with specified keys instead of file basenames on disk oc create configmap my-config --from-file=key1=/path/to/bar/file1.txt --from-file=key2=/path/to/bar/file2.txt # Create a new config map named my-config with key1=config1 and key2=config2 oc create configmap my-config --from-literal=key1=config1 --from-literal=key2=config2 # Create a new config map named my-config from the key=value pairs in the file oc create configmap my-config --from-file=path/to/bar # Create a new config map named my-config from an env file oc create configmap 
my-config --from-env-file=path/to/foo.env --from-env-file=path/to/bar.env", "Create a cron job oc create cronjob my-job --image=busybox --schedule=\"*/1 * * * *\" # Create a cron job with a command oc create cronjob my-job --image=busybox --schedule=\"*/1 * * * *\" -- date", "Create a deployment named my-dep that runs the busybox image oc create deployment my-dep --image=busybox # Create a deployment with a command oc create deployment my-dep --image=busybox -- date # Create a deployment named my-dep that runs the nginx image with 3 replicas oc create deployment my-dep --image=nginx --replicas=3 # Create a deployment named my-dep that runs the busybox image and expose port 5701 oc create deployment my-dep --image=busybox --port=5701", "Create an nginx deployment config named my-nginx oc create deploymentconfig my-nginx --image=nginx", "Create an identity with identity provider \"acme_ldap\" and the identity provider username \"adamjones\" oc create identity acme_ldap:adamjones", "Create a new image stream oc create imagestream mysql", "Create a new image stream tag based on an image in a remote registry oc create imagestreamtag mysql:latest --from-image=myregistry.local/mysql/mysql:5.0", "Create a single ingress called 'simple' that directs requests to foo.com/bar to svc # svc1:8080 with a tls secret \"my-cert\" oc create ingress simple --rule=\"foo.com/bar=svc1:8080,tls=my-cert\" # Create a catch all ingress of \"/path\" pointing to service svc:port and Ingress Class as \"otheringress\" oc create ingress catch-all --class=otheringress --rule=\"/path=svc:port\" # Create an ingress with two annotations: ingress.annotation1 and ingress.annotations2 oc create ingress annotated --class=default --rule=\"foo.com/bar=svc:port\" --annotation ingress.annotation1=foo --annotation ingress.annotation2=bla # Create an ingress with the same host and multiple paths oc create ingress multipath --class=default --rule=\"foo.com/=svc:port\" --rule=\"foo.com/admin/=svcadmin:portadmin\" # Create an ingress with multiple hosts and the pathType as Prefix oc create ingress ingress1 --class=default --rule=\"foo.com/path*=svc:8080\" --rule=\"bar.com/admin*=svc2:http\" # Create an ingress with TLS enabled using the default ingress certificate and different path types oc create ingress ingtls --class=default --rule=\"foo.com/=svc:https,tls\" --rule=\"foo.com/path/subpath*=othersvc:8080\" # Create an ingress with TLS enabled using a specific secret and pathType as Prefix oc create ingress ingsecret --class=default --rule=\"foo.com/*=svc:8080,tls=secret1\" # Create an ingress with a default backend oc create ingress ingdefault --class=default --default-backend=defaultsvc:http --rule=\"foo.com/*=svc:8080,tls=secret1\"", "Create a job oc create job my-job --image=busybox # Create a job with a command oc create job my-job --image=busybox -- date # Create a job from a cron job named \"a-cronjob\" oc create job test-job --from=cronjob/a-cronjob", "Create a new namespace named my-namespace oc create namespace my-namespace", "Create a pod disruption budget named my-pdb that will select all pods with the app=rails label # and require at least one of them being available at any point in time oc create poddisruptionbudget my-pdb --selector=app=rails --min-available=1 # Create a pod disruption budget named my-pdb that will select all pods with the app=nginx label # and require at least half of the pods selected to be available at any point in time oc create pdb my-pdb --selector=app=nginx --min-available=50%", "Create a priority 
class named high-priority oc create priorityclass high-priority --value=1000 --description=\"high priority\" # Create a priority class named default-priority that is considered as the global default priority oc create priorityclass default-priority --value=1000 --global-default=true --description=\"default priority\" # Create a priority class named high-priority that cannot preempt pods with lower priority oc create priorityclass high-priority --value=1000 --description=\"high priority\" --preemption-policy=\"Never\"", "Create a new resource quota named my-quota oc create quota my-quota --hard=cpu=1,memory=1G,pods=2,services=3,replicationcontrollers=2,resourcequotas=1,secrets=5,persistentvolumeclaims=10 # Create a new resource quota named best-effort oc create quota best-effort --hard=pods=100 --scopes=BestEffort", "Create a role named \"pod-reader\" that allows user to perform \"get\", \"watch\" and \"list\" on pods oc create role pod-reader --verb=get --verb=list --verb=watch --resource=pods # Create a role named \"pod-reader\" with ResourceName specified oc create role pod-reader --verb=get --resource=pods --resource-name=readablepod --resource-name=anotherpod # Create a role named \"foo\" with API Group specified oc create role foo --verb=get,list,watch --resource=rs.apps # Create a role named \"foo\" with SubResource specified oc create role foo --verb=get,list,watch --resource=pods,pods/status", "Create a role binding for user1, user2, and group1 using the admin cluster role oc create rolebinding admin --clusterrole=admin --user=user1 --user=user2 --group=group1 # Create a role binding for serviceaccount monitoring:sa-dev using the admin role oc create rolebinding admin-binding --role=admin --serviceaccount=monitoring:sa-dev", "Create an edge route named \"my-route\" that exposes the frontend service oc create route edge my-route --service=frontend # Create an edge route that exposes the frontend service and specify a path # If the route name is omitted, the service name will be used oc create route edge --service=frontend --path /assets", "Create a passthrough route named \"my-route\" that exposes the frontend service oc create route passthrough my-route --service=frontend # Create a passthrough route that exposes the frontend service and specify # a host name. 
If the route name is omitted, the service name will be used oc create route passthrough --service=frontend --hostname=www.example.com", "Create a route named \"my-route\" that exposes the frontend service oc create route reencrypt my-route --service=frontend --dest-ca-cert cert.cert # Create a reencrypt route that exposes the frontend service, letting the # route name default to the service name and the destination CA certificate # default to the service CA oc create route reencrypt --service=frontend", "If you don't already have a .dockercfg file, you can create a dockercfg secret directly by using: oc create secret docker-registry my-secret --docker-server=DOCKER_REGISTRY_SERVER --docker-username=DOCKER_USER --docker-password=DOCKER_PASSWORD --docker-email=DOCKER_EMAIL # Create a new secret named my-secret from ~/.docker/config.json oc create secret docker-registry my-secret --from-file=.dockerconfigjson=path/to/.docker/config.json", "Create a new secret named my-secret with keys for each file in folder bar oc create secret generic my-secret --from-file=path/to/bar # Create a new secret named my-secret with specified keys instead of names on disk oc create secret generic my-secret --from-file=ssh-privatekey=path/to/id_rsa --from-file=ssh-publickey=path/to/id_rsa.pub # Create a new secret named my-secret with key1=supersecret and key2=topsecret oc create secret generic my-secret --from-literal=key1=supersecret --from-literal=key2=topsecret # Create a new secret named my-secret using a combination of a file and a literal oc create secret generic my-secret --from-file=ssh-privatekey=path/to/id_rsa --from-literal=passphrase=topsecret # Create a new secret named my-secret from env files oc create secret generic my-secret --from-env-file=path/to/foo.env --from-env-file=path/to/bar.env", "Create a new TLS secret named tls-secret with the given key pair oc create secret tls tls-secret --cert=path/to/tls.cert --key=path/to/tls.key", "Create a new ClusterIP service named my-cs oc create service clusterip my-cs --tcp=5678:8080 # Create a new ClusterIP service named my-cs (in headless mode) oc create service clusterip my-cs --clusterip=\"None\"", "Create a new ExternalName service named my-ns oc create service externalname my-ns --external-name bar.com", "Create a new LoadBalancer service named my-lbs oc create service loadbalancer my-lbs --tcp=5678:8080", "Create a new NodePort service named my-ns oc create service nodeport my-ns --tcp=5678:8080", "Create a new service account named my-service-account oc create serviceaccount my-service-account", "Request a token to authenticate to the kube-apiserver as the service account \"myapp\" in the current namespace oc create token myapp # Request a token for a service account in a custom namespace oc create token myapp --namespace myns # Request a token with a custom expiration oc create token myapp --duration 10m # Request a token with a custom audience oc create token myapp --audience https://example.com # Request a token bound to an instance of a Secret object oc create token myapp --bound-object-kind Secret --bound-object-name mysecret # Request a token bound to an instance of a Secret object with a specific uid oc create token myapp --bound-object-kind Secret --bound-object-name mysecret --bound-object-uid 0d4691ed-659b-4935-a832-355f77ee47cc", "Create a user with the username \"ajones\" and the display name \"Adam Jones\" oc create user ajones --full-name=\"Adam Jones\"", "Map the identity \"acme_ldap:adamjones\" to the user \"ajones\" oc create 
useridentitymapping acme_ldap:adamjones ajones", "Start a shell session into a pod using the OpenShift tools image oc debug # Debug a currently running deployment by creating a new pod oc debug deploy/test # Debug a node as an administrator oc debug node/master-1 # Launch a shell in a pod using the provided image stream tag oc debug istag/mysql:latest -n openshift # Test running a job as a non-root user oc debug job/test --as-user=1000000 # Debug a specific failing container by running the env command in the 'second' container oc debug daemonset/test -c second -- /bin/env # See the pod that would be created to debug oc debug mypod-9xbc -o yaml # Debug a resource but launch the debug pod in another namespace # Note: Not all resources can be debugged using --to-namespace without modification. For example, # volumes and service accounts are namespace-dependent. Add '-o yaml' to output the debug pod definition # to disk. If necessary, edit the definition then run 'oc debug -f -' or run without --to-namespace oc debug mypod-9xbc --to-namespace testns", "Delete a pod using the type and name specified in pod.json oc delete -f ./pod.json # Delete resources from a directory containing kustomization.yaml - e.g. dir/kustomization.yaml oc delete -k dir # Delete resources from all files that end with '.json' - i.e. expand wildcard characters in file names oc delete -f '*.json' # Delete a pod based on the type and name in the JSON passed into stdin cat pod.json | oc delete -f - # Delete pods and services with same names \"baz\" and \"foo\" oc delete pod,service baz foo # Delete pods and services with label name=myLabel oc delete pods,services -l name=myLabel # Delete a pod with minimal delay oc delete pod foo --now # Force delete a pod on a dead node oc delete pod foo --force # Delete all pods oc delete pods --all", "Describe a node oc describe nodes kubernetes-node-emt8.c.myproject.internal # Describe a pod oc describe pods/nginx # Describe a pod identified by type and name in \"pod.json\" oc describe -f pod.json # Describe all pods oc describe pods # Describe pods by label name=myLabel oc describe po -l name=myLabel # Describe all pods managed by the 'frontend' replication controller # (rc-created pods get the name of the rc as a prefix in the pod name) oc describe pods frontend", "Diff resources included in pod.json oc diff -f pod.json # Diff file read from stdin cat service.yaml | oc diff -f -", "Edit the service named 'registry' oc edit svc/registry # Use an alternative editor KUBE_EDITOR=\"nano\" oc edit svc/registry # Edit the job 'myjob' in JSON using the v1 API format oc edit job.v1.batch/myjob -o json # Edit the deployment 'mydeployment' in YAML and save the modified config in its annotation oc edit deployment/mydeployment -o yaml --save-config # Edit the deployment/mydeployment's status subresource oc edit deployment mydeployment --subresource='status'", "List recent events in the default namespace. oc events # List recent events in all namespaces. oc events --all-namespaces # List recent events for the specified pod, then wait for more events and list them as they arrive. oc events --for pod/web-pod-13je7 --watch # List recent events in given format. Supported ones, apart from default, are json and yaml. 
oc events -oyaml # List recent only events in given event types oc events --types=Warning,Normal", "Get output from running the 'date' command from pod mypod, using the first container by default oc exec mypod -- date # Get output from running the 'date' command in ruby-container from pod mypod oc exec mypod -c ruby-container -- date # Switch to raw terminal mode; sends stdin to 'bash' in ruby-container from pod mypod # and sends stdout/stderr from 'bash' back to the client oc exec mypod -c ruby-container -i -t -- bash -il # List contents of /usr from the first container of pod mypod and sort by modification time # If the command you want to execute in the pod has any flags in common (e.g. -i), # you must use two dashes (--) to separate your command's flags/arguments # Also note, do not surround your command and its flags/arguments with quotes # unless that is how you would execute it normally (i.e., do ls -t /usr, not \"ls -t /usr\") oc exec mypod -i -t -- ls -t /usr # Get output from running 'date' command from the first pod of the deployment mydeployment, using the first container by default oc exec deploy/mydeployment -- date # Get output from running 'date' command from the first pod of the service myservice, using the first container by default oc exec svc/myservice -- date", "Get the documentation of the resource and its fields oc explain pods # Get the documentation of a specific field of a resource oc explain pods.spec.containers", "Create a route based on service nginx. The new route will reuse nginx's labels oc expose service nginx # Create a route and specify your own label and route name oc expose service nginx -l name=myroute --name=fromdowntown # Create a route and specify a host name oc expose service nginx --hostname=www.example.com # Create a route with a wildcard oc expose service nginx --hostname=x.example.com --wildcard-policy=Subdomain # This would be equivalent to *.example.com. NOTE: only hosts are matched by the wildcard; subdomains would not be included # Expose a deployment configuration as a service and use the specified port oc expose dc ruby-hello-world --port=8080 # Expose a service as a route in the specified path oc expose service nginx --path=/nginx", "Extract the secret \"test\" to the current directory oc extract secret/test # Extract the config map \"nginx\" to the /tmp directory oc extract configmap/nginx --to=/tmp # Extract the config map \"nginx\" to STDOUT oc extract configmap/nginx --to=- # Extract only the key \"nginx.conf\" from config map \"nginx\" to the /tmp directory oc extract configmap/nginx --to=/tmp --keys=nginx.conf", "List all pods in ps output format oc get pods # List all pods in ps output format with more information (such as node name) oc get pods -o wide # List a single replication controller with specified NAME in ps output format oc get replicationcontroller web # List deployments in JSON output format, in the \"v1\" version of the \"apps\" API group oc get deployments.v1.apps -o json # List a single pod in JSON output format oc get -o json pod web-pod-13je7 # List a pod identified by type and name specified in \"pod.yaml\" in JSON output format oc get -f pod.yaml -o json # List resources from a directory with kustomization.yaml - e.g. 
dir/kustomization.yaml oc get -k dir/ # Return only the phase value of the specified pod oc get -o template pod/web-pod-13je7 --template={{.status.phase}} # List resource information in custom columns oc get pod test-pod -o custom-columns=CONTAINER:.spec.containers[0].name,IMAGE:.spec.containers[0].image # List all replication controllers and services together in ps output format oc get rc,services # List one or more resources by their type and names oc get rc/web service/frontend pods/web-pod-13je7 # List status subresource for a single pod. oc get pod web-pod-13je7 --subresource status", "Idle the scalable controllers associated with the services listed in to-idle.txt USD oc idle --resource-names-file to-idle.txt", "Remove the entrypoint on the mysql:latest image oc image append --from mysql:latest --to myregistry.com/myimage:latest --image '{\"Entrypoint\":null}' # Add a new layer to the image oc image append --from mysql:latest --to myregistry.com/myimage:latest layer.tar.gz # Add a new layer to the image and store the result on disk # This results in USD(pwd)/v2/mysql/blobs,manifests oc image append --from mysql:latest --to file://mysql:local layer.tar.gz # Add a new layer to the image and store the result on disk in a designated directory # This will result in USD(pwd)/mysql-local/v2/mysql/blobs,manifests oc image append --from mysql:latest --to file://mysql:local --dir mysql-local layer.tar.gz # Add a new layer to an image that is stored on disk (~/mysql-local/v2/image exists) oc image append --from-dir ~/mysql-local --to myregistry.com/myimage:latest layer.tar.gz # Add a new layer to an image that was mirrored to the current directory on disk (USD(pwd)/v2/image exists) oc image append --from-dir v2 --to myregistry.com/myimage:latest layer.tar.gz # Add a new layer to a multi-architecture image for an os/arch that is different from the system's os/arch # Note: The first image in the manifest list that matches the filter will be returned when --keep-manifest-list is not specified oc image append --from docker.io/library/busybox:latest --filter-by-os=linux/s390x --to myregistry.com/myimage:latest layer.tar.gz # Add a new layer to a multi-architecture image for all the os/arch manifests when keep-manifest-list is specified oc image append --from docker.io/library/busybox:latest --keep-manifest-list --to myregistry.com/myimage:latest layer.tar.gz # Add a new layer to a multi-architecture image for all the os/arch manifests that is specified by the filter, while preserving the manifestlist oc image append --from docker.io/library/busybox:latest --filter-by-os=linux/s390x --keep-manifest-list --to myregistry.com/myimage:latest layer.tar.gz", "Extract the busybox image into the current directory oc image extract docker.io/library/busybox:latest # Extract the busybox image into a designated directory (must exist) oc image extract docker.io/library/busybox:latest --path /:/tmp/busybox # Extract the busybox image into the current directory for linux/s390x platform # Note: Wildcard filter is not supported with extract; pass a single os/arch to extract oc image extract docker.io/library/busybox:latest --filter-by-os=linux/s390x # Extract a single file from the image into the current directory oc image extract docker.io/library/centos:7 --path /bin/bash:. # Extract all .repo files from the image's /etc/yum.repos.d/ folder into the current directory oc image extract docker.io/library/centos:7 --path /etc/yum.repos.d/*.repo:. 
# Extract all .repo files from the image's /etc/yum.repos.d/ folder into a designated directory (must exist) # This results in /tmp/yum.repos.d/*.repo on local system oc image extract docker.io/library/centos:7 --path /etc/yum.repos.d/*.repo:/tmp/yum.repos.d # Extract an image stored on disk into the current directory (USD(pwd)/v2/busybox/blobs,manifests exists) # --confirm is required because the current directory is not empty oc image extract file://busybox:local --confirm # Extract an image stored on disk in a directory other than USD(pwd)/v2 into the current directory # --confirm is required because the current directory is not empty (USD(pwd)/busybox-mirror-dir/v2/busybox exists) oc image extract file://busybox:local --dir busybox-mirror-dir --confirm # Extract an image stored on disk in a directory other than USD(pwd)/v2 into a designated directory (must exist) oc image extract file://busybox:local --dir busybox-mirror-dir --path /:/tmp/busybox # Extract the last layer in the image oc image extract docker.io/library/centos:7[-1] # Extract the first three layers of the image oc image extract docker.io/library/centos:7[:3] # Extract the last three layers of the image oc image extract docker.io/library/centos:7[-3:]", "Show information about an image oc image info quay.io/openshift/cli:latest # Show information about images matching a wildcard oc image info quay.io/openshift/cli:4.* # Show information about a file mirrored to disk under DIR oc image info --dir=DIR file://library/busybox:latest # Select which image from a multi-OS image to show oc image info library/busybox:latest --filter-by-os=linux/arm64", "Copy image to another tag oc image mirror myregistry.com/myimage:latest myregistry.com/myimage:stable # Copy image to another registry oc image mirror myregistry.com/myimage:latest docker.io/myrepository/myimage:stable # Copy all tags starting with mysql to the destination repository oc image mirror myregistry.com/myimage:mysql* docker.io/myrepository/myimage # Copy image to disk, creating a directory structure that can be served as a registry oc image mirror myregistry.com/myimage:latest file://myrepository/myimage:latest # Copy image to S3 (pull from <bucket>.s3.amazonaws.com/image:latest) oc image mirror myregistry.com/myimage:latest s3://s3.amazonaws.com/<region>/<bucket>/image:latest # Copy image to S3 without setting a tag (pull via @<digest>) oc image mirror myregistry.com/myimage:latest s3://s3.amazonaws.com/<region>/<bucket>/image # Copy image to multiple locations oc image mirror myregistry.com/myimage:latest docker.io/myrepository/myimage:stable docker.io/myrepository/myimage:dev # Copy multiple images oc image mirror myregistry.com/myimage:latest=myregistry.com/other:test myregistry.com/myimage:new=myregistry.com/other:target # Copy manifest list of a multi-architecture image, even if only a single image is found oc image mirror myregistry.com/myimage:latest=myregistry.com/other:test --keep-manifest-list=true # Copy specific os/arch manifest of a multi-architecture image # Run 'oc image info myregistry.com/myimage:latest' to see available os/arch for multi-arch images # Note that with multi-arch images, this results in a new manifest list digest that includes only # the filtered manifests oc image mirror myregistry.com/myimage:latest=myregistry.com/other:test --filter-by-os=os/arch # Copy all os/arch manifests of a multi-architecture image # Run 'oc image info myregistry.com/myimage:latest' to see list of os/arch manifests that will be mirrored oc image mirror 
myregistry.com/myimage:latest=myregistry.com/other:test --keep-manifest-list=true # Note the above command is equivalent to oc image mirror myregistry.com/myimage:latest=myregistry.com/other:test --filter-by-os=.* # Copy specific os/arch manifest of a multi-architecture image # Run 'oc image info myregistry.com/myimage:latest' to see available os/arch for multi-arch images # Note that the target registry may reject a manifest list if the platform specific images do not all # exist. You must use a registry with sparse registry support enabled. oc image mirror myregistry.com/myimage:latest=myregistry.com/other:test --filter-by-os=os/arch --keep-manifest-list=true", "Import tag latest into a new image stream oc import-image mystream --from=registry.io/repo/image:latest --confirm # Update imported data for tag latest in an already existing image stream oc import-image mystream # Update imported data for tag stable in an already existing image stream oc import-image mystream:stable # Update imported data for all tags in an existing image stream oc import-image mystream --all # Update imported data for a tag that points to a manifest list to include the full manifest list oc import-image mystream --import-mode=PreserveOriginal # Import all tags into a new image stream oc import-image mystream --from=registry.io/repo/image --all --confirm # Import all tags into a new image stream using a custom timeout oc --request-timeout=5m import-image mystream --from=registry.io/repo/image --all --confirm", "Build the current working directory oc kustomize # Build some shared configuration directory oc kustomize /home/config/production # Build from github oc kustomize https://github.com/kubernetes-sigs/kustomize.git/examples/helloWorld?ref=v1.0.6", "Update pod 'foo' with the label 'unhealthy' and the value 'true' oc label pods foo unhealthy=true # Update pod 'foo' with the label 'status' and the value 'unhealthy', overwriting any existing value oc label --overwrite pods foo status=unhealthy # Update all pods in the namespace oc label pods --all status=unhealthy # Update a pod identified by the type and name in \"pod.json\" oc label -f pod.json status=unhealthy # Update pod 'foo' only if the resource is unchanged from version 1 oc label pods foo status=unhealthy --resource-version=1 # Update pod 'foo' by removing a label named 'bar' if it exists # Does not require the --overwrite flag oc label pods foo bar-", "Log in interactively oc login --username=myuser # Log in to the given server with the given certificate authority file oc login localhost:8443 --certificate-authority=/path/to/cert.crt # Log in to the given server with the given credentials (will not prompt interactively) oc login localhost:8443 --username=myuser --password=mypass # Log in to the given server through a browser oc login localhost:8443 --web --callback-port 8280", "Log out oc logout", "Start streaming the logs of the most recent build of the openldap build config oc logs -f bc/openldap # Start streaming the logs of the latest deployment of the mysql deployment config oc logs -f dc/mysql # Get the logs of the first deployment for the mysql deployment config. 
Note that logs # from older deployments may not exist either because the deployment was successful # or due to deployment pruning or manual deletion of the deployment oc logs --version=1 dc/mysql # Return a snapshot of ruby-container logs from pod backend oc logs backend -c ruby-container # Start streaming of ruby-container logs from pod backend oc logs -f pod/backend -c ruby-container", "List all local templates and image streams that can be used to create an app oc new-app --list # Create an application based on the source code in the current git repository (with a public remote) and a container image oc new-app . --image=registry/repo/langimage # Create an application myapp with Docker based build strategy expecting binary input oc new-app --strategy=docker --binary --name myapp # Create a Ruby application based on the provided [image]~[source code] combination oc new-app centos/ruby-25-centos7~https://github.com/sclorg/ruby-ex.git # Use the public container registry MySQL image to create an app. Generated artifacts will be labeled with db=mysql oc new-app mysql MYSQL_USER=user MYSQL_PASSWORD=pass MYSQL_DATABASE=testdb -l db=mysql # Use a MySQL image in a private registry to create an app and override application artifacts' names oc new-app --image=myregistry.com/mycompany/mysql --name=private # Use an image with the full manifest list to create an app and override application artifacts' names oc new-app --image=myregistry.com/mycompany/image --name=private --import-mode=PreserveOriginal # Create an application from a remote repository using its beta4 branch oc new-app https://github.com/openshift/ruby-hello-world#beta4 # Create an application based on a stored template, explicitly setting a parameter value oc new-app --template=ruby-helloworld-sample --param=MYSQL_USER=admin # Create an application from a remote repository and specify a context directory oc new-app https://github.com/youruser/yourgitrepo --context-dir=src/build # Create an application from a remote private repository and specify which existing secret to use oc new-app https://github.com/youruser/yourgitrepo --source-secret=yoursecret # Create an application based on a template file, explicitly setting a parameter value oc new-app --file=./example/myapp/template.json --param=MYSQL_USER=admin # Search all templates, image streams, and container images for the ones that match \"ruby\" oc new-app --search ruby # Search for \"ruby\", but only in stored templates (--template, --image-stream and --image # can be used to filter search results) oc new-app --search --template=ruby # Search for \"ruby\" in stored templates and print the output as YAML oc new-app --search --template=ruby --output=yaml", "Create a build config based on the source code in the current git repository (with a public # remote) and a container image oc new-build . 
--image=repo/langimage # Create a NodeJS build config based on the provided [image]~[source code] combination oc new-build centos/nodejs-8-centos7~https://github.com/sclorg/nodejs-ex.git # Create a build config from a remote repository using its beta2 branch oc new-build https://github.com/openshift/ruby-hello-world#beta2 # Create a build config using a Dockerfile specified as an argument oc new-build -D $'FROM centos:7\\nRUN yum install -y httpd' # Create a build config from a remote repository and add custom environment variables oc new-build https://github.com/openshift/ruby-hello-world -e RACK_ENV=development # Create a build config from a remote private repository and specify which existing secret to use oc new-build https://github.com/youruser/yourgitrepo --source-secret=yoursecret # Create a build config using an image with the full manifest list to create an app and override application artifacts' names oc new-build --image=myregistry.com/mycompany/image --name=private --import-mode=PreserveOriginal # Create a build config from a remote repository and inject the npmrc into a build oc new-build https://github.com/openshift/ruby-hello-world --build-secret npmrc:.npmrc # Create a build config from a remote repository and inject environment data into a build oc new-build https://github.com/openshift/ruby-hello-world --build-config-map env:config # Create a build config that gets its input from a remote repository and another container image oc new-build https://github.com/openshift/ruby-hello-world --source-image=openshift/jenkins-1-centos7 --source-image-path=/var/lib/jenkins:tmp", "Create a new project with minimal information oc new-project web-team-dev # Create a new project with a display name and description oc new-project web-team-dev --display-name=\"Web Team Development\" --description=\"Development project for the web team.\"", "Observe changes to services oc observe services # Observe changes to services, including the clusterIP and invoke a script for each oc observe services --template '{ .spec.clusterIP }' -- register_dns.sh # Observe changes to services filtered by a label selector oc observe services -l regist-dns=true --template '{ .spec.clusterIP }' -- register_dns.sh", "Partially update a node using a strategic merge patch, specifying the patch as JSON oc patch node k8s-node-1 -p '{\"spec\":{\"unschedulable\":true}}' # Partially update a node using a strategic merge patch, specifying the patch as YAML oc patch node k8s-node-1 -p $'spec:\\n unschedulable: true' # Partially update a node identified by the type and name specified in \"node.json\" using strategic merge patch oc patch -f node.json -p '{\"spec\":{\"unschedulable\":true}}' # Update a container's image; spec.containers[*].name is required because it's a merge key oc patch pod valid-pod -p '{\"spec\":{\"containers\":[{\"name\":\"kubernetes-serve-hostname\",\"image\":\"new image\"}]}}' # Update a container's image using a JSON patch with positional arrays oc patch pod valid-pod --type='json' -p='[{\"op\": \"replace\", \"path\": \"/spec/containers/0/image\", \"value\":\"new image\"}]' # Update a deployment's replicas through the scale subresource using a merge patch.
oc patch deployment nginx-deployment --subresource='scale' --type='merge' -p '{\"spec\":{\"replicas\":2}}'", "List all available plugins oc plugin list", "Add the 'view' role to user1 for the current project oc policy add-role-to-user view user1 # Add the 'edit' role to serviceaccount1 for the current project oc policy add-role-to-user edit -z serviceaccount1", "Check whether service accounts sa1 and sa2 can admit a pod with a template pod spec specified in my_resource.yaml # Service Account specified in myresource.yaml file is ignored oc policy scc-review -z sa1,sa2 -f my_resource.yaml # Check whether service accounts system:serviceaccount:bob:default can admit a pod with a template pod spec specified in my_resource.yaml oc policy scc-review -z system:serviceaccount:bob:default -f my_resource.yaml # Check whether the service account specified in my_resource_with_sa.yaml can admit the pod oc policy scc-review -f my_resource_with_sa.yaml # Check whether the default service account can admit the pod; default is taken since no service account is defined in myresource_with_no_sa.yaml oc policy scc-review -f myresource_with_no_sa.yaml", "Check whether user bob can create a pod specified in myresource.yaml oc policy scc-subject-review -u bob -f myresource.yaml # Check whether user bob who belongs to projectAdmin group can create a pod specified in myresource.yaml oc policy scc-subject-review -u bob -g projectAdmin -f myresource.yaml # Check whether a service account specified in the pod template spec in myresourcewithsa.yaml can create the pod oc policy scc-subject-review -f myresourcewithsa.yaml", "Listen on ports 5000 and 6000 locally, forwarding data to/from ports 5000 and 6000 in the pod oc port-forward pod/mypod 5000 6000 # Listen on ports 5000 and 6000 locally, forwarding data to/from ports 5000 and 6000 in a pod selected by the deployment oc port-forward deployment/mydeployment 5000 6000 # Listen on port 8443 locally, forwarding to the targetPort of the service's port named \"https\" in a pod selected by the service oc port-forward service/myservice 8443:https # Listen on port 8888 locally, forwarding to 5000 in the pod oc port-forward pod/mypod 8888:5000 # Listen on port 8888 on all addresses, forwarding to 5000 in the pod oc port-forward --address 0.0.0.0 pod/mypod 8888:5000 # Listen on port 8888 on localhost and selected IP, forwarding to 5000 in the pod oc port-forward --address localhost,10.19.21.23 pod/mypod 8888:5000 # Listen on a random port locally, forwarding to 5000 in the pod oc port-forward pod/mypod :5000", "Convert the template.json file into a resource list and pass to create oc process -f template.json | oc create -f - # Process a file locally instead of contacting the server oc process -f template.json --local -o yaml # Process template while passing a user-defined label oc process -f template.json -l name=mytemplate # Convert a stored template into a resource list oc process foo # Convert a stored template into a resource list by setting/overriding parameter values oc process foo PARM1=VALUE1 PARM2=VALUE2 # Convert a template stored in different namespace into a resource list oc process openshift//foo # Convert template.json into a resource list cat template.json | oc process -f -", "Switch to the 'myapp' project oc project myapp # Display the project currently in use oc project", "List all projects oc projects", "To proxy all of the Kubernetes API and nothing else oc proxy --api-prefix=/ # To proxy only part of the Kubernetes API and also some static files # You can 
get pods info with 'curl localhost:8001/api/v1/pods' oc proxy --www=/my/files --www-prefix=/static/ --api-prefix=/api/ # To proxy the entire Kubernetes API at a different root # You can get pods info with 'curl localhost:8001/custom/api/v1/pods' oc proxy --api-prefix=/custom/ # Run a proxy to the Kubernetes API server on port 8011, serving static content from ./local/www/ oc proxy --port=8011 --www=./local/www/ # Run a proxy to the Kubernetes API server on an arbitrary local port # The chosen port for the server will be output to stdout oc proxy --port=0 # Run a proxy to the Kubernetes API server, changing the API prefix to k8s-api # This makes e.g. the pods API available at localhost:8001/k8s-api/v1/pods/ oc proxy --api-prefix=/k8s-api", "Display information about the integrated registry oc registry info", "Log in to the integrated registry oc registry login # Log in to different registry using BASIC auth credentials oc registry login --registry quay.io/myregistry --auth-basic=USER:PASS", "Replace a pod using the data in pod.json oc replace -f ./pod.json # Replace a pod based on the JSON passed into stdin cat pod.json | oc replace -f - # Update a single-container pod's image version (tag) to v4 oc get pod mypod -o yaml | sed 's/\\(image: myimage\\):.*$/\\1:v4/' | oc replace -f - # Force replace, delete and then re-create the resource oc replace --force -f ./pod.json", "Perform a rollback to the last successfully completed deployment for a deployment config oc rollback frontend # See what a rollback to version 3 will look like, but do not perform the rollback oc rollback frontend --to-version=3 --dry-run # Perform a rollback to a specific deployment oc rollback frontend-2 # Perform the rollback manually by piping the JSON of the new config back to oc oc rollback frontend -o json | oc replace dc/frontend -f - # Print the updated deployment configuration in JSON format instead of performing the rollback oc rollback frontend -o json", "Cancel the in-progress deployment based on 'nginx' oc rollout cancel dc/nginx", "View the rollout history of a deployment oc rollout history dc/nginx # View the details of deployment revision 3 oc rollout history dc/nginx --revision=3", "Start a new rollout based on the latest images defined in the image change triggers oc rollout latest dc/nginx # Print the rolled out deployment config oc rollout latest dc/nginx -o json", "Mark the nginx deployment as paused. Any current state of # the deployment will continue its function, new updates to the deployment will not # have an effect as long as the deployment is paused oc rollout pause dc/nginx", "Restart a deployment oc rollout restart deployment/nginx # Restart a daemon set oc rollout restart daemonset/abc # Restart deployments with the app=nginx label oc rollout restart deployment --selector=app=nginx", "Resume an already paused deployment oc rollout resume dc/nginx", "Retry the latest failed deployment based on 'frontend' # The deployer pod and any hook pods are deleted for the latest failed deployment oc rollout retry dc/frontend", "Watch the status of the latest rollout oc rollout status dc/nginx", "Roll back to the previous deployment oc rollout undo dc/nginx # Roll back to deployment revision 3.
The replication controller for that version must exist oc rollout undo dc/nginx --to-revision=3", "Open a shell session on the first container in pod 'foo' oc rsh foo # Open a shell session on the first container in pod 'foo' and namespace 'bar' # (Note that oc client specific arguments must come before the resource name and its arguments) oc rsh -n bar foo # Run the command 'cat /etc/resolv.conf' inside pod 'foo' oc rsh foo cat /etc/resolv.conf # See the configuration of your internal registry oc rsh dc/docker-registry cat config.yml # Open a shell session on the container named 'index' inside a pod of your job oc rsh -c index job/scheduled", "Synchronize a local directory with a pod directory oc rsync ./local/dir/ POD:/remote/dir # Synchronize a pod directory with a local directory oc rsync POD:/remote/dir/ ./local/dir", "Start a nginx pod oc run nginx --image=nginx # Start a hazelcast pod and let the container expose port 5701 oc run hazelcast --image=hazelcast/hazelcast --port=5701 # Start a hazelcast pod and set environment variables \"DNS_DOMAIN=cluster\" and \"POD_NAMESPACE=default\" in the container oc run hazelcast --image=hazelcast/hazelcast --env=\"DNS_DOMAIN=cluster\" --env=\"POD_NAMESPACE=default\" # Start a hazelcast pod and set labels \"app=hazelcast\" and \"env=prod\" in the container oc run hazelcast --image=hazelcast/hazelcast --labels=\"app=hazelcast,env=prod\" # Dry run; print the corresponding API objects without creating them oc run nginx --image=nginx --dry-run=client # Start a nginx pod, but overload the spec with a partial set of values parsed from JSON oc run nginx --image=nginx --overrides='{ \"apiVersion\": \"v1\", \"spec\": { ... } }' # Start a busybox pod and keep it in the foreground, don't restart it if it exits oc run -i -t busybox --image=busybox --restart=Never # Start the nginx pod using the default command, but use custom arguments (arg1 .. argN) for that command oc run nginx --image=nginx -- <arg1> <arg2> ... <argN> # Start the nginx pod using a different command and custom arguments oc run nginx --image=nginx --command -- <cmd> <arg1> ... 
<argN>", "Scale a replica set named 'foo' to 3 oc scale --replicas=3 rs/foo # Scale a resource identified by type and name specified in \"foo.yaml\" to 3 oc scale --replicas=3 -f foo.yaml # If the deployment named mysql's current size is 2, scale mysql to 3 oc scale --current-replicas=2 --replicas=3 deployment/mysql # Scale multiple replication controllers oc scale --replicas=5 rc/foo rc/bar rc/baz # Scale stateful set named 'web' to 3 oc scale --replicas=3 statefulset/web", "Add an image pull secret to a service account to automatically use it for pulling pod images oc secrets link serviceaccount-name pull-secret --for=pull # Add an image pull secret to a service account to automatically use it for both pulling and pushing build images oc secrets link builder builder-image-secret --for=pull,mount", "Unlink a secret currently associated with a service account oc secrets unlink serviceaccount-name secret-name another-secret-name", "Clear post-commit hook on a build config oc set build-hook bc/mybuild --post-commit --remove # Set the post-commit hook to execute a test suite using a new entrypoint oc set build-hook bc/mybuild --post-commit --command -- /bin/bash -c /var/lib/test-image.sh # Set the post-commit hook to execute a shell script oc set build-hook bc/mybuild --post-commit --script=\"/var/lib/test-image.sh param1 param2 && /var/lib/done.sh\"", "Clear the push secret on a build config oc set build-secret --push --remove bc/mybuild # Set the pull secret on a build config oc set build-secret --pull bc/mybuild mysecret # Set the push and pull secret on a build config oc set build-secret --push --pull bc/mybuild mysecret # Set the source secret on a set of build configs matching a selector oc set build-secret --source -l app=myapp gitsecret", "Set the 'password' key of a secret oc set data secret/foo password=this_is_secret # Remove the 'password' key from a secret oc set data secret/foo password- # Update the 'haproxy.conf' key of a config map from a file on disk oc set data configmap/bar --from-file=../haproxy.conf # Update a secret with the contents of a directory, one key per file oc set data secret/foo --from-file=secret-dir", "Clear pre and post hooks on a deployment config oc set deployment-hook dc/myapp --remove --pre --post # Set the pre deployment hook to execute a db migration command for an application # using the data volume from the application oc set deployment-hook dc/myapp --pre --volumes=data -- /var/lib/migrate-db.sh # Set a mid deployment hook along with additional environment variables oc set deployment-hook dc/myapp --mid --volumes=data -e VAR1=value1 -e VAR2=value2 -- /var/lib/prepare-deploy.sh", "Update deployment config 'myapp' with a new environment variable oc set env dc/myapp STORAGE_DIR=/local # List the environment variables defined on a build config 'sample-build' oc set env bc/sample-build --list # List the environment variables defined on all pods oc set env pods --all --list # Output modified build config in YAML oc set env bc/sample-build STORAGE_DIR=/data -o yaml # Update all containers in all replication controllers in the project to have ENV=prod oc set env rc --all ENV=prod # Import environment from a secret oc set env --from=secret/mysecret dc/myapp # Import environment from a config map with a prefix oc set env --from=configmap/myconfigmap --prefix=MYSQL_ dc/myapp # Remove the environment variable ENV from container 'c1' in all deployment configs oc set env dc --all --containers=\"c1\" ENV- # Remove the environment variable ENV from a deployment 
config definition on disk and # update the deployment config on the server oc set env -f dc.json ENV- # Set some of the local shell environment into a deployment config on the server oc set env | grep RAILS_ | oc env -e - dc/myapp", "Set a deployment config's nginx container image to 'nginx:1.9.1', and its busybox container image to 'busybox'. oc set image dc/nginx busybox=busybox nginx=nginx:1.9.1 # Set a deployment config's app container image to the image referenced by the imagestream tag 'openshift/ruby:2.3'. oc set image dc/myapp app=openshift/ruby:2.3 --source=imagestreamtag # Update all deployments' and rc's nginx container's image to 'nginx:1.9.1' oc set image deployments,rc nginx=nginx:1.9.1 --all # Update image of all containers of daemonset abc to 'nginx:1.9.1' oc set image daemonset abc *=nginx:1.9.1 # Print result (in YAML format) of updating nginx container image from local file, without hitting the server oc set image -f path/to/file.yaml nginx=nginx:1.9.1 --local -o yaml", "Print all of the image streams and whether they resolve local names oc set image-lookup # Use local name lookup on image stream mysql oc set image-lookup mysql # Force a deployment to use local name lookup oc set image-lookup deploy/mysql # Show the current status of the deployment lookup oc set image-lookup deploy/mysql --list # Disable local name lookup on image stream mysql oc set image-lookup mysql --enabled=false # Set local name lookup on all image streams oc set image-lookup --all", "Clear both readiness and liveness probes off all containers oc set probe dc/myapp --remove --readiness --liveness # Set an exec action as a liveness probe to run 'echo ok' oc set probe dc/myapp --liveness -- echo ok # Set a readiness probe to try to open a TCP socket on 3306 oc set probe rc/mysql --readiness --open-tcp=3306 # Set an HTTP startup probe for port 8080 and path /healthz over HTTP on the pod IP oc set probe dc/webapp --startup --get-url=http://:8080/healthz # Set an HTTP readiness probe for port 8080 and path /healthz over HTTP on the pod IP oc set probe dc/webapp --readiness --get-url=http://:8080/healthz # Set an HTTP readiness probe over HTTPS on 127.0.0.1 for a hostNetwork pod oc set probe dc/router --readiness --get-url=https://127.0.0.1:1936/stats # Set only the initial-delay-seconds field on all deployments oc set probe dc --all --readiness --initial-delay-seconds=30", "Set a deployments nginx container CPU limits to \"200m and memory to 512Mi\" oc set resources deployment nginx -c=nginx --limits=cpu=200m,memory=512Mi # Set the resource request and limits for all containers in nginx oc set resources deployment nginx --limits=cpu=200m,memory=512Mi --requests=cpu=100m,memory=256Mi # Remove the resource requests for resources on containers in nginx oc set resources deployment nginx --limits=cpu=0,memory=0 --requests=cpu=0,memory=0 # Print the result (in YAML format) of updating nginx container limits locally, without hitting the server oc set resources -f path/to/file.yaml --limits=cpu=200m,memory=512Mi --local -o yaml", "Print the backends on the route 'web' oc set route-backends web # Set two backend services on route 'web' with 2/3rds of traffic going to 'a' oc set route-backends web a=2 b=1 # Increase the traffic percentage going to b by 10%% relative to a oc set route-backends web --adjust b=+10%% # Set traffic percentage going to b to 10%% of the traffic going to a oc set route-backends web --adjust b=10%% # Set weight of b to 10 oc set route-backends web --adjust b=10 # Set the weight to all 
backends to zero oc set route-backends web --zero", "Set the labels and selector before creating a deployment/service pair. oc create service clusterip my-svc --clusterip=\"None\" -o yaml --dry-run | oc set selector --local -f - 'environment=qa' -o yaml | oc create -f - oc create deployment my-dep -o yaml --dry-run | oc label --local -f - environment=qa -o yaml | oc create -f -", "Set deployment nginx-deployment's service account to serviceaccount1 oc set serviceaccount deployment nginx-deployment serviceaccount1 # Print the result (in YAML format) of updated nginx deployment with service account from a local file, without hitting the API server oc set sa -f nginx-deployment.yaml serviceaccount1 --local --dry-run -o yaml", "Update a cluster role binding for serviceaccount1 oc set subject clusterrolebinding admin --serviceaccount=namespace:serviceaccount1 # Update a role binding for user1, user2, and group1 oc set subject rolebinding admin --user=user1 --user=user2 --group=group1 # Print the result (in YAML format) of updating role binding subjects locally, without hitting the server oc create rolebinding admin --role=admin --user=admin -o yaml --dry-run | oc set subject --local -f - --user=foo -o yaml", "Print the triggers on the deployment config 'myapp' oc set triggers dc/myapp # Set all triggers to manual oc set triggers dc/myapp --manual # Enable all automatic triggers oc set triggers dc/myapp --auto # Reset the GitHub webhook on a build to a new, generated secret oc set triggers bc/webapp --from-github oc set triggers bc/webapp --from-webhook # Remove all triggers oc set triggers bc/webapp --remove-all # Stop triggering on config change oc set triggers dc/myapp --from-config --remove # Add an image trigger to a build config oc set triggers bc/webapp --from-image=namespace1/image:latest # Add an image trigger to a stateful set on the main container oc set triggers statefulset/db --from-image=namespace1/image:latest -c main", "List volumes defined on all deployment configs in the current project oc set volume dc --all # Add a new empty dir volume to deployment config (dc) 'myapp' mounted under # /var/lib/myapp oc set volume dc/myapp --add --mount-path=/var/lib/myapp # Use an existing persistent volume claim (PVC) to overwrite an existing volume 'v1' oc set volume dc/myapp --add --name=v1 -t pvc --claim-name=pvc1 --overwrite # Remove volume 'v1' from deployment config 'myapp' oc set volume dc/myapp --remove --name=v1 # Create a new persistent volume claim that overwrites an existing volume 'v1' oc set volume dc/myapp --add --name=v1 -t pvc --claim-size=1G --overwrite # Change the mount point for volume 'v1' to /data oc set volume dc/myapp --add --name=v1 -m /data --overwrite # Modify the deployment config by removing volume mount \"v1\" from container \"c1\" # (and by removing the volume \"v1\" if no other containers have volume mounts that reference it) oc set volume dc/myapp --remove --name=v1 --containers=c1 # Add new volume based on a more complex volume source (AWS EBS, GCE PD, # Ceph, Gluster, NFS, ISCSI, ...) 
oc set volume dc/myapp --add -m /data --source=<json-string>", "Starts build from build config \"hello-world\" oc start-build hello-world # Starts build from a previous build \"hello-world-1\" oc start-build --from-build=hello-world-1 # Use the contents of a directory as build input oc start-build hello-world --from-dir=src/ # Send the contents of a Git repository to the server from tag 'v2' oc start-build hello-world --from-repo=../hello-world --commit=v2 # Start a new build for build config \"hello-world\" and watch the logs until the build # completes or fails oc start-build hello-world --follow # Start a new build for build config \"hello-world\" and wait until the build completes. It # exits with a non-zero return code if the build fails oc start-build hello-world --wait", "See an overview of the current project oc status # Export the overview of the current project in an svg file oc status -o dot | dot -T svg -o project.svg # See an overview of the current project including details for any identified issues oc status --suggest", "Tag the current image for the image stream 'openshift/ruby' and tag '2.0' into the image stream 'yourproject/ruby with tag 'tip' oc tag openshift/ruby:2.0 yourproject/ruby:tip # Tag a specific image oc tag openshift/ruby@sha256:6b646fa6bf5e5e4c7fa41056c27910e679c03ebe7f93e361e6515a9da7e258cc yourproject/ruby:tip # Tag an external container image oc tag --source=docker openshift/origin-control-plane:latest yourproject/ruby:tip # Tag an external container image and request pullthrough for it oc tag --source=docker openshift/origin-control-plane:latest yourproject/ruby:tip --reference-policy=local # Tag an external container image and include the full manifest list oc tag --source=docker openshift/origin-control-plane:latest yourproject/ruby:tip --import-mode=PreserveOriginal # Remove the specified spec tag from an image stream oc tag openshift/origin-control-plane:latest -d", "Print the OpenShift client, kube-apiserver, and openshift-apiserver version information for the current context oc version # Print the OpenShift client, kube-apiserver, and openshift-apiserver version numbers for the current context oc version --short # Print the OpenShift client version information for the current context oc version --client", "Wait for the pod \"busybox1\" to contain the status condition of type \"Ready\" oc wait --for=condition=Ready pod/busybox1 # The default value of status condition is true; you can wait for other targets after an equal delimiter (compared after Unicode simple case folding, which is a more general form of case-insensitivity): oc wait --for=condition=Ready=false pod/busybox1 # Wait for the pod \"busybox1\" to contain the status phase to be \"Running\". 
oc wait --for=jsonpath='{.status.phase}'=Running pod/busybox1 # Wait for the pod \"busybox1\" to be deleted, with a timeout of 60s, after having issued the \"delete\" command oc delete pod/busybox1 oc wait --for=delete pod/busybox1 --timeout=60s", "Display the currently authenticated user oc whoami", "Build the dependency tree for the 'latest' tag in <image-stream> oc adm build-chain <image-stream> # Build the dependency tree for the 'v2' tag in dot format and visualize it via the dot utility oc adm build-chain <image-stream>:v2 -o dot | dot -T svg -o deps.svg # Build the dependency tree across all namespaces for the specified image stream tag found in the 'test' namespace oc adm build-chain <image-stream> -n test --all", "Mirror an operator-registry image and its contents to a registry oc adm catalog mirror quay.io/my/image:latest myregistry.com # Mirror an operator-registry image and its contents to a particular namespace in a registry oc adm catalog mirror quay.io/my/image:latest myregistry.com/my-namespace # Mirror to an airgapped registry by first mirroring to files oc adm catalog mirror quay.io/my/image:latest file:///local/index oc adm catalog mirror file:///local/index/my/image:latest my-airgapped-registry.com # Configure a cluster to use a mirrored registry oc apply -f manifests/imageDigestMirrorSet.yaml # Edit the mirroring mappings and mirror with \"oc image mirror\" manually oc adm catalog mirror --manifests-only quay.io/my/image:latest myregistry.com oc image mirror -f manifests/mapping.txt # Delete all ImageDigestMirrorSets generated by oc adm catalog mirror oc delete imagedigestmirrorset -l operators.openshift.org/catalog=true", "Approve CSR 'csr-sqgzp' oc adm certificate approve csr-sqgzp", "Deny CSR 'csr-sqgzp' oc adm certificate deny csr-sqgzp", "Mark node \"foo\" as unschedulable oc adm cordon foo", "Output a bootstrap project template in YAML format to stdout oc adm create-bootstrap-project-template -o yaml", "Output a template for the error page to stdout oc adm create-error-template", "Output a template for the login page to stdout oc adm create-login-template", "Output a template for the provider selection page to stdout oc adm create-provider-selection-template", "Drain node \"foo\", even if there are pods not managed by a replication controller, replica set, job, daemon set or stateful set on it oc adm drain foo --force # As above, but abort if there are pods not managed by a replication controller, replica set, job, daemon set or stateful set, and use a grace period of 15 minutes oc adm drain foo --grace-period=900", "Add user1 and user2 to my-group oc adm groups add-users my-group user1 user2", "Add a group with no users oc adm groups new my-group # Add a group with two users oc adm groups new my-group user1 user2 # Add a group with one user and shorter output oc adm groups new my-group user1 -o name", "Prune all orphaned groups oc adm groups prune --sync-config=/path/to/ldap-sync-config.yaml --confirm # Prune all orphaned groups except the ones from the denylist file oc adm groups prune --blacklist=/path/to/denylist.txt --sync-config=/path/to/ldap-sync-config.yaml --confirm # Prune all orphaned groups from a list of specific groups specified in an allowlist file oc adm groups prune --whitelist=/path/to/allowlist.txt --sync-config=/path/to/ldap-sync-config.yaml --confirm # Prune all orphaned groups from a list of specific groups specified in a list oc adm groups prune groups/group_name groups/other_name --sync-config=/path/to/ldap-sync-config.yaml --confirm", 
"Remove user1 and user2 from my-group oc adm groups remove-users my-group user1 user2", "Sync all groups with an LDAP server oc adm groups sync --sync-config=/path/to/ldap-sync-config.yaml --confirm # Sync all groups except the ones from the blacklist file with an LDAP server oc adm groups sync --blacklist=/path/to/blacklist.txt --sync-config=/path/to/ldap-sync-config.yaml --confirm # Sync specific groups specified in an allowlist file with an LDAP server oc adm groups sync --whitelist=/path/to/allowlist.txt --sync-config=/path/to/sync-config.yaml --confirm # Sync all OpenShift groups that have been synced previously with an LDAP server oc adm groups sync --type=openshift --sync-config=/path/to/ldap-sync-config.yaml --confirm # Sync specific OpenShift groups if they have been synced previously with an LDAP server oc adm groups sync groups/group1 groups/group2 groups/group3 --sync-config=/path/to/sync-config.yaml --confirm", "Collect debugging data for the \"openshift-apiserver\" clusteroperator oc adm inspect clusteroperator/openshift-apiserver # Collect debugging data for the \"openshift-apiserver\" and \"kube-apiserver\" clusteroperators oc adm inspect clusteroperator/openshift-apiserver clusteroperator/kube-apiserver # Collect debugging data for all clusteroperators oc adm inspect clusteroperator # Collect debugging data for all clusteroperators and clusterversions oc adm inspect clusteroperators,clusterversions", "Update the imagecontentsourcepolicy.yaml file to a new imagedigestmirrorset file under the mydir directory oc adm migrate icsp imagecontentsourcepolicy.yaml --dest-dir mydir", "Perform a dry-run of updating all objects oc adm migrate template-instances # To actually perform the update, the confirm flag must be appended oc adm migrate template-instances --confirm", "Gather information using the default plug-in image and command, writing into ./must-gather.local.<rand> oc adm must-gather # Gather information with a specific local folder to copy to oc adm must-gather --dest-dir=/local/directory # Gather audit information oc adm must-gather -- /usr/bin/gather_audit_logs # Gather information using multiple plug-in images oc adm must-gather --image=quay.io/kubevirt/must-gather --image=quay.io/openshift/origin-must-gather # Gather information using a specific image stream plug-in oc adm must-gather --image-stream=openshift/must-gather:latest # Gather information using a specific image, command, and pod directory oc adm must-gather --image=my/image:tag --source-dir=/pod/directory -- myspecial-command.sh", "Create a new project using a node selector oc adm new-project myproject --node-selector='type=user-node,region=east'", "Show kubelet logs from all masters oc adm node-logs --role master -u kubelet # See what logs are available in masters in /var/log oc adm node-logs --role master --path=/ # Display cron log file from all masters oc adm node-logs --role master --path=cron", "Watch platform certificates. 
oc adm ocp-certificates monitor-certificates", "Remove only CA certificates created before a certain date from all trust bundles oc adm ocp-certificates remove-old-trust configmaps -A --all --created-before 2023-06-05T14:44:06Z", "Regenerate the MCO certs without modifying user-data secrets oc adm certificates regenerate-machine-config-server-serving-cert --update-ignition=false # Update the user-data secrets to use new MCS certs oc adm certificates update-ignition-ca-bundle-for-machine-config-server", "Provide isolation for project p1 oc adm pod-network isolate-projects <p1> # Allow all projects with label name=top-secret to have their own isolated project network oc adm pod-network isolate-projects --selector='name=top-secret'", "Allow project p2 to use project p1 network oc adm pod-network join-projects --to=<p1> <p2> # Allow all projects with label name=top-secret to use project p1 network oc adm pod-network join-projects --to=<p1> --selector='name=top-secret'", "Allow project p1 to access all pods in the cluster and vice versa oc adm pod-network make-projects-global <p1> # Allow all projects with label name=share to access all pods in the cluster and vice versa oc adm pod-network make-projects-global --selector='name=share'", "Add the 'view' role to user1 for the current project oc adm policy add-role-to-user view user1 # Add the 'edit' role to serviceaccount1 for the current project oc adm policy add-role-to-user edit -z serviceaccount1", "Add the 'restricted' security context constraint to group1 and group2 oc adm policy add-scc-to-group restricted group1 group2", "Add the 'restricted' security context constraint to user1 and user2 oc adm policy add-scc-to-user restricted user1 user2 # Add the 'privileged' security context constraint to serviceaccount1 in the current namespace oc adm policy add-scc-to-user privileged -z serviceaccount1", "Check whether service accounts sa1 and sa2 can admit a pod with a template pod spec specified in my_resource.yaml # Service Account specified in myresource.yaml file is ignored oc adm policy scc-review -z sa1,sa2 -f my_resource.yaml # Check whether service accounts system:serviceaccount:bob:default can admit a pod with a template pod spec specified in my_resource.yaml oc adm policy scc-review -z system:serviceaccount:bob:default -f my_resource.yaml # Check whether the service account specified in my_resource_with_sa.yaml can admit the pod oc adm policy scc-review -f my_resource_with_sa.yaml # Check whether the default service account can admit the pod; default is taken since no service account is defined in myresource_with_no_sa.yaml oc adm policy scc-review -f myresource_with_no_sa.yaml", "Check whether user bob can create a pod specified in myresource.yaml oc adm policy scc-subject-review -u bob -f myresource.yaml # Check whether user bob who belongs to projectAdmin group can create a pod specified in myresource.yaml oc adm policy scc-subject-review -u bob -g projectAdmin -f myresource.yaml # Check whether a service account specified in the pod template spec in myresourcewithsa.yaml can create the pod oc adm policy scc-subject-review -f myresourcewithsa.yaml", "Dry run deleting older completed and failed builds and also including # all builds whose associated build config no longer exists oc adm prune builds --orphans # To actually perform the prune operation, the confirm flag must be appended oc adm prune builds --orphans --confirm", "Dry run deleting all but the last complete deployment for every deployment config oc adm prune deployments 
--keep-complete=1 # To actually perform the prune operation, the confirm flag must be appended oc adm prune deployments --keep-complete=1 --confirm", "Prune all orphaned groups oc adm prune groups --sync-config=/path/to/ldap-sync-config.yaml --confirm # Prune all orphaned groups except the ones from the denylist file oc adm prune groups --blacklist=/path/to/denylist.txt --sync-config=/path/to/ldap-sync-config.yaml --confirm # Prune all orphaned groups from a list of specific groups specified in an allowlist file oc adm prune groups --whitelist=/path/to/allowlist.txt --sync-config=/path/to/ldap-sync-config.yaml --confirm # Prune all orphaned groups from a list of specific groups specified in a list oc adm prune groups groups/group_name groups/other_name --sync-config=/path/to/ldap-sync-config.yaml --confirm", "See what the prune command would delete if only images and their referrers were more than an hour old # and obsoleted by 3 newer revisions under the same tag were considered oc adm prune images --keep-tag-revisions=3 --keep-younger-than=60m # To actually perform the prune operation, the confirm flag must be appended oc adm prune images --keep-tag-revisions=3 --keep-younger-than=60m --confirm # See what the prune command would delete if we are interested in removing images # exceeding currently set limit ranges ('openshift.io/Image') oc adm prune images --prune-over-size-limit # To actually perform the prune operation, the confirm flag must be appended oc adm prune images --prune-over-size-limit --confirm # Force the insecure HTTP protocol with the particular registry host name oc adm prune images --registry-url=http://registry.example.org --confirm # Force a secure connection with a custom certificate authority to the particular registry host name oc adm prune images --registry-url=registry.example.org --certificate-authority=/path/to/custom/ca.crt --confirm", "Reboot all MachineConfigPools oc adm reboot-machine-config-pool mcp/worker mcp/master # Reboot all MachineConfigPools that inherit from worker. This include all custom MachineConfigPools and infra. 
oc adm reboot-machine-config-pool mcp/worker # Reboot masters oc adm reboot-machine-config-pool mcp/master", "Use git to check out the source code for the current cluster release to DIR oc adm release extract --git=DIR # Extract cloud credential requests for AWS oc adm release extract --credentials-requests --cloud=aws # Use git to check out the source code for the current cluster release to DIR from linux/s390x image # Note: Wildcard filter is not supported; pass a single os/arch to extract oc adm release extract --git=DIR quay.io/openshift-release-dev/ocp-release:4.11.2 --filter-by-os=linux/s390x", "Show information about the cluster's current release oc adm release info # Show the source code that comprises a release oc adm release info 4.11.2 --commit-urls # Show the source code difference between two releases oc adm release info 4.11.0 4.11.2 --commits # Show where the images referenced by the release are located oc adm release info quay.io/openshift-release-dev/ocp-release:4.11.2 --pullspecs # Show information about linux/s390x image # Note: Wildcard filter is not supported; pass a single os/arch to extract oc adm release info quay.io/openshift-release-dev/ocp-release:4.11.2 --filter-by-os=linux/s390x", "Perform a dry run showing what would be mirrored, including the mirror objects oc adm release mirror 4.11.0 --to myregistry.local/openshift/release --release-image-signature-to-dir /tmp/releases --dry-run # Mirror a release into the current directory oc adm release mirror 4.11.0 --to file://openshift/release --release-image-signature-to-dir /tmp/releases # Mirror a release to another directory in the default location oc adm release mirror 4.11.0 --to-dir /tmp/releases # Upload a release from the current directory to another server oc adm release mirror --from file://openshift/release --to myregistry.com/openshift/release --release-image-signature-to-dir /tmp/releases # Mirror the 4.11.0 release to repository registry.example.com and apply signatures to connected cluster oc adm release mirror --from=quay.io/openshift-release-dev/ocp-release:4.11.0-x86_64 --to=registry.example.com/your/repository --apply-release-image-signature", "Create a release from the latest origin images and push to a DockerHub repository oc adm release new --from-image-stream=4.11 -n origin --to-image docker.io/mycompany/myrepo:latest # Create a new release with updated metadata from a previous release oc adm release new --from-release registry.ci.openshift.org/origin/release:v4.11 --name 4.11.1 --previous 4.11.0 --metadata ... 
--to-image docker.io/mycompany/myrepo:latest # Create a new release and override a single image oc adm release new --from-release registry.ci.openshift.org/origin/release:v4.11 cli=docker.io/mycompany/cli:latest --to-image docker.io/mycompany/myrepo:latest # Run a verification pass to ensure the release can be reproduced oc adm release new --from-release registry.ci.openshift.org/origin/release:v4.11", "Update node 'foo' with a taint with key 'dedicated' and value 'special-user' and effect 'NoSchedule' # If a taint with that key and effect already exists, its value is replaced as specified oc adm taint nodes foo dedicated=special-user:NoSchedule # Remove from node 'foo' the taint with key 'dedicated' and effect 'NoSchedule' if one exists oc adm taint nodes foo dedicated:NoSchedule- # Remove from node 'foo' all the taints with key 'dedicated' oc adm taint nodes foo dedicated- # Add a taint with key 'dedicated' on nodes having label mylabel=X oc adm taint node -l myLabel=X dedicated=foo:PreferNoSchedule # Add to node 'foo' a taint with key 'bar' and no value oc adm taint nodes foo bar:NoSchedule", "Show usage statistics for images oc adm top images", "Show usage statistics for image streams oc adm top imagestreams", "Show metrics for all nodes oc adm top node # Show metrics for a given node oc adm top node NODE_NAME", "Show metrics for all pods in the default namespace oc adm top pod # Show metrics for all pods in the given namespace oc adm top pod --namespace=NAMESPACE # Show metrics for a given pod and its containers oc adm top pod POD_NAME --containers # Show metrics for the pods defined by label name=myLabel oc adm top pod -l name=myLabel", "Mark node \"foo\" as schedulable oc adm uncordon foo", "View the update status and available cluster updates oc adm upgrade # Update to the latest version oc adm upgrade --to-latest=true", "Verify the image signature and identity using the local GPG keychain oc adm verify-image-signature sha256:c841e9b64e4579bd56c794bdd7c36e1c257110fd2404bebbb8b613e4935228c4 --expected-identity=registry.local:5000/foo/bar:v1 # Verify the image signature and identity using the local GPG keychain and save the status oc adm verify-image-signature sha256:c841e9b64e4579bd56c794bdd7c36e1c257110fd2404bebbb8b613e4935228c4 --expected-identity=registry.local:5000/foo/bar:v1 --save # Verify the image signature and identity via exposed registry route oc adm verify-image-signature sha256:c841e9b64e4579bd56c794bdd7c36e1c257110fd2404bebbb8b613e4935228c4 --expected-identity=registry.local:5000/foo/bar:v1 --registry-url=docker-registry.foo.com # Remove all signature verifications from the image oc adm verify-image-signature sha256:c841e9b64e4579bd56c794bdd7c36e1c257110fd2404bebbb8b613e4935228c4 --remove-all", "Wait for all nodes to complete a requested reboot from 'oc adm reboot-machine-config-pool mcp/worker mcp/master' oc adm wait-for-node-reboot nodes --all # Wait for masters to complete a requested reboot from 'oc adm reboot-machine-config-pool mcp/master' oc adm wait-for-node-reboot nodes -l node-role.kubernetes.io/master # Wait for masters to complete a specific reboot oc adm wait-for-node-reboot nodes -l node-role.kubernetes.io/master --reboot-number=4" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/cli_tools/openshift-cli-oc
probe::sunrpc.sched.delay
probe::sunrpc.sched.delay Name probe::sunrpc.sched.delay - Delay an RPC task Synopsis sunrpc.sched.delay Values prog the program number in the RPC call xid the transmission id in the RPC call delay the time delayed vers the program version in the RPC call tk_flags the flags of the task tk_pid the debugging id of the task prot the IP protocol in the RPC call
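For illustration only (this example is not part of the original reference), a minimal SystemTap script that reports these values each time an RPC task is delayed might look like the following; the output format and the script name are assumptions:
# sunrpc-delay.stp -- print details of delayed RPC tasks (illustrative sketch)
probe sunrpc.sched.delay {
  # prog, vers, prot, xid, tk_pid, tk_flags, and delay are the probe values documented above
  printf("task %d (prog %d vers %d prot %d xid %d flags 0x%x) delayed %d\n",
         tk_pid, prog, vers, prot, xid, tk_flags, delay)
}
Run it with, for example, stap sunrpc-delay.stp and generate NFS or other RPC traffic to observe output.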
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-sunrpc-sched-delay
Chapter 4. Configuration
Chapter 4. Configuration The following options can be used in an application-properties file to configure your Spring Boot application. 4.1. Connection options These options determine how AMQ Spring Boot Starter establishes connections to remote AMQP peers. The starter uses AMQ JMS to communicate over the network. For more information, see Using the AMQ JMS Client . amqphub.amqp10jms.remoteUrl The connection URI that the AMQ JMS client uses to establish new connections. Connection URI format For more information, see Connection URIs in Using the AMQ JMS Client. amqphub.amqp10jms.username The username used to authenticate the connection. amqphub.amqp10jms.password The password used to authenticate the connection. amqphub.amqp10jms.clientId The client ID applied to the connection. amqphub.amqp10jms.receiveLocalOnly If enabled, calls to receive with a timeout argument check the consumer's local message buffer only. Otherwise, the remote peer is checked as well to ensure there are no messages available. It is disabled by default. amqphub.amqp10jms.receiveNoWaitLocalOnly If enabled, calls to receiveNoWait check the consumer's local message buffer only. Otherwise, the remote peer is checked as well to ensure there are no messages available. It is disabled by default. 4.2. Pooling options These options determine how AMQ Spring Boot Starter caches JMS connections and sessions. The starter uses AMQ JMS Pool for its pooling. For more information, see Using the AMQ JMS Pool Library . amqphub.amqp10jms.pool.enabled Controls whether pooling is enabled. It is disabled by default. amqphub.amqp10jms.pool.maxConnections The maximum number of connections for a single pool. The default is 1. amqphub.amqp10jms.pool.maxSessionsPerConnection The maximum number of sessions for each connection. The default is 500. A negative value removes any limit. If the limit is exceeded, createSession() either blocks or throws an exception, depending on configuration. amqphub.amqp10jms.pool.blockIfSessionPoolIsFull If enabled, calls to createSession() block until a session becomes available in the pool. It is enabled by default. If disabled, calls to createSession() throw an IllegalStateException if no session is available. amqphub.amqp10jms.pool.blockIfSessionPoolIsFullTimeout The time in milliseconds before a blocked call to createSession() throws an IllegalStateException . The default is -1, meaning the call blocks forever. amqphub.amqp10jms.pool.connectionIdleTimeout The time in milliseconds before a connection not currently on loan can be evicted from the pool. The default is 30 seconds. A value of 0 disables the timeout. amqphub.amqp10jms.pool.connectionCheckInterval The time in milliseconds between periodic checks for expired connections. The default is 0, meaning the check is disabled. amqphub.amqp10jms.pool.useAnonymousProducers If enabled, use a single anonymous JMS MessageProducer for all calls to createProducer() . It is enabled by default. In rare cases, this behavior is undesirable. If disabled, every call to createProducer() results in a new MessageProducer instance. amqphub.amqp10jms.pool.explicitProducerCacheSize When not using anonymous producers, the JMS Session can be configured to cache a certain number of MessageProducer objects with explicit destinations. As new producers are created that do not match the cached producers, the oldest entry in the cache is evicted. amqphub.amqp10jms.pool.useProviderJMSContext If enabled, use the JMSContext classes of the underlying JMS provider. 
It is disabled by default. In normal operation, the pool uses its own generic JMSContext implementation to wrap connections from the pool instead of using the provider implementation. The generic implementation might have limitations the provider implementation does not. However, when enabled, connections from the JMSContext API are not managed by the pool.
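As a minimal sketch (not taken from the original text), the options above can be combined in an application.properties file like the following; the broker URL and credentials are placeholder assumptions:
# Connection options (values are examples only)
amqphub.amqp10jms.remoteUrl=amqp://broker.example.com:5672
amqphub.amqp10jms.username=example-user
amqphub.amqp10jms.password=example-password
# Pooling options (maxConnections and maxSessionsPerConnection shown at their documented defaults)
amqphub.amqp10jms.pool.enabled=true
amqphub.amqp10jms.pool.maxConnections=1
amqphub.amqp10jms.pool.maxSessionsPerConnection=500
Note that pooling is disabled by default, so amqphub.amqp10jms.pool.enabled must be set to true explicitly to turn it on.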
[ "amqp[s]://host:port[?option=value[&option2=value...]]" ]
https://docs.redhat.com/en/documentation/red_hat_amq/2021.q1/html/using_the_amq_spring_boot_starter/configuration
Part IV. Creating C or C++ Applications
Part IV. Creating C or C++ Applications Red Hat offers multiple tools for creating applications using the C and C++ languages. This part of the book lists some of the most common development tasks.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/developer_guide/creating_c_or_c_applications
Logging
Logging OpenShift Container Platform 4.14 Configuring and using logging in OpenShift Container Platform Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/logging/index
Chapter 7. Migrating JBoss EAP 7.3 Configurations to JBoss EAP 7.4
Chapter 7. Migrating JBoss EAP 7.3 Configurations to JBoss EAP 7.4 7.1. Migrating a JBoss EAP 7.3 Standalone Server to JBoss EAP 7.4 By default, the JBoss Server Migration Tool performs the following tasks when migrating a standalone server configuration from JBoss EAP 7.3 to JBoss EAP 7.4. 7.1.1. Remove Unsupported Subsystems The JBoss Server Migration Tool removes all unsupported subsystem configurations and extensions from migrated server configurations. The tool logs each subsystem and extension to its log file and to the console as it is removed. NOTE Any subsystem that was not supported in JBoss EAP 7.3, but was added by an administrator to that server, is also not supported in JBoss EAP 7.4 and will be removed. To skip removal of the unsupported subsystems, set the subsystems.remove-unsupported-subsystems.skip environment property to true . You can override the default behavior of the JBoss Server Migration Tool and specify which subsystems and extensions should be included or excluded during the migration using the following environment properties. Property Name Property Description extensions.excludes A list of module names of extensions that should never be migrated, for example, com.example.extension1,com.example.extension3 . extensions.includes A list of module names of extensions that should always be migrated, for example, com.example.extension2,com.example.extension4 . subsystems.excludes A list of subsystem namespaces, stripped of the version, that should never be migrated, for example, urn:jboss:domain:logging, urn:jboss:domain:ejb3 . subsystems.includes A list of subsystem namespaces, stripped of the version, that should always be migrated, for example, urn:jboss:domain:security, urn:jboss:domain:ee . 7.1.2. Migrate Referenced Modules A configuration that is migrated from a source server to a target server might reference or depend on a module that is not installed on the target server. The JBoss Server Migration Tool detects this and automatically migrates the referenced modules, plus their dependent modules, from the source server to the target server. A module referenced by a standalone server configuration is migrated using the following process. A module referenced by a security realm configuration is migrated as a plug-in module. A module referenced by the datasource subsystem configuration is migrated as a datasource driver module. A module referenced by the ee subsystem configuration is migrated as a global module. A module referenced by the naming subsystem configuration is migrated as an object factory module. A module referenced by the messaging subsystem configuration is migrated as a Jakarta Messaging bridge module. A module referenced by a vault configuration is migrated to the new configuration. Any extension that is not installed on the target configuration is migrated to the target server configuration. The console logs a message noting the module ID for any module that is migrated. It is possible to exclude the migration of specific modules by specifying the module ID in the modules.excludes environment property. See Configuring the Migration of Modules for more information. 7.1.3. Migrate Referenced Paths A configuration that is migrated from a source server to a target server might reference or depend on file paths and directories that must also be migrated to the target server. The JBoss Server Migration Tool does not migrate absolute path references. It only migrates files or directories that are configured as relative to the source configuration. 
The console logs a message noting each path that is migrated. The JBoss Server Migration Tool automatically migrates the following path references: Vault keystore and encrypted file's directory. To skip the migration of referenced paths, set the paths.migrate-paths-requested-by-configuration.vault.skip environment property to true . 7.1.4. Add the health subsystem The JBoss EAP 7.4 health subsystem provides support for a server's health functionality. The JBoss Server Migration Tool automatically adds the default health subsystem configuration to the migrated configuration file. To skip the addition of the health subsystem configuration, set the subsystem.health.add.skip environment property to true . After you add the health subsystem to JBoss EAP 7.4, you'll see the following message in your web console: 7.1.5. Add the metrics subsystem The JBoss EAP 7.4 metrics subsystem provides support for a server's metric functionality. The JBoss Server Migration Tool automatically adds the default metrics subsystem configuration to the migrated configuration file. To skip the addition of the metrics subsystem configuration, set the subsystem.metrics.add.skip environment property to true . After you add the metrics subsystem to JBoss EAP 7.4, you'll see the following message in your web console: 7.1.6. Migrate Compatible Security Realms Because the JBoss EAP 7.4 security realm configurations are fully compatible with the JBoss EAP 7.3 security realm configurations, they require no update by the JBoss Server Migration Tool. However, if the application-users.properties , application-roles.properties , mgmt-users.properties , and mgmt-groups.properties files are not referenced using an absolute path, the tool copies them to the path expected by the migrated configuration file. To skip the security realms migration, set the security-realms.migrate-properties.skip environment property to true . 7.1.7. Migrate Deployments The JBoss Server Migration Tool can migrate the following types of standalone server deployment configurations. Deployments it references, also known as persistent deployments . Deployments found in directories monitored by its deployment scanners . Deployment overlays it references. The migration of a deployment consists of installing related file resources on the target server, and possibly updating the migrated configuration. The JBoss Server Migration Tool is preconfigured to skip deployments by default when running in non-interactive mode. To enable migration of deployments, set the deployments.migrate-deployments.skip environment property to false . Important Be aware that when you run the JBoss Server Migration Tool in interactive mode and enter invalid input, the resulting behavior depends on the value of the deployments.migrate-deployments environment property. If deployments.migrate-deployments.skip is set to false and you enter invalid input, the tool will try to migrate the deployments. If deployments.migrate-deployments.skip is set to true and you enter invalid input, the tool will skip the deployments migration. To enable the migration of specific types of deployments, see the following sections. Warning The JBoss Server Migration Tool does not determine whether deployed resources are compatible with the target server. This means that applications or resources might not deploy, might not work as expected, or might not work at all. 
Also be aware that artifacts such as JBoss EAP 7.3 *-jms.xml configuration files are copied without modification and can cause the JBoss EAP server to boot with errors. Red Hat recommends that you use the Migration Toolkit for Applications (MTA) to analyze deployments to determine compatibility among different JBoss EAP servers. For more information, see Product Documentation for Migration Toolkit for Applications . 7.1.7.1. Migrate Persistent Deployments To enable migration of persistent deployments when running in non-interactive mode, set the deployments.migrate-persistent-deployments.skip environment property to false . The JBoss Server Migration Tool searches for any persistent deployment references and lists them to the console. The processing workflow then depends on whether you are running the tool in interactive mode or in non-interactive mode , as described below. Migrating Persistent Deployments in Non-interactive Mode If you run the tool in non-interactive mode, the tool uses the preconfigured properties to determine whether to migrate the persistent deployments. Persistent deployments are migrated only if both the deployments.migrate-deployments.skip and deployments.migrate-persistent-deployments.skip properties are set to false . Migrating Persistent Deployments in Interactive Mode If you run the tool in interactive mode, the JBoss Server Migration Tool prompts you for each deployment using the following workflow. After printing the persistent deployments it finds to the console, you see the following prompt. Respond with yes to skip migration of persistent deployments. All deployment references are removed from the migrated configuration and you end this part of the migration process. Respond with no to continue with the migration. If you choose to continue, you see the following prompt. Respond with yes to automatically migrate all deployments and end this part of the migration process. Respond with no to continue with the migration. If you choose to continue, you receive a prompt asking to confirm the migration for each referenced deployment. Respond with yes to migrate the deployment. Respond with no to remove the deployment from the migrated configuration. 7.1.7.2. Migrate Deployment Scanner Deployments Deployment scanners, which are only used in standalone server configurations, monitor a directory for new files and manage their deployment automatically or through special deployment marker files. To enable migration of deployments that are located in directories watched by a deployment scanner when running in non-interactive mode, set the deployments.migrate-deployment-scanner-deployments.skip environment property to false . When migrating a standalone server configuration, the JBoss Server Migration Tool first searches for any configured deployment scanners. For each scanner found, it searches its monitored directories for deployments marked as deployed and prints the results to the console. The processing workflow then depends on whether you are running the tool in interactive mode or in non-interactive mode , as described below. Migrating Deployment Scanner Deployments in Non-interactive Mode If you run the tool in non-interactive mode, the tool uses the preconfigured properties to determine whether to migrate the deployment scanner deployments. Deployment scanner deployments are migrated only if both the deployments.migrate-deployments.skip and deployments.migrate-deployment-scanner-deployments.skip properties are set to false . 
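As a sketch of how this non-interactive behavior might be preconfigured, the following listing combines the two properties named above in the environment-properties style used by the JBoss Server Migration Tool. The property names are taken from this chapter; where you set them, for example in the tool's environment properties file or on its command line, depends on your installation, so verify the mechanism against your version of the tool.
deployments.migrate-deployments.skip=false
deployments.migrate-deployment-scanner-deployments.skip=false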
Migrating Deployment Scanner Deployments in Interactive Mode If you run the tool in interactive mode, the JBoss Server Migration Tool prompts you for each deployment using the following workflow. After printing the deployment scanner deployments it finds to the console, you see the following prompt. Respond with yes to skip migration of deployment scanner deployments. All deployment references are removed from the migrated configuration and you end this part of the migration process. Respond with no to continue with the migration. If you choose to continue, you see the following prompt. Respond with yes to automatically migrate all deployments and end this part of the migration process. Respond with no to continue with the migration. If you choose to continue, you receive a prompt asking to confirm the migration for each referenced deployment. Respond with yes to migrate the deployment. Respond with no to remove the deployment from the migrated configuration. 7.1.7.3. Migrate Deployment Overlays The migration of deployment overlays is a fully automated process. If you have enabled migration of deployments by setting the deployments.migrate-deployments.skip environment property to false , the JBoss Server Migration Tool searches for deployment overlays referenced in the standalone server configuration that are linked to migrated deployments. It automatically migrates those that are found, removes those that are not referenced, and logs the results to its log file and to the console. 7.2. Migrating a JBoss EAP 7.3 managed domain to JBoss EAP 7.4 Warning When you use the JBoss Server Migration Tool, migrate your domain controller before you migrate your hosts to ensure that your domain controller uses a JBoss EAP version that is the same as, or later than, the version used by the hosts. For example, a domain controller running on EAP 7.3 cannot handle a host running on EAP 7.4. For more information and to learn about the supported configurations, see Managing Multiple JBoss EAP Versions in the Configuration Guide for JBoss EAP. By default, the JBoss Server Migration Tool performs the following tasks when migrating a managed domain configuration from JBoss EAP 7.3 to JBoss EAP 7.4. 7.2.1. Remove Unsupported Subsystems The JBoss Server Migration Tool removes all unsupported subsystem configurations and extensions from migrated server configurations. The tool logs each subsystem and extension to its log file and to the console as it is removed. NOTE Any subsystem that was not supported in JBoss EAP 7.3, but was added by an administrator to that server, is also not supported in JBoss EAP 7.4 and will be removed. To skip removal of the unsupported subsystems, set the subsystems.remove-unsupported-subsystems.skip environment property to true . You can override the default behavior of the JBoss Server Migration Tool and specify which subsystems and extensions should be included or excluded during the migration using the following environment properties. Property Name Property Description extensions.excludes A list of module names of extensions that should never be migrated, for example, com.example.extension1,com.example.extension3 . extensions.includes A list of module names of extensions that should always be migrated, for example, com.example.extension2,com.example.extension4 . subsystems.excludes A list of subsystem namespaces, stripped of the version, that should never be migrated, for example, urn:jboss:domain:logging, urn:jboss:domain:ejb3 . 
subsystems.includes A list of subsystem namespaces, stripped of the version, that should always be migrated, for example, urn:jboss:domain:security, urn:jboss:domain:ee . 7.2.2. Migrate Referenced Modules A configuration that is migrated from a source server to a target server might reference or depend on a module that is not installed on the target server. The JBoss Server Migration Tool detects this and automatically migrates the referenced modules, plus their dependent modules, from the source server to the target server. A module referenced by a managed domain configuration is migrated using the following process. A module referenced by a security realm configuration is migrated as a plug-in module. A module referenced by the datasource subsystem configuration is migrated as a datasource driver module. A module referenced by the ee subsystem configuration is migrated as a global module. A module referenced by the naming subsystem configuration is migrated as an object factory module. A module referenced by the messaging subsystem configuration is migrated as a Jakarta Messaging bridge module. A module referenced by a vault configuration is migrated to the new configuration. Any extension that is not installed on the target configuration is migrated to the target server configuration. The console logs a message noting the module ID for any module that is migrated. It is possible to exclude the migration of specific modules by specifying the module ID in the modules.excludes environment property. See Configuring the Migration of Modules for more information. 7.2.3. Migrate Referenced Paths A configuration that is migrated from a source server to a target server might reference or depend on file paths and directories that must also be migrated to the target server. The JBoss Server Migration Tool does not migrate absolute path references. It only migrates files or directories that are configured as relative to the source configuration. The console logs a message noting each path that is migrated. The JBoss Server Migration Tool automatically migrates the following path references: Vault keystore and encrypted file's directory. To skip the migration of referenced paths, set the paths.migrate-paths-requested-by-configuration.vault.skip environment property to true . 7.2.4. Add Host Excludes The JBoss EAP 7.4 domain controller can potentially include functionality that is not supported by hosts running on older versions of the server. The host-exclude configuration specifies the resources that should be hidden from those older versions. When migrating a domain controller configuration, the JBoss Server Migration Tool adds to or replaces the source server's host-exclude configuration with the configuration of the target JBoss EAP 7.4 server. The JBoss Server Migration Tool automatically updates the host-exclude configuration and logs the results to its log file and to the console. 7.2.5. Migrate Deployments The JBoss Server Migration Tool can migrate the following types of managed domain deployment configurations. Deployments it references, also known as persistent deployments . Deployment overlays it references. The migration of a deployment consists of installing related file resources on the target server, and possibly updating the migrated configuration. The JBoss Server Migration Tool is preconfigured to skip deployments by default when running in non-interactive mode. To enable migration of deployments, set the deployments.migrate-deployments.skip environment property to false . 
Important Be aware that when you run the JBoss Server Migration Tool in interactive mode and enter invalid input, the resulting behavior depends on the value of the deployments.migrate-deployments environment property. If deployments.migrate-deployments.skip is set to false and you enter invalid input, the tool will try to migrate the deployments. If deployments.migrate-deployments.skip is set to true and you enter invalid input, the tool will skip the deployments migration. To enable the migration of specific types of deployments, see the following sections. Warning The JBoss Server Migration Tool does not determine whether deployed resources are compatible with the target server. This means that applications or resources might not deploy, might not work as expected, or might not work at all. Also be aware that artifacts such as JBoss EAP 7.3 *-jms.xml configuration files are copied without modification and can cause the JBoss EAP server to boot with errors. Red Hat recommends that you use the Migration Toolkit for Applications (MTA) to analyze deployments to determine compatibility among different JBoss EAP servers. For more information, see Product Documentation for Migration Toolkit for Applications . 7.2.5.1. Migrate Persistent Deployments To enable migration of persistent deployments when running in non-interactive mode, set the deployments.migrate-persistent-deployments.skip environment property to false . The JBoss Server Migration Tool searches for any persistent deployment references and lists them to the console. The processing workflow then depends on whether you are running the tool in interactive mode or in non-interactive mode , as described below. Migrating Persistent Deployments in Non-interactive Mode If you run the tool in non-interactive mode, the tool uses the preconfigured properties to determine whether to migrate the persistent deployments. Persistent deployments are migrated only if both the deployments.migrate-deployments.skip and deployments.migrate-persistent-deployments.skip properties are set to false . Migrating Persistent Deployments in Interactive Mode If you run the tool in interactive mode, the JBoss Server Migration Tool prompts you for each deployment using the following workflow. After printing the persistent deployments it finds to the console, you see the following prompt. Respond with yes to skip migration of persistent deployments. All deployment references are removed from the migrated configuration and you end this part of the migration process. Respond with no to continue with the migration. If you choose to continue, you see the following prompt. Respond with yes to automatically migrate all deployments and end this part of the migration process. Respond with no to continue with the migration. If you choose to continue, you receive a prompt asking to confirm the migration for each referenced deployment. Respond with yes to migrate the deployment. Respond with no to remove the deployment from the migrated configuration. 7.2.5.2. Migrate Deployment Overlays The migration of deployment overlays is a fully automated process. If you have enabled migration of deployments by setting the deployments.migrate-deployments.skip environment property to false , the JBoss Server Migration Tool searches for deployment overlays referenced in the domain configuration that are linked to migrated deployments. It automatically migrates those that are found, removes those that are not referenced, and logs the results to its log file and to the console. 
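As a further sketch, the exclusion properties described earlier in this chapter can be combined for a managed domain migration in the same environment-properties style. The extensions.excludes values below repeat the examples from the property table; the modules.excludes module IDs are hypothetical placeholders, and the way you supply the properties depends on your installation of the tool.
modules.excludes=com.example.mymodule1,com.example.mymodule2
extensions.excludes=com.example.extension1,com.example.extension3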
7.3. Migrating a JBoss EAP 7.3 Host Configuration to JBoss EAP 7.4 By default, the JBoss Server Migration Tool performs the following tasks when migrating a host server configuration from JBoss EAP 7.3 to JBoss EAP 7.4. 7.3.1. Migrate Referenced Modules A configuration that is migrated from a source server to a target server might reference or depend on a module that is not installed on the target server. The JBoss Server Migration Tool detects this and automatically migrates the referenced modules, plus their dependent modules, from the source server to the target server. A module referenced by a host server configuration is migrated using the following process. A module referenced by a security realm configuration is migrated as a plug-in module. The console logs a message noting the module ID for any module that is migrated. It is possible to exclude the migration of specific modules by specifying the module ID in the modules.excludes environment property. See Configuring the Migration of Modules for more information. 7.3.2. Migrate Referenced Paths A configuration that is migrated from a source server to a target server might reference or depend on file paths and directories that must also be migrated to the target server. The JBoss Server Migration Tool does not migrate absolute path references. It only migrates files or directories that are configured as relative to the source configuration. The console logs a message noting each path that is migrated. The JBoss Server Migration Tool automatically migrates the following path references: Vault keystore and encrypted file's directory. To skip the migration of referenced paths, set the paths.migrate-paths-requested-by-configuration.vault.skip environment property to true . 7.3.3. Migrate Compatible Security Realms Because the JBoss EAP 7.4 security realm configurations are fully compatible with the JBoss EAP 7.3 security realm configurations, they require no update by the JBoss Server Migration Tool. However, if the application-users.properties , application-roles.properties , mgmt-users.properties , and mgmt-groups.properties files are not referenced using an absolute path, the tool copies them to the path expected by the migrated configuration file. To skip the security realms migration, set the security-realms.migrate-properties.skip environment property to true .
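The following is a minimal sketch of how a host configuration migration might be started in non-interactive mode so that the skip properties described above take effect without prompting. The --source , --target , and --interactive options are typical for the JBoss Server Migration Tool, but treat the exact script name and arguments as assumptions and verify them against your version of the tool; EAP_7.3_HOME and EAP_7.4_HOME are placeholders for your installation directories.
./jboss-server-migration.sh --source EAP_7.3_HOME --target EAP_7.4_HOME --interactive false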
[ "INFO Subsystem health added.", "INFO Subsystem metrics added.", "INFO [ServerMigrationTask#67] Persistent deployments found: [cmtool-helloworld3.war, cmtool-helloworld4.war, cmtool-helloworld2.war, cmtool-helloworld1.war]", "This tool is not able to assert if persistent deployments found are compatible with the target server, skip persistent deployments migration? yes/no?", "Migrate all persistent deployments found? yes/no?", "Migrate persistent deployment 'helloworld01.war'? yes/no?", "INFO [ServerMigrationTask#68] Removed persistent deployment from configuration /deployment=helloworld01.war", "This tool is not able to assert if the scanner's deployments found are compatible with the target server, skip scanner's deployments migration? yes/no?", "Migrate all scanner's deployments found? yes/no?", "Migrate scanner's deployment 'helloworld02.war'? yes/no?", "INFO [ServerMigrationTask#69] Resource with path EAP_HOME /standalone/deployments/helloworld02.war migrated.", "INFO Host-excludes configuration added.", "INFO [ServerMigrationTask#67] Persistent deployments found: [cmtool-helloworld3.war, cmtool-helloworld4.war, cmtool-helloworld2.war, cmtool-helloworld1.war]", "This tool is not able to assert if persistent deployments found are compatible with the target server, skip persistent deployments migration? yes/no?", "Migrate all persistent deployments found? yes/no?", "Migrate persistent deployment 'helloworld01.war'? yes/no?", "INFO [ServerMigrationTask#68] Removed persistent deployment from configuration /deployment=helloworld01.war" ]
https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html/using_the_jboss_server_migration_tool/migrating_jboss_eap_7_3_configurations_to_jboss_eap_7_4
Chapter 6. Configuring a network bridge
Chapter 6. Configuring a network bridge A network bridge is a link-layer device which forwards traffic between networks based on a table of MAC addresses. The bridge builds the MAC address table by listening to network traffic and thereby learning what hosts are connected to each network. For example, you can use a software bridge on a Red Hat Enterprise Linux host to emulate a hardware bridge or, in virtualization environments, to integrate virtual machines (VMs) into the same network as the host. A bridge requires a network device in each network the bridge should connect. When you configure a bridge, the bridge is called the controller and the devices it uses are called ports . You can create bridges on different types of devices, such as: Physical and virtual Ethernet devices Network bonds Network teams VLAN devices Due to the IEEE 802.11 standard, which specifies the use of 3-address frames in Wi-Fi for the efficient use of airtime, you cannot configure a bridge over Wi-Fi networks operating in Ad-Hoc or Infrastructure modes. 6.1. Configuring a network bridge by using nmcli To configure a network bridge on the command line, use the nmcli utility. Prerequisites Two or more physical or virtual network devices are installed on the server. The host runs on Red Hat Enterprise Linux 9.4 or later. This version introduced the port-type , controller , and connection.autoconnect-ports options used in this procedure. Earlier RHEL versions instead use slave-type , master , and connection.autoconnect-slaves . To use Ethernet devices as ports of the bridge, the physical or virtual Ethernet devices must be installed on the server. To use team, bond, or VLAN devices as ports of the bridge, you can either create these devices while you create the bridge or you can create them in advance as described in: Configuring a network team by using nmcli Configuring a network bond by using nmcli Configuring VLAN tagging by using nmcli Procedure Create a bridge interface. For example, to create a bridge named bridge0 , enter: Display the network interfaces, and note the names of the interfaces you want to add to the bridge: In this example: enp7s0 and enp8s0 are not configured. To use these devices as ports, add connection profiles in the next step. bond0 and bond1 have existing connection profiles. To use these devices as ports, modify their profiles in the next step. Assign the interfaces to the bridge. If the interfaces you want to assign to the bridge are not configured, create new connection profiles for them: These commands create profiles for enp7s0 and enp8s0 , and add them to the bridge0 connection. If you want to assign an existing connection profile to the bridge: Set the controller parameter of these connections to bridge0 : These commands assign the existing connection profiles named bond0 and bond1 to the bridge0 connection. Reactivate the connections: Configure the IPv4 settings: If you plan to use this bridge device as a port of other devices, enter: To use DHCP, no action is required. To set a static IPv4 address, network mask, default gateway, and DNS server to the bridge0 connection, enter: Configure the IPv6 settings: If you plan to use this bridge device as a port of other devices, enter: To use stateless address autoconfiguration (SLAAC), no action is required. To set a static IPv6 address, network mask, default gateway, and DNS server to the bridge0 connection, enter: Optional: Configure further properties of the bridge. 
For example, to set the Spanning Tree Protocol (STP) priority of bridge0 to 16384 , enter: By default, STP is enabled. Activate the connection: Verify that the ports are connected, and the CONNECTION column displays the port's connection name: When you activate any port of the connection, NetworkManager also activates the bridge, but not the other ports of it. You can configure Red Hat Enterprise Linux to enable all ports automatically when the bridge is enabled: Enable the connection.autoconnect-ports parameter of the bridge connection: Reactivate the bridge: Verification Use the ip utility to display the link status of Ethernet devices that are ports of a specific bridge: Use the bridge utility to display the status of Ethernet devices that are ports of any bridge device: To display the status for a specific Ethernet device, use the bridge link show dev <ethernet_device_name> command. Additional resources bridge(8) and nm-settings(5) man pages on your system NetworkManager duplicates a connection after restart of NetworkManager service (Red Hat Knowledgebase) How to configure a bridge with VLAN information? (Red Hat Knowledgebase) 6.2. Configuring a network bridge by using the RHEL web console Use the RHEL web console to configure a network bridge if you prefer to manage network settings using a web browser-based interface. Prerequisites Two or more physical or virtual network devices are installed on the server. To use Ethernet devices as ports of the bridge, the physical or virtual Ethernet devices must be installed on the server. To use team, bond, or VLAN devices as ports of the bridge, you can either create these devices while you create the bridge or you can create them in advance as described in: Configuring a network team using the RHEL web console Configuring a network bond by using the RHEL web console Configuring VLAN tagging by using the RHEL web console You have installed the RHEL 9 web console. You have enabled the cockpit service. Your user account is allowed to log in to the web console. For instructions, see Installing and enabling the web console . Procedure Log in to the RHEL 9 web console. For details, see Logging in to the web console . Select the Networking tab in the navigation on the left side of the screen. Click Add bridge in the Interfaces section. Enter the name of the bridge device you want to create. Select the interfaces that should be ports of the bridge. Optional: Enable the Spanning tree protocol (STP) feature to avoid bridge loops and broadcast radiation. Click Apply . By default, the bridge uses a dynamic IP address. If you want to set a static IP address: Click the name of the bridge in the Interfaces section. Click Edit next to the protocol you want to configure. Select Manual next to Addresses , and enter the IP address, prefix, and default gateway. In the DNS section, click the + button, and enter the IP address of the DNS server. Repeat this step to set multiple DNS servers. In the DNS search domains section, click the + button, and enter the search domain. If the interface requires static routes, configure them in the Routes section. Click Apply . Verification Select the Networking tab in the navigation on the left side of the screen, and check if there is incoming and outgoing traffic on the interface: 6.3. Configuring a network bridge by using nmtui The nmtui application provides a text-based user interface for NetworkManager. You can use nmtui to configure a network bridge on a host without a graphical interface. 
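If nmtui is not available on the host, it is typically provided by the NetworkManager-tui package on RHEL. Assuming that package name applies to your system, you can install it with:
dnf install NetworkManager-tui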
Note In nmtui : Navigate by using the cursor keys. Press a button by selecting it and hitting Enter . Select and clear checkboxes by using Space . To return to the previous screen, use ESC . Prerequisites Two or more physical or virtual network devices are installed on the server. To use Ethernet devices as ports of the bridge, the physical or virtual Ethernet devices must be installed on the server. Procedure If you do not know the network device names on which you want to configure a network bridge, display the available devices: Start nmtui : Select Edit a connection , and press Enter . Press Add . Select Bridge from the list of network types, and press Enter . Optional: Enter a name for the NetworkManager profile to be created. On hosts with multiple profiles, a meaningful name makes it easier to identify the purpose of a profile. Enter the bridge device name to be created into the Device field. Add ports to the bridge to be created: Press Add next to the Slaves list. Select the type of the interface you want to add as a port to the bridge, for example, Ethernet . Optional: Enter a name for the NetworkManager profile to be created for this bridge port. Enter the port's device name into the Device field. Press OK to return to the window with the bridge settings. Figure 6.1. Adding an Ethernet device as port to a bridge Repeat these steps to add more ports to the bridge. Depending on your environment, configure the IP address settings in the IPv4 configuration and IPv6 configuration areas accordingly. For this, press the button next to these areas, and select: Disabled , if the bridge does not require an IP address. Automatic , if a DHCP server or stateless address autoconfiguration (SLAAC) dynamically assigns an IP address to the bridge. Manual , if the network requires static IP address settings. In this case, you must fill further fields: Press Show next to the protocol you want to configure to display additional fields. Press Add next to Addresses , and enter the IP address and the subnet mask in Classless Inter-Domain Routing (CIDR) format. If you do not specify a subnet mask, NetworkManager sets a /32 subnet mask for IPv4 addresses and /64 for IPv6 addresses. Enter the address of the default gateway. Press Add next to DNS servers , and enter the DNS server address. Press Add next to Search domains , and enter the DNS search domain. Figure 6.2. Example of a bridge connection without IP address settings Press OK to create and automatically activate the new connection. Press Back to return to the main menu. Select Quit , and press Enter to close the nmtui application. Verification Use the ip utility to display the link status of Ethernet devices that are ports of a specific bridge: Use the bridge utility to display the status of Ethernet devices that are ports of any bridge device: To display the status for a specific Ethernet device, use the bridge link show dev <ethernet_device_name> command. 6.4. Configuring a network bridge by using nm-connection-editor If you use Red Hat Enterprise Linux with a graphical interface, you can configure network bridges using the nm-connection-editor application. Note that nm-connection-editor can add only new ports to a bridge. To use an existing connection profile as a port, create the bridge using the nmcli utility as described in Configuring a network bridge by using nmcli . Prerequisites Two or more physical or virtual network devices are installed on the server. To use Ethernet devices as ports of the bridge, the physical or virtual Ethernet devices must be installed on the server. 
To use team, bond, or VLAN devices as ports of the bridge, ensure that these devices are not already configured. Procedure Open a terminal, and enter nm-connection-editor : Click the + button to add a new connection. Select the Bridge connection type, and click Create . On the Bridge tab: Optional: Set the name of the bridge interface in the Interface name field. Click the Add button to create a new connection profile for a network interface and add the profile as a port to the bridge. Select the connection type of the interface. For example, select Ethernet for a wired connection. Optional: Set a connection name for the port device. If you create a connection profile for an Ethernet device, open the Ethernet tab, and in the Device field, select the network interface you want to add as a port to the bridge. If you selected a different device type, configure it accordingly. Click Save . Repeat these steps for each interface you want to add to the bridge. Optional: Configure further bridge settings, such as Spanning Tree Protocol (STP) options. Configure the IP address settings on both the IPv4 Settings and IPv6 Settings tabs: If you plan to use this bridge device as a port of other devices, set the Method field to Disabled . To use DHCP, leave the Method field at its default, Automatic (DHCP) . To use static IP settings, set the Method field to Manual and fill the fields accordingly: Click Save . Close nm-connection-editor . Verification Use the ip utility to display the link status of Ethernet devices that are ports of a specific bridge. Use the bridge utility to display the status of Ethernet devices that are ports in any bridge device: To display the status for a specific Ethernet device, use the bridge link show dev ethernet_device_name command. Additional resources Configuring a network bond by using nm-connection-editor Configuring a network team by using nm-connection-editor Configuring VLAN tagging by using nm-connection-editor Configuring NetworkManager to avoid using a specific profile to provide a default gateway How to configure a bridge with VLAN information? (Red Hat Knowledgebase) 6.5. Configuring a network bridge by using nmstatectl Use the nmstatectl utility to configure a network bridge through the Nmstate API. The Nmstate API ensures that, after setting the configuration, the result matches the configuration file. If anything fails, nmstatectl automatically rolls back the changes to avoid leaving the system in an incorrect state. Depending on your environment, adjust the YAML file accordingly. For example, to use different devices than Ethernet adapters in the bridge, adapt the base-iface attribute and type attributes of the ports you use in the bridge. Prerequisites Two or more physical or virtual network devices are installed on the server. To use Ethernet devices as ports in the bridge, the physical or virtual Ethernet devices must be installed on the server. To use team, bond, or VLAN devices as ports in the bridge, set the interface name in the port list, and define the corresponding interfaces. The nmstate package is installed. 
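If the nmstate package is not installed yet, you can typically add it with the dnf package manager; the package name shown here is the one commonly used on RHEL 9:
dnf install nmstate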
Procedure Create a YAML file, for example ~/create-bridge.yml , with the following content: --- interfaces: - name: bridge0 type: linux-bridge state: up ipv4: enabled: true address: - ip: 192.0.2.1 prefix-length: 24 dhcp: false ipv6: enabled: true address: - ip: 2001:db8:1::1 prefix-length: 64 autoconf: false dhcp: false bridge: options: stp: enabled: true port: - name: enp1s0 - name: enp7s0 - name: enp1s0 type: ethernet state: up - name: enp7s0 type: ethernet state: up routes: config: - destination: 0.0.0.0/0 next-hop-address: 192.0.2.254 next-hop-interface: bridge0 - destination: ::/0 next-hop-address: 2001:db8:1::fffe next-hop-interface: bridge0 dns-resolver: config: search: - example.com server: - 192.0.2.200 - 2001:db8:1::ffbb These settings define a network bridge with the following settings: Network interfaces in the bridge: enp1s0 and enp7s0 Spanning Tree Protocol (STP): Enabled Static IPv4 address: 192.0.2.1 with the /24 subnet mask Static IPv6 address: 2001:db8:1::1 with the /64 subnet mask IPv4 default gateway: 192.0.2.254 IPv6 default gateway: 2001:db8:1::fffe IPv4 DNS server: 192.0.2.200 IPv6 DNS server: 2001:db8:1::ffbb DNS search domain: example.com Apply the settings to the system: Verification Display the status of the devices and connections: Display all settings of the connection profile: Display the connection settings in YAML format: Additional resources nmstatectl(8) man page on your system /usr/share/doc/nmstate/examples/ directory How to configure a bridge with VLAN information? (Red Hat Knowledgebase) 6.6. Configuring a network bridge by using the network RHEL system role You can connect multiple networks on layer 2 of the Open Systems Interconnection (OSI) model by creating a network bridge. To configure a bridge, create a connection profile in NetworkManager. By using Ansible and the network RHEL system role, you can automate this process and remotely configure connection profiles on the hosts defined in a playbook. You can use the network RHEL system role to configure a bridge and, if a connection profile for the bridge's parent device does not exist, the role can create it as well. Note If you want to assign IP addresses, gateways, and DNS settings to a bridge, configure them on the bridge and not on its ports. Prerequisites You have prepared the control node and the managed nodes. You are logged in to the control node as a user who can run playbooks on the managed nodes. The account you use to connect to the managed nodes has sudo permissions on them. Two or more physical or virtual network devices are installed on the server. Procedure Create a playbook file, for example ~/playbook.yml , with the following content: --- - name: Configure the network hosts: managed-node-01.example.com tasks: - name: Bridge connection profile with two Ethernet ports ansible.builtin.include_role: name: rhel-system-roles.network vars: network_connections: # Bridge profile - name: bridge0 type: bridge interface_name: bridge0 ip: dhcp4: yes auto6: yes state: up # Port profile for the 1st Ethernet device - name: bridge0-port1 interface_name: enp7s0 type: ethernet controller: bridge0 port_type: bridge state: up # Port profile for the 2nd Ethernet device - name: bridge0-port2 interface_name: enp8s0 type: ethernet controller: bridge0 port_type: bridge state: up The settings specified in the example playbook include the following: type: <profile_type> Sets the type of the profile to create. The example playbook creates three connection profiles: One for the bridge and two for the Ethernet devices. 
dhcp4: yes Enables automatic IPv4 address assignment from DHCP, PPP, or similar services. auto6: yes Enables IPv6 auto-configuration. By default, NetworkManager uses Router Advertisements. If the router announces the managed flag, NetworkManager requests an IPv6 address and prefix from a DHCPv6 server. For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.network/README.md file on the control node. Validate the playbook syntax: Note that this command only validates the syntax and does not protect against a wrong but valid configuration. Run the playbook: Verification Display the link status of Ethernet devices that are ports of a specific bridge: Display the status of Ethernet devices that are ports of any bridge device: Additional resources /usr/share/ansible/roles/rhel-system-roles.network/README.md file /usr/share/doc/rhel-system-roles/network/ directory
[ "nmcli connection add type bridge con-name bridge0 ifname bridge0", "nmcli device status DEVICE TYPE STATE CONNECTION enp7s0 ethernet disconnected -- enp8s0 ethernet disconnected -- bond0 bond connected bond0 bond1 bond connected bond1", "nmcli connection add type ethernet port-type bridge con-name bridge0-port1 ifname enp7s0 controller bridge0 nmcli connection add type ethernet port-type bridge con-name bridge0-port2 ifname enp8s0 controller bridge0", "nmcli connection modify bond0 controller bridge0 nmcli connection modify bond1 controller bridge0", "nmcli connection up bond0 nmcli connection up bond1", "nmcli connection modify bridge0 ipv4.method disabled", "nmcli connection modify bridge0 ipv4.addresses '192.0.2.1/24' ipv4.gateway '192.0.2.254' ipv4.dns '192.0.2.253' ipv4.dns-search 'example.com' ipv4.method manual", "nmcli connection modify bridge0 ipv6.method disabled", "nmcli connection modify bridge0 ipv6.addresses '2001:db8:1::1/64' ipv6.gateway '2001:db8:1::fffe' ipv6.dns '2001:db8:1::fffd' ipv6.dns-search 'example.com' ipv6.method manual", "nmcli connection modify bridge0 bridge.priority '16384'", "nmcli connection up bridge0", "nmcli device DEVICE TYPE STATE CONNECTION enp7s0 ethernet connected bridge0-port1 enp8s0 ethernet connected bridge0-port2", "nmcli connection modify bridge0 connection.autoconnect-ports 1", "nmcli connection up bridge0", "ip link show master bridge0 3: enp7s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel master bridge0 state UP mode DEFAULT group default qlen 1000 link/ether 52:54:00:62:61:0e brd ff:ff:ff:ff:ff:ff 4: enp8s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel master bridge0 state UP mode DEFAULT group default qlen 1000 link/ether 52:54:00:9e:f1:ce brd ff:ff:ff:ff:ff:ff", "bridge link show 3: enp7s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 master bridge0 state forwarding priority 32 cost 100 4: enp8s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 master bridge0 state listening priority 32 cost 100 5: enp9s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 master bridge1 state forwarding priority 32 cost 100 6: enp11s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 master bridge1 state blocking priority 32 cost 100", "nmcli device status DEVICE TYPE STATE CONNECTION enp7s0 ethernet unavailable -- enp8s0 ethernet unavailable --", "nmtui", "ip link show master bridge0 3: enp7s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel master bridge0 state UP mode DEFAULT group default qlen 1000 link/ether 52:54:00:62:61:0e brd ff:ff:ff:ff:ff:ff 4: enp8s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel master bridge0 state UP mode DEFAULT group default qlen 1000 link/ether 52:54:00:9e:f1:ce brd ff:ff:ff:ff:ff:ff", "bridge link show 3: enp7s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 master bridge0 state forwarding priority 32 cost 100 4: enp8s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 master bridge0 state listening priority 32 cost 100", "nm-connection-editor", "ip link show master bridge0 3: enp7s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel master bridge0 state UP mode DEFAULT group default qlen 1000 link/ether 52:54:00:62:61:0e brd ff:ff:ff:ff:ff:ff 4: enp8s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel master bridge0 state UP mode DEFAULT group default qlen 1000 link/ether 52:54:00:9e:f1:ce brd ff:ff:ff:ff:ff:ff", "bridge link show 3: enp7s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 master bridge0 state forwarding priority 32 cost 100 4: enp8s0: 
<BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 master bridge0 state listening priority 32 cost 100 5: enp9s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 master bridge1 state forwarding priority 32 cost 100 6: enp11s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 master bridge1 state blocking priority 32 cost 100", "--- interfaces: - name: bridge0 type: linux-bridge state: up ipv4: enabled: true address: - ip: 192.0.2.1 prefix-length: 24 dhcp: false ipv6: enabled: true address: - ip: 2001:db8:1::1 prefix-length: 64 autoconf: false dhcp: false bridge: options: stp: enabled: true port: - name: enp1s0 - name: enp7s0 - name: enp1s0 type: ethernet state: up - name: enp7s0 type: ethernet state: up routes: config: - destination: 0.0.0.0/0 next-hop-address: 192.0.2.254 next-hop-interface: bridge0 - destination: ::/0 next-hop-address: 2001:db8:1::fffe next-hop-interface: bridge0 dns-resolver: config: search: - example.com server: - 192.0.2.200 - 2001:db8:1::ffbb", "nmstatectl apply ~/create-bridge.yml", "nmcli device status DEVICE TYPE STATE CONNECTION bridge0 bridge connected bridge0", "nmcli connection show bridge0 connection.id: bridge0_ connection.uuid: e2cc9206-75a2-4622-89cf-1252926060a9 connection.stable-id: -- connection.type: bridge connection.interface-name: bridge0", "nmstatectl show bridge0", "--- - name: Configure the network hosts: managed-node-01.example.com tasks: - name: Bridge connection profile with two Ethernet ports ansible.builtin.include_role: name: rhel-system-roles.network vars: network_connections: # Bridge profile - name: bridge0 type: bridge interface_name: bridge0 ip: dhcp4: yes auto6: yes state: up # Port profile for the 1st Ethernet device - name: bridge0-port1 interface_name: enp7s0 type: ethernet controller: bridge0 port_type: bridge state: up # Port profile for the 2nd Ethernet device - name: bridge0-port2 interface_name: enp8s0 type: ethernet controller: bridge0 port_type: bridge state: up", "ansible-playbook --syntax-check ~/playbook.yml", "ansible-playbook ~/playbook.yml", "ansible managed-node-01.example.com -m command -a 'ip link show master bridge0' managed-node-01.example.com | CHANGED | rc=0 >> 3: enp7s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel master bridge0 state UP mode DEFAULT group default qlen 1000 link/ether 52:54:00:62:61:0e brd ff:ff:ff:ff:ff:ff 4: enp8s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel master bridge0 state UP mode DEFAULT group default qlen 1000 link/ether 52:54:00:9e:f1:ce brd ff:ff:ff:ff:ff:ff", "ansible managed-node-01.example.com -m command -a 'bridge link show' managed-node-01.example.com | CHANGED | rc=0 >> 3: enp7s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 master bridge0 state forwarding priority 32 cost 100 4: enp8s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 master bridge0 state listening priority 32 cost 100" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/configuring_and_managing_networking/configuring-a-network-bridge_configuring-and-managing-networking
Chapter 64. KafkaExporterTemplate schema reference
Chapter 64. KafkaExporterTemplate schema reference Used in: KafkaExporterSpec Property Property type Description deployment DeploymentTemplate Template for Kafka Exporter Deployment . pod PodTemplate Template for Kafka Exporter Pods . service ResourceTemplate The service property has been deprecated. The Kafka Exporter service has been removed. Template for Kafka Exporter Service . container ContainerTemplate Template for the Kafka Exporter container. serviceAccount ResourceTemplate Template for the Kafka Exporter service account.
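The following is a minimal sketch of how this template might be used. It assumes that the template is set under the kafkaExporter property of a Kafka custom resource, which is where KafkaExporterSpec is configured, and the label shown is purely illustrative; adapt the template properties to the customization you need.
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  # ...
  kafkaExporter:
    # ...
    template:
      pod:
        metadata:
          labels:
            app: my-kafka-exporter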
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/streams_for_apache_kafka_api_reference/type-kafkaexportertemplate-reference
8.115. mesa
8.115. mesa 8.115.1. RHBA-2013:1559 - mesa bug fix and enhancement update Updated mesa packages that fix several bugs and add two enhancements are now available for Red Hat Enterprise Linux 6. Mesa provides a 3D graphics API that is compatible with Open Graphics Library (OpenGL). It also provides hardware-accelerated drivers for many popular graphics chips. Bug Fixes BZ# 879637 On certain Intel GT2+ processors, segmentation faults could have been reported in the output of the dmesg command after running a Piglit quick-driver test. A patch has been applied to address this bug, and the unwanted behavior no longer occurs. BZ# 908547 Prior to this update, compressed texture size checks were performed in an incorrect manner. Consequently, checking the image size against the compression block size could cause certain applications to terminate unexpectedly. The underlying source code has been modified, and the texture error no longer causes the applications to crash in the described scenario. Enhancements BZ# 818345 Support for future Intel 2D and 3D graphics has been added to allow systems using future Intel processors to be certified through the Red Hat Hardware Certification program. BZ# 957792 With this update, the mesa-private-llvm library has been added. Users of mesa are advised to upgrade to these updated packages, which fix these bugs and add these enhancements.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.5_technical_notes/mesa
Chapter 12. Managing AMQ Streams
Chapter 12. Managing AMQ Streams This chapter covers tasks to maintain a deployment of AMQ Streams. 12.1. Working with custom resources You can use oc commands to retrieve information and perform other operations on AMQ Streams custom resources. Using oc with the status subresource of a custom resource allows you to get information about the resource. 12.1.1. Performing oc operations on custom resources Use oc commands, such as get , describe , edit , or delete , to perform operations on resource types. For example, oc get kafkatopics retrieves a list of all Kafka topics and oc get kafkas retrieves all deployed Kafka clusters. When referencing resource types, you can use both singular and plural names: oc get kafkas gets the same results as oc get kafka . You can also use the short name of the resource. Learning short names can save you time when managing AMQ Streams. The short name for Kafka is k , so you can also run oc get k to list all Kafka clusters. oc get k NAME DESIRED KAFKA REPLICAS DESIRED ZK REPLICAS my-cluster 3 3 Table 12.1. Long and short names for each AMQ Streams resource AMQ Streams resource Long name Short name Kafka kafka k Kafka Topic kafkatopic kt Kafka User kafkauser ku Kafka Connect kafkaconnect kc Kafka Connect S2I kafkaconnects2i kcs2i Kafka Connector kafkaconnector kctr Kafka Mirror Maker kafkamirrormaker kmm Kafka Mirror Maker 2 kafkamirrormaker2 kmm2 Kafka Bridge kafkabridge kb Kafka Rebalance kafkarebalance kr 12.1.1.1. Resource categories Categories of custom resources can also be used in oc commands. All AMQ Streams custom resources belong to the category strimzi , so you can use strimzi to get all the AMQ Streams resources with one command. For example, running oc get strimzi lists all AMQ Streams custom resources in a given namespace. oc get strimzi NAME DESIRED KAFKA REPLICAS DESIRED ZK REPLICAS kafka.kafka.strimzi.io/my-cluster 3 3 NAME PARTITIONS REPLICATION FACTOR kafkatopic.kafka.strimzi.io/kafka-apps 3 3 NAME AUTHENTICATION AUTHORIZATION kafkauser.kafka.strimzi.io/my-user tls simple The oc get strimzi -o name command returns all resource types and resource names. The -o name option fetches the output in the type/name format. oc get strimzi -o name kafka.kafka.strimzi.io/my-cluster kafkatopic.kafka.strimzi.io/kafka-apps kafkauser.kafka.strimzi.io/my-user You can combine this strimzi command with other commands. For example, you can pass it into an oc delete command to delete all resources in a single command. oc delete $(oc get strimzi -o name) kafka.kafka.strimzi.io "my-cluster" deleted kafkatopic.kafka.strimzi.io "kafka-apps" deleted kafkauser.kafka.strimzi.io "my-user" deleted Deleting all resources in a single operation might be useful, for example, when you are testing new AMQ Streams features. 12.1.1.2. Querying the status of sub-resources There are other values you can pass to the -o option. For example, by using -o yaml you get the output in YAML format. Using -o json returns it as JSON. You can see all the options in oc get --help . One of the most useful options is the JSONPath support , which allows you to pass JSONPath expressions to query the Kubernetes API. A JSONPath expression can extract or navigate specific parts of any resource. For example, you can use the JSONPath expression {.status.listeners[?(@.type=="tls")].bootstrapServers} to get the bootstrap address from the status of the Kafka custom resource and use it in your Kafka clients. Here, the command finds the bootstrapServers value of the tls listeners. 
oc get kafka my-cluster -o=jsonpath='{.status.listeners[?(@.type=="tls")].bootstrapServers}{"\n"}' my-cluster-kafka-bootstrap.myproject.svc:9093 By changing the type condition to @.type=="external" or @.type=="plain" you can also get the address of the other Kafka listeners. oc get kafka my-cluster -o=jsonpath='{.status.listeners[?(@.type=="external")].bootstrapServers}{"\n"}' 192.168.1.247:9094 You can use jsonpath to extract any other property or group of properties from any custom resource. 12.1.2. AMQ Streams custom resource status information Several resources have a status property, as described in the following table. Table 12.2. Custom resource status properties AMQ Streams resource Schema reference Publishes status information on... Kafka Section 13.2.76, " KafkaStatus schema reference" The Kafka cluster. KafkaConnect Section 13.2.102, " KafkaConnectStatus schema reference" The Kafka Connect cluster, if deployed. KafkaConnectS2I Section 13.2.106, " KafkaConnectS2IStatus schema reference" The Kafka Connect cluster with Source-to-Image support, if deployed. KafkaConnector Section 13.2.141, " KafkaConnectorStatus schema reference" KafkaConnector resources, if deployed. KafkaMirrorMaker Section 13.2.129, " KafkaMirrorMakerStatus schema reference" The Kafka MirrorMaker tool, if deployed. KafkaTopic Section 13.2.109, " KafkaTopicStatus schema reference" Kafka topics in your Kafka cluster. KafkaUser Section 13.2.122, " KafkaUserStatus schema reference" Kafka users in your Kafka cluster. KafkaBridge Section 13.2.138, " KafkaBridgeStatus schema reference" The AMQ Streams Kafka Bridge, if deployed. The status property of a resource provides information on the resource's: Current state , in the status.conditions property Last observed generation , in the status.observedGeneration property The status property also provides resource-specific information. For example: KafkaStatus provides information on listener addresses, and the id of the Kafka cluster. KafkaConnectStatus provides the REST API endpoint for Kafka Connect connectors. KafkaUserStatus provides the user name of the Kafka user and the Secret in which their credentials are stored. KafkaBridgeStatus provides the HTTP address at which external client applications can access the Bridge service. A resource's current state is useful for tracking progress related to the resource achieving its desired state , as defined by the spec property. The status conditions provide the time and reason the state of the resource changed and details of events preventing or delaying the operator from realizing the resource's desired state. The last observed generation is the generation of the resource that was last reconciled by the Cluster Operator. If the value of observedGeneration is different from the value of metadata.generation , the operator has not yet processed the latest update to the resource. If these values are the same, the status information reflects the most recent changes to the resource. AMQ Streams creates and maintains the status of custom resources, periodically evaluating the current state of the custom resource and updating its status accordingly. When performing an update on a custom resource using oc edit , for example, its status is not editable. Moreover, changing the status would not affect the configuration of the Kafka cluster. Here we see the status property specified for a Kafka custom resource. Kafka custom resource with status apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: spec: # ... 
status: conditions: 1 - lastTransitionTime: 2021-07-23T23:46:57+0000 status: "True" type: Ready 2 observedGeneration: 4 3 listeners: 4 - addresses: - host: my-cluster-kafka-bootstrap.myproject.svc port: 9092 type: plain - addresses: - host: my-cluster-kafka-bootstrap.myproject.svc port: 9093 certificates: - | -----BEGIN CERTIFICATE----- ... -----END CERTIFICATE----- type: tls - addresses: - host: 172.29.49.180 port: 9094 certificates: - | -----BEGIN CERTIFICATE----- ... -----END CERTIFICATE----- type: external clusterId: CLUSTER-ID 5 # ... 1 Status conditions describe criteria related to the status that cannot be deduced from the existing resource information, or are specific to the instance of a resource. 2 The Ready condition indicates whether the Cluster Operator currently considers the Kafka cluster able to handle traffic. 3 The observedGeneration indicates the generation of the Kafka custom resource that was last reconciled by the Cluster Operator. 4 The listeners describe the current Kafka bootstrap addresses by type. 5 The Kafka cluster id. Important The address in the custom resource status for external listeners with type nodeport is currently not supported. Note The Kafka bootstrap addresses listed in the status do not signify that those endpoints or the Kafka cluster is in a ready state. Accessing status information You can access status information for a resource from the command line. For more information, see Section 12.1.3, "Finding the status of a custom resource" . 12.1.3. Finding the status of a custom resource This procedure describes how to find the status of a custom resource. Prerequisites An OpenShift cluster. The Cluster Operator is running. Procedure Specify the custom resource and use the -o jsonpath option to apply a standard JSONPath expression to select the status property: oc get kafka <kafka_resource_name> -o jsonpath='{.status}' This expression returns all the status information for the specified custom resource. You can use dot notation, such as status.listeners or status.observedGeneration , to fine-tune the status information you wish to see. Additional resources Section 12.1.2, "AMQ Streams custom resource status information" For more information about using JSONPath, see JSONPath support . 12.2. Pausing reconciliation of custom resources Sometimes it is useful to pause the reconciliation of custom resources managed by AMQ Streams Operators, so that you can perform fixes or make updates. If reconciliations are paused, any changes made to custom resources are ignored by the Operators until the pause ends. If you want to pause reconciliation of a custom resource, set the strimzi.io/pause-reconciliation annotation to true in its configuration. This instructs the appropriate Operator to pause reconciliation of the custom resource. For example, you can apply the annotation to the KafkaConnect resource so that reconciliation by the Cluster Operator is paused. You can also create a custom resource with the pause annotation enabled. The custom resource is created, but it is ignored. Important It is not currently possible to pause reconciliation of KafkaTopic resources. Prerequisites The AMQ Streams Operator that manages the custom resource is running. 
Procedure Annotate the custom resource in OpenShift, setting pause-reconciliation to true : oc annotate KIND-OF-CUSTOM-RESOURCE NAME-OF-CUSTOM-RESOURCE strimzi.io/pause-reconciliation="true" For example, for the KafkaConnect custom resource: oc annotate KafkaConnect my-connect strimzi.io/pause-reconciliation="true" Check that the status conditions of the custom resource show a change to ReconciliationPaused : oc describe KIND-OF-CUSTOM-RESOURCE NAME-OF-CUSTOM-RESOURCE The type condition changes to ReconciliationPaused at the lastTransitionTime . Example custom resource with a paused reconciliation condition type apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: annotations: strimzi.io/pause-reconciliation: "true" strimzi.io/use-connector-resources: "true" creationTimestamp: 2021-03-12T10:47:11Z #... spec: # ... status: conditions: - lastTransitionTime: 2021-03-12T10:47:41.689249Z status: "True" type: ReconciliationPaused Resuming from pause To resume reconciliation, you can set the annotation to false , or remove the annotation. Additional resources Customizing OpenShift resources Finding the status of a custom resource 12.3. Manually starting rolling updates of Kafka and ZooKeeper clusters AMQ Streams supports the use of annotations on StatefulSet and Pod resources to manually trigger a rolling update of Kafka and ZooKeeper clusters through the Cluster Operator. Rolling updates restart the pods of the resource with new ones. Manually performing a rolling update on a specific pod or set of pods from the same StatefulSet is usually only required in exceptional circumstances. However, rather than deleting the pods directly, if you perform the rolling update through the Cluster Operator you ensure that: The manual deletion of the pod does not conflict with simultaneous Cluster Operator operations, such as deleting other pods in parallel. The Cluster Operator logic handles the Kafka configuration specifications, such as the number of in-sync replicas. 12.3.1. Prerequisites To perform a manual rolling update, you need a running Cluster Operator and Kafka cluster. See the Deploying and Upgrading AMQ Streams on OpenShift guide for instructions on running a: Cluster Operator Kafka cluster 12.3.2. Performing a rolling update using a StatefulSet annotation This procedure describes how to manually trigger a rolling update of an existing Kafka cluster or ZooKeeper cluster using an OpenShift StatefulSet annotation. Procedure Find the name of the StatefulSet that controls the Kafka or ZooKeeper pods you want to manually update. For example, if your Kafka cluster is named my-cluster , the corresponding StatefulSet names are my-cluster-kafka and my-cluster-zookeeper . Annotate the StatefulSet resource in OpenShift. Use oc annotate : oc annotate statefulset cluster-name -kafka strimzi.io/manual-rolling-update=true oc annotate statefulset cluster-name -zookeeper strimzi.io/manual-rolling-update=true Wait for the reconciliation to occur (every two minutes by default). A rolling update of all pods within the annotated StatefulSet is triggered, as long as the annotation was detected by the reconciliation process. When the rolling update of all the pods is complete, the annotation is removed from the StatefulSet . 12.3.3. Performing a rolling update using a Pod annotation This procedure describes how to manually trigger a rolling update of an existing Kafka cluster or ZooKeeper cluster using an OpenShift Pod annotation. 
When multiple pods from the same StatefulSet are annotated, consecutive rolling updates are performed within the same reconciliation run. Procedure Find the name of the Kafka or ZooKeeper Pod you want to manually update. For example, if your Kafka cluster is named my-cluster , the corresponding Pod names are my-cluster-kafka-index and my-cluster-zookeeper-index . The index starts at zero and ends at the total number of replicas. Annotate the Pod resource in OpenShift. Use oc annotate : oc annotate pod cluster-name -kafka- index strimzi.io/manual-rolling-update=true oc annotate pod cluster-name -zookeeper- index strimzi.io/manual-rolling-update=true Wait for the reconciliation to occur (every two minutes by default). A rolling update of the annotated Pod is triggered, as long as the annotation was detected by the reconciliation process. When the rolling update of a pod is complete, the annotation is removed from the Pod . 12.4. Discovering services using labels and annotations Service discovery makes it easier for client applications running in the same OpenShift cluster as AMQ Streams to interact with a Kafka cluster. A service discovery label and annotation is generated for services used to access the Kafka cluster: Internal Kafka bootstrap service HTTP Bridge service The label helps to make the service discoverable, and the annotation provides connection details that a client application can use to make the connection. The service discovery label, strimzi.io/discovery , is set as true for the Service resources. The service discovery annotation has the same key, providing connection details in JSON format for each service. Example internal Kafka bootstrap service apiVersion: v1 kind: Service metadata: annotations: strimzi.io/discovery: |- [ { "port" : 9092, "tls" : false, "protocol" : "kafka", "auth" : "scram-sha-512" }, { "port" : 9093, "tls" : true, "protocol" : "kafka", "auth" : "tls" } ] labels: strimzi.io/cluster: my-cluster strimzi.io/discovery: "true" strimzi.io/kind: Kafka strimzi.io/name: my-cluster-kafka-bootstrap name: my-cluster-kafka-bootstrap spec: #... Example HTTP Bridge service apiVersion: v1 kind: Service metadata: annotations: strimzi.io/discovery: |- [ { "port" : 8080, "tls" : false, "auth" : "none", "protocol" : "http" } ] labels: strimzi.io/cluster: my-bridge strimzi.io/discovery: "true" strimzi.io/kind: KafkaBridge strimzi.io/name: my-bridge-bridge-service 12.4.1. Returning connection details on services You can find the services by specifying the discovery label when fetching services from the command line or a corresponding API call. oc get service -l strimzi.io/discovery=true The connection details are returned when retrieving the service discovery label. 12.5. Recovering a cluster from persistent volumes You can recover a Kafka cluster from persistent volumes (PVs) if they are still present. You might want to do this, for example, after: A namespace was deleted unintentionally A whole OpenShift cluster is lost, but the PVs remain in the infrastructure 12.5.1. Recovery from namespace deletion Recovery from namespace deletion is possible because of the relationship between persistent volumes and namespaces. A PersistentVolume (PV) is a storage resource that lives outside of a namespace. A PV is mounted into a Kafka pod using a PersistentVolumeClaim (PVC), which lives inside a namespace. The reclaim policy for a PV tells a cluster how to act when a namespace is deleted. 
If the reclaim policy is set as: Delete (default), PVs are deleted when PVCs are deleted within a namespace Retain , PVs are not deleted when a namespace is deleted To ensure that you can recover from a PV if a namespace is deleted unintentionally, the policy must be reset from Delete to Retain in the PV specification using the persistentVolumeReclaimPolicy property: apiVersion: v1 kind: PersistentVolume # ... spec: # ... persistentVolumeReclaimPolicy: Retain Alternatively, PVs can inherit the reclaim policy of an associated storage class. Storage classes are used for dynamic volume allocation. By configuring the reclaimPolicy property for the storage class, PVs that use the storage class are created with the appropriate reclaim policy. The storage class is configured for the PV using the storageClassName property. apiVersion: v1 kind: StorageClass metadata: name: gp2-retain parameters: # ... # ... reclaimPolicy: Retain apiVersion: v1 kind: PersistentVolume # ... spec: # ... storageClassName: gp2-retain Note If you are using Retain as the reclaim policy, but you want to delete an entire cluster, you need to delete the PVs manually. Otherwise they will not be deleted, and may cause unnecessary expenditure on resources. 12.5.2. Recovery from loss of an OpenShift cluster When a cluster is lost, you can use the data from disks/volumes to recover the cluster if they were preserved within the infrastructure. The recovery procedure is the same as with namespace deletion, assuming PVs can be recovered and they were created manually. 12.5.3. Recovering a deleted cluster from persistent volumes This procedure describes how to recover a deleted cluster from persistent volumes (PVs). In this situation, the Topic Operator identifies that topics exist in Kafka, but the KafkaTopic resources do not exist. When you get to the step to recreate your cluster, you have two options: Use Option 1 when you can recover all KafkaTopic resources. The KafkaTopic resources must therefore be recovered before the cluster is started so that the corresponding topics are not deleted by the Topic Operator. Use Option 2 when you are unable to recover all KafkaTopic resources. In this case, you deploy your cluster without the Topic Operator, delete the Topic Operator topic store metadata, and then redeploy the Kafka cluster with the Topic Operator so it can recreate the KafkaTopic resources from the corresponding topics. Note If the Topic Operator is not deployed, you only need to recover the PersistentVolumeClaim (PVC) resources. Before you begin In this procedure, it is essential that PVs are mounted into the correct PVC to avoid data corruption. A volumeName is specified for the PVC and this must match the name of the PV. For more information, see: Persistent Volume Claim naming JBOD and Persistent Volume Claims Note The procedure does not include recovery of KafkaUser resources, which must be recreated manually. If passwords and certificates need to be retained, secrets must be recreated before creating the KafkaUser resources. Procedure Check information on the PVs in the cluster: oc get pv Information is presented for PVs with data. Example output showing columns important to this procedure: NAME RECLAIMPOLICY CLAIM pvc-5e9c5c7f-3317-11ea-a650-06e1eadd9a4c ... Retain ... myproject/data-my-cluster-zookeeper-1 pvc-5e9cc72d-3317-11ea-97b0-0aef8816c7ea ... Retain ... myproject/data-my-cluster-zookeeper-0 pvc-5ead43d1-3317-11ea-97b0-0aef8816c7ea ... Retain ... 
myproject/data-my-cluster-zookeeper-2 pvc-7e1f67f9-3317-11ea-a650-06e1eadd9a4c ... Retain ... myproject/data-0-my-cluster-kafka-0 pvc-7e21042e-3317-11ea-9786-02deaf9aa87e ... Retain ... myproject/data-0-my-cluster-kafka-1 pvc-7e226978-3317-11ea-97b0-0aef8816c7ea ... Retain ... myproject/data-0-my-cluster-kafka-2 NAME shows the name of each PV. RECLAIM POLICY shows that PVs are retained . CLAIM shows the link to the original PVCs. Recreate the original namespace: oc create namespace myproject Recreate the original PVC resource specifications, linking the PVCs to the appropriate PV: For example: apiVersion: v1 kind: PersistentVolumeClaim metadata: name: data-0-my-cluster-kafka-0 spec: accessModes: - ReadWriteOnce resources: requests: storage: 100Gi storageClassName: gp2-retain volumeMode: Filesystem volumeName: pvc-7e1f67f9-3317-11ea-a650-06e1eadd9a4c Edit the PV specifications to delete the claimRef properties that bound the original PVC. For example: apiVersion: v1 kind: PersistentVolume metadata: annotations: kubernetes.io/createdby: aws-ebs-dynamic-provisioner pv.kubernetes.io/bound-by-controller: "yes" pv.kubernetes.io/provisioned-by: kubernetes.io/aws-ebs creationTimestamp: "<date>" finalizers: - kubernetes.io/pv-protection labels: failure-domain.beta.kubernetes.io/region: eu-west-1 failure-domain.beta.kubernetes.io/zone: eu-west-1c name: pvc-7e226978-3317-11ea-97b0-0aef8816c7ea resourceVersion: "39431" selfLink: /api/v1/persistentvolumes/pvc-7e226978-3317-11ea-97b0-0aef8816c7ea uid: 7efe6b0d-3317-11ea-a650-06e1eadd9a4c spec: accessModes: - ReadWriteOnce awsElasticBlockStore: fsType: xfs volumeID: aws://eu-west-1c/vol-09db3141656d1c258 capacity: storage: 100Gi claimRef: apiVersion: v1 kind: PersistentVolumeClaim name: data-0-my-cluster-kafka-2 namespace: myproject resourceVersion: "39113" uid: 54be1c60-3319-11ea-97b0-0aef8816c7ea nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: failure-domain.beta.kubernetes.io/zone operator: In values: - eu-west-1c - key: failure-domain.beta.kubernetes.io/region operator: In values: - eu-west-1 persistentVolumeReclaimPolicy: Retain storageClassName: gp2-retain volumeMode: Filesystem In the example, the following properties are deleted: claimRef: apiVersion: v1 kind: PersistentVolumeClaim name: data-0-my-cluster-kafka-2 namespace: myproject resourceVersion: "39113" uid: 54be1c60-3319-11ea-97b0-0aef8816c7ea Deploy the Cluster Operator. oc create -f install/cluster-operator -n my-project Recreate your cluster. Follow the steps depending on whether or not you have all the KafkaTopic resources needed to recreate your cluster. Option 1 : If you have all the KafkaTopic resources that existed before you lost your cluster, including internal topics such as committed offsets from __consumer_offsets : Recreate all KafkaTopic resources. It is essential that you recreate the resources before deploying the cluster, or the Topic Operator will delete the topics. Deploy the Kafka cluster. For example: oc apply -f kafka.yaml Option 2 : If you do not have all the KafkaTopic resources that existed before you lost your cluster: Deploy the Kafka cluster, as with the first option, but without the Topic Operator by removing the topicOperator property from the Kafka resource before deploying. If you include the Topic Operator in the deployment, the Topic Operator will delete all the topics. 
Delete the internal topic store topics from the Kafka cluster: oc run kafka-admin -ti --image=registry.redhat.io/amq7/amq-streams-kafka-27-rhel7:1.7.0 --rm=true --restart=Never -- ./bin/kafka-topics.sh --bootstrap-server localhost:9092 --topic __strimzi-topic-operator-kstreams-topic-store-changelog --delete && ./bin/kafka-topics.sh --bootstrap-server localhost:9092 --topic __strimzi_store_topic --delete The command must correspond to the type of listener and authentication used to access the Kafka cluster. Enable the Topic Operator by redeploying the Kafka cluster with the topicOperator property to recreate the KafkaTopic resources. For example: apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: #... entityOperator: topicOperator: {} 1 #... 1 Here we show the default configuration, which has no additional properties. You specify the required configuration using the properties described in Section 13.2.67, " EntityTopicOperatorSpec schema reference" . Verify the recovery by listing the KafkaTopic resources: oc get KafkaTopic 12.6. Tuning client configuration Use configuration properties to optimize the performance of Kafka producers and consumers. A minimum set of configuration properties is required, but you can add or adjust properties to change how producers and consumers interact with Kafka. For example, for producers you can tune latency and throughput of messages so that clients can respond to data in real time. Or you can change the configuration to provide stronger message durability guarantees. You might start by analyzing client metrics to gauge where to make your initial configurations, then make incremental changes and further comparisons until you have the configuration you need. 12.6.1. Kafka producer configuration tuning Use a basic producer configuration with optional properties that are tailored to specific use cases. Adjusting your configuration to maximize throughput might increase latency or vice versa. You will need to experiment and tune your producer configuration to get the balance you need. 12.6.1.1. Basic producer configuration Connection and serializer properties are required for every producer. Generally, it is good practice to add a client id for tracking, and use compression on the producer to reduce batch sizes in requests. In a basic producer configuration: The order of messages in a partition is not guaranteed. The acknowledgment of messages reaching the broker does not guarantee durability. # ... bootstrap.servers=localhost:9092 1 key.serializer=org.apache.kafka.common.serialization.StringSerializer 2 value.serializer=org.apache.kafka.common.serialization.StringSerializer 3 client.id=my-client 4 compression.type=gzip 5 # ... 1 (Required) Tells the producer to connect to a Kafka cluster using a host:port bootstrap server address for a Kafka broker. The producer uses the address to discover and connect to all brokers in the cluster. Use a comma-separated list to specify two or three addresses in case a server is down, but it's not necessary to provide a list of all the brokers in the cluster. 2 (Required) Serializer to transform the key of each message to bytes prior to them being sent to a broker. 3 (Required) Serializer to transform the value of each message to bytes prior to them being sent to a broker. 4 (Optional) The logical name for the client, which is used in logs and metrics to identify the source of a request. 
5 (Optional) The codec for compressing messages, which are sent and might be stored in compressed format and then decompressed when reaching a consumer. Compression is useful for improving throughput and reducing the load on storage, but might not be suitable for low latency applications where the cost of compression or decompression could be prohibitive. 12.6.1.2. Data durability You can apply greater data durability, to minimize the likelihood that messages are lost, using message delivery acknowledgments: acks=all 1 1 Specifying acks=all forces a partition leader to replicate messages to a certain number of followers before acknowledging that the message request was successfully received. Because of the additional checks, acks=all increases the latency between the producer sending a message and receiving acknowledgment. The number of brokers which need to have appended the messages to their logs before the acknowledgment is sent to the producer is determined by the topic's min.insync.replicas configuration. A typical starting point is to have a topic replication factor of 3, with two in-sync replicas on other brokers. In this configuration, the producer can continue unaffected if a single broker is unavailable. If a second broker becomes unavailable, the producer won't receive acknowledgments and won't be able to produce more messages. Topic configuration to support acks=all min.insync.replicas=2 1 1 Use 2 in-sync replicas. The default is 1 . Note If the system fails, there is a risk of unsent data in the buffer being lost. 12.6.1.3. Ordered delivery Idempotent producers avoid duplicates because messages are delivered exactly once. IDs and sequence numbers are assigned to messages to ensure the order of delivery, even in the event of failure. If you are using acks=all for data consistency, enabling idempotency makes sense for ordered delivery. Ordered delivery with idempotency enable.idempotence=true 1 max.in.flight.requests.per.connection=5 2 acks=all 3 retries=2147483647 4 1 Set to true to enable the idempotent producer. 2 With idempotent delivery the number of in-flight requests may be greater than 1 while still providing the message ordering guarantee. The default is 5 in-flight requests. 3 Set acks to all . 4 Set the number of attempts to resend a failed message request. If you are not using acks=all and idempotency because of the performance cost, set the number of in-flight (unacknowledged) requests to 1 to preserve ordering. Otherwise, a situation is possible where Message-A fails only to succeed after Message-B was already written to the broker. Ordered delivery without idempotency enable.idempotence=false 1 max.in.flight.requests.per.connection=1 2 retries=2147483647 1 Set to false to disable the idempotent producer. 2 Set the number of in-flight requests to exactly 1 . 12.6.1.4. Reliability guarantees Idempotence is useful for exactly-once writes to a single partition. Transactions, when used with idempotence, allow exactly-once writes across multiple partitions. Transactions guarantee that messages using the same transactional ID are produced once, and either all are successfully written to the respective logs or none of them are. # ... enable.idempotence=true max.in.flight.requests.per.connection=5 acks=all retries=2147483647 transactional.id= UNIQUE-ID 1 transaction.timeout.ms=900000 2 # ... 1 Specify a unique transactional ID. 2 Set the maximum allowed time for transactions in milliseconds before a timeout error is returned. The default is 900000 or 15 minutes. The choice of transactional.id is important in order to maintain the transactional guarantee. Each transactional id should be used for a unique set of topic partitions.
For example, this can be achieved using an external mapping of topic partition names to transactional ids, or by computing the transactional id from the topic partition names using a function that avoids collisions. 12.6.1.5. Optimizing throughput and latency Usually, the requirement of a system is to satisfy a particular throughput target for a proportion of messages within a given latency. For example, targeting 500,000 messages per second with 95% of messages being acknowledged within 2 seconds. It's likely that the messaging semantics (message ordering and durability) of your producer are defined by the requirements for your application. For instance, it's possible that you don't have the option of using acks=0 or acks=1 without breaking some important property or guarantee provided by your application. Broker restarts have a significant impact on high percentile statistics. For example, over a long period the 99th percentile latency is dominated by behavior around broker restarts. This is worth considering when designing benchmarks or comparing performance numbers from benchmarking with performance numbers seen in production. Depending on your objective, Kafka offers a number of configuration parameters and techniques for tuning producer performance for throughput and latency. Message batching ( linger.ms and batch.size ) Message batching delays sending messages in the hope that more messages destined for the same broker will be sent, allowing them to be batched into a single produce request. Batching is a compromise: higher latency in return for higher throughput. Time-based batching is configured using linger.ms , and size-based batching is configured using batch.size . Compression ( compression.type ) Message compression adds latency in the producer (CPU time spent compressing the messages), but makes requests (and potentially disk writes) smaller, which can increase throughput. Whether compression is worthwhile, and the best compression to use, will depend on the messages being sent. Compression happens on the thread which calls KafkaProducer.send() , so if the latency of this method matters for your application you should consider using more threads. Pipelining ( max.in.flight.requests.per.connection ) Pipelining means sending more requests before the response to a request has been received. In general, more pipelining means better throughput, up to a threshold at which other effects, such as worse batching, start to counteract the effect on throughput. Lowering latency When your application calls KafkaProducer.send() the messages are: Processed by any interceptors Serialized Assigned to a partition Compressed Added to a batch of messages in a per-partition queue At which point the send() method returns. So the time send() is blocked is determined by: The time spent in the interceptors, serializers and partitioner The compression algorithm used The time spent waiting for a buffer to use for compression Batches will remain in the queue until one of the following occurs: The batch is full (according to batch.size ) The delay introduced by linger.ms has passed The sender is about to send message batches for other partitions to the same broker, and it is possible to add this batch too The producer is being flushed or closed Look at the configuration for batching and buffering to mitigate the impact of send() blocking on latency: linger.ms=100 1 batch.size=16384 2 buffer.memory=33554432 3 1 The linger property adds a delay in milliseconds so that larger batches of messages are accumulated and sent in a request. The default is 0.
2 If a maximum batch.size in bytes is used, a request is sent when the maximum is reached, or messages have been queued for longer than linger.ms (whichever comes sooner). Adding the delay allows batches to accumulate messages up to the batch size. 3 The buffer size must be at least as big as the batch size, and be able to accommodate buffering, compression and in-flight requests. Increasing throughput Improve throughput of your message requests by adjusting the maximum time to wait before a message is delivered and completes a send request. You can also direct messages to a specified partition by writing a custom partitioner to replace the default. 1 The maximum time in milliseconds to wait for a complete send request. You can set the value to MAX_LONG to delegate to Kafka an indefinite number of retries. The default is 120000 or 2 minutes. 2 Specify the class name of the custom partitioner. 12.6.2. Kafka consumer configuration tuning Use a basic consumer configuration with optional properties that are tailored to specific use cases. When tuning your consumers your primary concern will be ensuring that they cope efficiently with the amount of data ingested. As with the producer tuning, be prepared to make incremental changes until the consumers operate as expected. 12.6.2.1. Basic consumer configuration Connection and deserializer properties are required for every consumer. Generally, it is good practice to add a client id for tracking. In a consumer configuration, irrespective of any subsequent configuration: The consumer fetches from a given offset and consumes the messages in order, unless the offset is changed to skip or re-read messages. The broker does not know if the consumer processed the responses, even when committing offsets to Kafka, because the offsets might be sent to a different broker in the cluster. # ... bootstrap.servers=localhost:9092 1 key.deserializer=org.apache.kafka.common.serialization.StringDeserializer 2 value.deserializer=org.apache.kafka.common.serialization.StringDeserializer 3 client.id=my-client 4 group.id=my-group-id 5 # ... 1 (Required) Tells the consumer to connect to a Kafka cluster using a host:port bootstrap server address for a Kafka broker. The consumer uses the address to discover and connect to all brokers in the cluster. Use a comma-separated list to specify two or three addresses in case a server is down, but it is not necessary to provide a list of all the brokers in the cluster. If you are using a loadbalancer service to expose the Kafka cluster, you only need the address for the service because the availability is handled by the loadbalancer. 2 (Required) Deserializer to transform the bytes fetched from the Kafka broker into message keys. 3 (Required) Deserializer to transform the bytes fetched from the Kafka broker into message values. 4 (Optional) The logical name for the client, which is used in logs and metrics to identify the source of a request. The id can also be used to throttle consumers based on processing time quotas. 5 (Conditional) A group id is required for a consumer to be able to join a consumer group. Consumer groups are used to share a typically large data stream generated by multiple producers from a given topic. Consumers are grouped using a group.id , allowing messages to be spread across the members. 12.6.2.2. Scaling data consumption using consumer groups Consumer groups share a typically large data stream generated by one or multiple producers from a given topic. 
Consumers with the same group.id property are in the same group. One of the consumers in the group is elected leader and decides how the partitions are assigned to the consumers in the group. Each partition can only be assigned to a single consumer. If you do not already have as many consumers as partitions, you can scale data consumption by adding more consumer instances with the same group.id . Adding more consumers to a group than there are partitions will not help throughput, but it does mean that there are consumers on standby should one stop functioning. If you can meet throughput goals with fewer consumers, you save on resources. Consumers within the same consumer group send offset commits and heartbeats to the same broker. So the greater the number of consumers in the group, the higher the request load on the broker. 1 Add a consumer to a consumer group using a group id. 12.6.2.3. Message ordering guarantees Kafka brokers receive fetch requests from consumers that ask the broker to send messages from a list of topics, partitions and offset positions. A consumer observes messages in a single partition in the same order that they were committed to the broker, which means that Kafka only provides ordering guarantees for messages in a single partition. Conversely, if a consumer is consuming messages from multiple partitions, the order of messages in different partitions as observed by the consumer does not necessarily reflect the order in which they were sent. If you want a strict ordering of messages from one topic, use one partition per consumer. 12.6.2.4. Optimizing throughput and latency Control the number of messages returned when your client application calls KafkaConsumer.poll() . Use the fetch.max.wait.ms and fetch.min.bytes properties to increase the minimum amount of data fetched by the consumer from the Kafka broker. Time-based batching is configured using fetch.max.wait.ms , and size-based batching is configured using fetch.min.bytes . If CPU utilization in the consumer or broker is high, it might be because there are too many requests from the consumer. You can adjust fetch.max.wait.ms and fetch.min.bytes properties higher so that there are fewer requests and messages are delivered in bigger batches. By adjusting higher, throughput is improved with some cost to latency. You can also adjust higher if the amount of data being produced is low. For example, if you set fetch.max.wait.ms to 500ms and fetch.min.bytes to 16384 bytes, when Kafka receives a fetch request from the consumer it will respond when the first of either threshold is reached. Conversely, you can adjust the fetch.max.wait.ms and fetch.min.bytes properties lower to improve end-to-end latency. 1 The maximum time in milliseconds the broker will wait before completing fetch requests. The default is 500 milliseconds. 2 If a minimum batch size in bytes is used, a request is sent when the minimum is reached, or messages have been queued for longer than fetch.max.wait.ms (whichever comes sooner). Adding the delay allows batches to accumulate messages up to the batch size. Lowering latency by increasing the fetch request size Use the fetch.max.bytes and max.partition.fetch.bytes properties to increase the maximum amount of data fetched by the consumer from the Kafka broker. The fetch.max.bytes property sets a maximum limit in bytes on the amount of data fetched from the broker at one time. 
The max.partition.fetch.bytes sets a maximum limit in bytes on how much data is returned for each partition, which must always be larger than the number of bytes set in the broker or topic configuration for max.message.bytes . The maximum amount of memory a client can consume is calculated approximately as: NUMBER-OF-BROKERS * fetch.max.bytes and NUMBER-OF-PARTITIONS * max.partition.fetch.bytes If memory usage can accommodate it, you can increase the values of these two properties. By allowing more data in each request, latency is improved as there are fewer fetch requests. 1 The maximum amount of data in bytes returned for a fetch request. 2 The maximum amount of data in bytes returned for each partition. 12.6.2.5. Avoiding data loss or duplication when committing offsets The Kafka auto-commit mechanism allows a consumer to commit the offsets of messages automatically. If enabled, the consumer will commit offsets received from polling the broker at 5000ms intervals. The auto-commit mechanism is convenient, but it introduces a risk of data loss and duplication. If a consumer has fetched and transformed a number of messages, but the system crashes with processed messages in the consumer buffer when performing an auto-commit, that data is lost. If the system crashes after processing the messages, but before performing the auto-commit, the data is duplicated on another consumer instance after rebalancing. Auto-committing can avoid data loss only when all messages are processed before the poll to the broker, or the consumer closes. To minimize the likelihood of data loss or duplication, you can set enable.auto.commit to false and develop your client application to have more control over committing offsets. Or you can use auto.commit.interval.ms to decrease the intervals between commits. 1 Auto commit is set to false to provide more control over committing offsets. By setting to enable.auto.commit to false , you can commit offsets after all processing has been performed and the message has been consumed. For example, you can set up your application to call the Kafka commitSync and commitAsync commit APIs. The commitSync API commits the offsets in a message batch returned from polling. You call the API when you are finished processing all the messages in the batch. If you use the commitSync API, the application will not poll for new messages until the last offset in the batch is committed. If this negatively affects throughput, you can commit less frequently, or you can use the commitAsync API. The commitAsync API does not wait for the broker to respond to a commit request, but risks creating more duplicates when rebalancing. A common approach is to combine both commit APIs in an application, with the commitSync API used just before shutting the consumer down or rebalancing to make sure the final commit is successful. 12.6.2.5.1. Controlling transactional messages Consider using transactional ids and enabling idempotence ( enable.idempotence=true ) on the producer side to guarantee exactly-once delivery. On the consumer side, you can then use the isolation.level property to control how transactional messages are read by the consumer. The isolation.level property has two valid values: read_committed read_uncommitted (default) Use read_committed to ensure that only transactional messages that have been committed are read by the consumer. 
However, this will cause an increase in end-to-end latency, because the consumer will not be able to return a message until the brokers have written the transaction markers that record the result of the transaction ( committed or aborted ). 1 Set to read_committed so that only committed messages are read by the consumer. 12.6.2.6. Recovering from failure to avoid data loss Use the session.timeout.ms and heartbeat.interval.ms properties to configure the time taken to check and recover from consumer failure within a consumer group. The session.timeout.ms property specifies the maximum amount of time in milliseconds a consumer within a consumer group can be out of contact with a broker before being considered inactive and a rebalancing is triggered between the active consumers in the group. When the group rebalances, the partitions are reassigned to the members of the group. The heartbeat.interval.ms property specifies the interval in milliseconds between heartbeat checks to the consumer group coordinator to indicate that the consumer is active and connected. The heartbeat interval must be lower, usually by a third, than the session timeout interval. If you set the session.timeout.ms property lower, failing consumers are detected earlier, and rebalancing can take place quicker. However, take care not to set the timeout so low that the broker fails to receive a heartbeat in time and triggers an unnecessary rebalance. Decreasing the heartbeat interval reduces the chance of accidental rebalancing, but more frequent heartbeats increases the overhead on broker resources. 12.6.2.7. Managing offset policy Use the auto.offset.reset property to control how a consumer behaves when no offsets have been committed, or a committed offset is no longer valid or deleted. Suppose you deploy a consumer application for the first time, and it reads messages from an existing topic. Because this is the first time the group.id is used, the __consumer_offsets topic does not contain any offset information for this application. The new application can start processing all existing messages from the start of the log or only new messages. The default reset value is latest , which starts at the end of the partition, and consequently means some messages are missed. To avoid data loss, but increase the amount of processing, set auto.offset.reset to earliest to start at the beginning of the partition. Also consider using the earliest option to avoid messages being lost when the offsets retention period ( offsets.retention.minutes ) configured for a broker has ended. If a consumer group or standalone consumer is inactive and commits no offsets during the retention period, previously committed offsets are deleted from __consumer_offsets . 1 Adjust the heartbeat interval lower according to anticipated rebalances. 2 If no heartbeats are received by the Kafka broker before the timeout duration expires, the consumer is removed from the consumer group and a rebalance is initiated. If the broker configuration has a group.min.session.timeout.ms and group.max.session.timeout.ms , the session timeout value must be within that range. 3 Set to earliest to return to the start of a partition and avoid data loss if offsets were not committed. If the amount of data returned in a single fetch request is large, a timeout might occur before the consumer has processed it. In this case, you can lower max.partition.fetch.bytes or increase session.timeout.ms . 12.6.2.8. 
Minimizing the impact of rebalances The rebalancing of a partition between active consumers in a group is the time it takes for: Consumers to commit their offsets The new consumer group to be formed The group leader to assign partitions to group members The consumers in the group to receive their assignments and start fetching Clearly, the process increases the downtime of a service, particularly when it happens repeatedly during a rolling restart of a consumer group cluster. In this situation, you can use the concept of static membership to reduce the number of rebalances. Rebalancing assigns topic partitions evenly among consumer group members. Static membership uses persistence so that a consumer instance is recognized during a restart after a session timeout. The consumer group coordinator can identify a new consumer instance using a unique id that is specified using the group.instance.id property. During a restart, the consumer is assigned a new member id, but as a static member it continues with the same instance id, and the same assignment of topic partitions is made. If the consumer application does not make a call to poll at least every max.poll.interval.ms milliseconds, the consumer is considered to be failed, causing a rebalance. If the application cannot process all the records returned from poll in time, you can avoid a rebalance by using the max.poll.interval.ms property to specify the interval in milliseconds between polls for new messages from a consumer. Or you can use the max.poll.records property to set a maximum limit on the number of records returned from the consumer buffer, allowing your application to process fewer records within the max.poll.interval.ms limit. # ... group.instance.id= UNIQUE-ID 1 max.poll.interval.ms=300000 2 max.poll.records=500 3 # ... 1 The unique instance id ensures that a new consumer instance receives the same assignment of topic partitions. 2 Set the interval to check the consumer is continuing to process messages. 3 Sets the number of processed records returned from the consumer. 12.7. Uninstalling AMQ Streams This procedure describes how to uninstall AMQ Streams and remove resources related to the deployment. Prerequisites In order to perform this procedure, identify resources created specifically for a deployment and referenced from the AMQ Streams resource. Such resources include: Secrets (Custom CAs and certificates, Kafka Connect secrets, and other Kafka secrets) Logging ConfigMaps (of type external ) These are resources referenced by Kafka , KafkaConnect , KafkaConnectS2I , KafkaMirrorMaker , or KafkaBridge configuration. Procedure Delete the Cluster Operator Deployment , related CustomResourceDefinitions , and RBAC resources: Warning Deleting CustomResourceDefinitions results in the garbage collection of the corresponding custom resources ( Kafka , KafkaConnect , KafkaConnectS2I , KafkaMirrorMaker , or KafkaBridge ) and the resources dependent on them (Deployments, StatefulSets, and other dependent resources). Delete the resources you identified in the prerequisites. 12.8. Frequently asked questions 12.8.1. Questions related to the Cluster Operator 12.8.1.1. Why do I need cluster administrator privileges to install AMQ Streams? 
To install AMQ Streams, you need to be able to create the following cluster-scoped resources: Custom Resource Definitions (CRDs) to instruct OpenShift about resources that are specific to AMQ Streams, such as Kafka and KafkaConnect ClusterRoles and ClusterRoleBindings Cluster-scoped resources, which are not scoped to a particular OpenShift namespace, typically require cluster administrator privileges to install. As a cluster administrator, you can inspect all the resources being installed (in the /install/ directory) to ensure that the ClusterRoles do not grant unnecessary privileges. After installation, the Cluster Operator runs as a regular Deployment , so any standard (non-admin) OpenShift user with privileges to access the Deployment can configure it. The cluster administrator can grant standard users the privileges necessary to manage Kafka custom resources. See also: Why does the Cluster Operator need to create ClusterRoleBindings ? Can standard OpenShift users create Kafka custom resources? 12.8.1.2. Why does the Cluster Operator need to create ClusterRoleBindings ? OpenShift has built-in privilege escalation prevention , which means that the Cluster Operator cannot grant privileges it does not have itself, specifically, it cannot grant such privileges in a namespace it cannot access. Therefore, the Cluster Operator must have the privileges necessary for all the components it orchestrates. The Cluster Operator needs to be able to grant access so that: The Topic Operator can manage KafkaTopics , by creating Roles and RoleBindings in the namespace that the operator runs in The User Operator can manage KafkaUsers , by creating Roles and RoleBindings in the namespace that the operator runs in The failure domain of a Node is discovered by AMQ Streams, by creating a ClusterRoleBinding When using rack-aware partition assignment, the broker pod needs to be able to get information about the Node it is running on, for example, the Availability Zone in Amazon AWS. A Node is a cluster-scoped resource, so access to it can only be granted through a ClusterRoleBinding , not a namespace-scoped RoleBinding . 12.8.1.3. Can standard OpenShift users create Kafka custom resources? By default, standard OpenShift users will not have the privileges necessary to manage the custom resources handled by the Cluster Operator. The cluster administrator can grant a user the necessary privileges using OpenShift RBAC resources. For more information, see Designating AMQ Streams administrators in the Deploying and Upgrading AMQ Streams on OpenShift guide. 12.8.1.4. What do the failed to acquire lock warnings in the log mean? For each cluster, the Cluster Operator executes only one operation at a time. The Cluster Operator uses locks to make sure that there are never two parallel operations running for the same cluster. Other operations must wait until the current operation completes before the lock is released. INFO Examples of cluster operations include cluster creation , rolling update , scale down , and scale up . If the waiting time for the lock takes too long, the operation times out and the following warning message is printed to the log: 2018-03-04 17:09:24 WARNING AbstractClusterOperations:290 - Failed to acquire lock for kafka cluster lock::kafka::myproject::my-cluster Depending on the exact configuration of STRIMZI_FULL_RECONCILIATION_INTERVAL_MS and STRIMZI_OPERATION_TIMEOUT_MS , this warning message might appear occasionally without indicating any underlying issues. 
Operations that time out are picked up in the periodic reconciliation, so that the operation can acquire the lock and execute again. Should this message appear periodically, even in situations when there should be no other operations running for a given cluster, it might indicate that the lock was not properly released due to an error. If this is the case, try restarting the Cluster Operator. 12.8.1.5. Why is hostname verification failing when connecting to NodePorts using TLS? Currently, off-cluster access using NodePorts with TLS encryption enabled does not support TLS hostname verification. As a result, the clients that verify the hostname will fail to connect. For example, the Java client will fail with the following exception: Caused by: java.security.cert.CertificateException: No subject alternative names matching IP address 168.72.15.231 found at sun.security.util.HostnameChecker.matchIP(HostnameChecker.java:168) at sun.security.util.HostnameChecker.match(HostnameChecker.java:94) at sun.security.ssl.X509TrustManagerImpl.checkIdentity(X509TrustManagerImpl.java:455) at sun.security.ssl.X509TrustManagerImpl.checkIdentity(X509TrustManagerImpl.java:436) at sun.security.ssl.X509TrustManagerImpl.checkTrusted(X509TrustManagerImpl.java:252) at sun.security.ssl.X509TrustManagerImpl.checkServerTrusted(X509TrustManagerImpl.java:136) at sun.security.ssl.ClientHandshaker.serverCertificate(ClientHandshaker.java:1501) ... 17 more To connect, you must disable hostname verification. In the Java client, you can do this by setting the configuration option ssl.endpoint.identification.algorithm to an empty string. When configuring the client using a properties file, you can do it this way: ssl.endpoint.identification.algorithm= When configuring the client directly in Java, set the configuration option to an empty string: props.put("ssl.endpoint.identification.algorithm", "");
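As a fuller sketch, client properties for a TLS connection through a NodePort with hostname verification disabled might look like the following; the bootstrap address, truststore path, and password are placeholders, while the property names are standard Kafka client settings:
bootstrap.servers=<node-address>:<node-port>
security.protocol=SSL
ssl.truststore.location=/path/to/truststore.p12
ssl.truststore.type=PKCS12
ssl.truststore.password=<truststore-password>
# Disable TLS hostname verification
ssl.endpoint.identification.algorithm=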
[ "get k NAME DESIRED KAFKA REPLICAS DESIRED ZK REPLICAS my-cluster 3 3", "get strimzi NAME DESIRED KAFKA REPLICAS DESIRED ZK REPLICAS kafka.kafka.strimzi.io/my-cluster 3 3 NAME PARTITIONS REPLICATION FACTOR kafkatopic.kafka.strimzi.io/kafka-apps 3 3 NAME AUTHENTICATION AUTHORIZATION kafkauser.kafka.strimzi.io/my-user tls simple", "get strimzi -o name kafka.kafka.strimzi.io/my-cluster kafkatopic.kafka.strimzi.io/kafka-apps kafkauser.kafka.strimzi.io/my-user", "delete USD(oc get strimzi -o name) kafka.kafka.strimzi.io \"my-cluster\" deleted kafkatopic.kafka.strimzi.io \"kafka-apps\" deleted kafkauser.kafka.strimzi.io \"my-user\" deleted", "get kafka my-cluster -o=jsonpath='{.status.listeners[?(@.type==\"tls\")].bootstrapServers}{\"\\n\"}' my-cluster-kafka-bootstrap.myproject.svc:9093", "get kafka my-cluster -o=jsonpath='{.status.listeners[?(@.type==\"external\")].bootstrapServers}{\"\\n\"}' 192.168.1.247:9094", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: spec: # status: conditions: 1 - lastTransitionTime: 2021-07-23T23:46:57+0000 status: \"True\" type: Ready 2 observedGeneration: 4 3 listeners: 4 - addresses: - host: my-cluster-kafka-bootstrap.myproject.svc port: 9092 type: plain - addresses: - host: my-cluster-kafka-bootstrap.myproject.svc port: 9093 certificates: - | -----BEGIN CERTIFICATE----- -----END CERTIFICATE----- type: tls - addresses: - host: 172.29.49.180 port: 9094 certificates: - | -----BEGIN CERTIFICATE----- -----END CERTIFICATE----- type: external clusterId: CLUSTER-ID 5", "get kafka <kafka_resource_name> -o jsonpath='{.status}'", "annotate KIND-OF-CUSTOM-RESOURCE NAME-OF-CUSTOM-RESOURCE strimzi.io/pause-reconciliation=\"true\"", "annotate KafkaConnect my-connect strimzi.io/pause-reconciliation=\"true\"", "describe KIND-OF-CUSTOM-RESOURCE NAME-OF-CUSTOM-RESOURCE", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: annotations: strimzi.io/pause-reconciliation: \"true\" strimzi.io/use-connector-resources: \"true\" creationTimestamp: 2021-03-12T10:47:11Z # spec: # status: conditions: - lastTransitionTime: 2021-03-12T10:47:41.689249Z status: \"True\" type: ReconciliationPaused", "annotate statefulset cluster-name -kafka strimzi.io/manual-rolling-update=true annotate statefulset cluster-name -zookeeper strimzi.io/manual-rolling-update=true", "annotate pod cluster-name -kafka- index strimzi.io/manual-rolling-update=true annotate pod cluster-name -zookeeper- index strimzi.io/manual-rolling-update=true", "apiVersion: v1 kind: Service metadata: annotations: strimzi.io/discovery: |- [ { \"port\" : 9092, \"tls\" : false, \"protocol\" : \"kafka\", \"auth\" : \"scram-sha-512\" }, { \"port\" : 9093, \"tls\" : true, \"protocol\" : \"kafka\", \"auth\" : \"tls\" } ] labels: strimzi.io/cluster: my-cluster strimzi.io/discovery: \"true\" strimzi.io/kind: Kafka strimzi.io/name: my-cluster-kafka-bootstrap name: my-cluster-kafka-bootstrap spec: #", "apiVersion: v1 kind: Service metadata: annotations: strimzi.io/discovery: |- [ { \"port\" : 8080, \"tls\" : false, \"auth\" : \"none\", \"protocol\" : \"http\" } ] labels: strimzi.io/cluster: my-bridge strimzi.io/discovery: \"true\" strimzi.io/kind: KafkaBridge strimzi.io/name: my-bridge-bridge-service", "get service -l strimzi.io/discovery=true", "apiVersion: v1 kind: PersistentVolume spec: # persistentVolumeReclaimPolicy: Retain", "apiVersion: v1 kind: StorageClass metadata: name: gp2-retain parameters: # reclaimPolicy: Retain", "apiVersion: v1 kind: PersistentVolume spec: # storageClassName: gp2-retain", "get pv", 
"NAME RECLAIMPOLICY CLAIM pvc-5e9c5c7f-3317-11ea-a650-06e1eadd9a4c ... Retain ... myproject/data-my-cluster-zookeeper-1 pvc-5e9cc72d-3317-11ea-97b0-0aef8816c7ea ... Retain ... myproject/data-my-cluster-zookeeper-0 pvc-5ead43d1-3317-11ea-97b0-0aef8816c7ea ... Retain ... myproject/data-my-cluster-zookeeper-2 pvc-7e1f67f9-3317-11ea-a650-06e1eadd9a4c ... Retain ... myproject/data-0-my-cluster-kafka-0 pvc-7e21042e-3317-11ea-9786-02deaf9aa87e ... Retain ... myproject/data-0-my-cluster-kafka-1 pvc-7e226978-3317-11ea-97b0-0aef8816c7ea ... Retain ... myproject/data-0-my-cluster-kafka-2", "create namespace myproject", "apiVersion: v1 kind: PersistentVolumeClaim metadata: name: data-0-my-cluster-kafka-0 spec: accessModes: - ReadWriteOnce resources: requests: storage: 100Gi storageClassName: gp2-retain volumeMode: Filesystem volumeName: pvc-7e1f67f9-3317-11ea-a650-06e1eadd9a4c", "apiVersion: v1 kind: PersistentVolume metadata: annotations: kubernetes.io/createdby: aws-ebs-dynamic-provisioner pv.kubernetes.io/bound-by-controller: \"yes\" pv.kubernetes.io/provisioned-by: kubernetes.io/aws-ebs creationTimestamp: \"<date>\" finalizers: - kubernetes.io/pv-protection labels: failure-domain.beta.kubernetes.io/region: eu-west-1 failure-domain.beta.kubernetes.io/zone: eu-west-1c name: pvc-7e226978-3317-11ea-97b0-0aef8816c7ea resourceVersion: \"39431\" selfLink: /api/v1/persistentvolumes/pvc-7e226978-3317-11ea-97b0-0aef8816c7ea uid: 7efe6b0d-3317-11ea-a650-06e1eadd9a4c spec: accessModes: - ReadWriteOnce awsElasticBlockStore: fsType: xfs volumeID: aws://eu-west-1c/vol-09db3141656d1c258 capacity: storage: 100Gi claimRef: apiVersion: v1 kind: PersistentVolumeClaim name: data-0-my-cluster-kafka-2 namespace: myproject resourceVersion: \"39113\" uid: 54be1c60-3319-11ea-97b0-0aef8816c7ea nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: failure-domain.beta.kubernetes.io/zone operator: In values: - eu-west-1c - key: failure-domain.beta.kubernetes.io/region operator: In values: - eu-west-1 persistentVolumeReclaimPolicy: Retain storageClassName: gp2-retain volumeMode: Filesystem", "claimRef: apiVersion: v1 kind: PersistentVolumeClaim name: data-0-my-cluster-kafka-2 namespace: myproject resourceVersion: \"39113\" uid: 54be1c60-3319-11ea-97b0-0aef8816c7ea", "create -f install/cluster-operator -n my-project", "apply -f kafka.yaml", "run kafka-admin -ti --image=registry.redhat.io/amq7/amq-streams-kafka-27-rhel7:1.7.0 --rm=true --restart=Never -- ./bin/kafka-topics.sh --bootstrap-server localhost:9092 --topic __strimzi-topic-operator-kstreams-topic-store-changelog --delete && ./bin/kafka-topics.sh --bootstrap-server localhost:9092 --topic __strimzi_store_topic --delete", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: # entityOperator: topicOperator: {} 1 #", "get KafkaTopic", "bootstrap.servers=localhost:9092 1 key.serializer=org.apache.kafka.common.serialization.StringSerializer 2 value.serializer=org.apache.kafka.common.serialization.StringSerializer 3 client.id=my-client 4 compression.type=gzip 5", "acks=all 1", "min.insync.replicas=2 1", "enable.idempotence=true 1 max.in.flight.requests.per.connection=5 2 acks=all 3 retries=2147483647 4", "enable.idempotence=false 1 max.in.flight.requests.per.connection=1 2 retries=2147483647", "enable.idempotence=true max.in.flight.requests.per.connection=5 acks=all retries=2147483647 transactional.id= UNIQUE-ID 1 transaction.timeout.ms=900000 2", "linger.ms=100 1 batch.size=16384 2 buffer.memory=33554432 3", 
"delivery.timeout.ms=120000 1 partitioner.class=my-custom-partitioner 2", "bootstrap.servers=localhost:9092 1 key.deserializer=org.apache.kafka.common.serialization.StringDeserializer 2 value.deserializer=org.apache.kafka.common.serialization.StringDeserializer 3 client.id=my-client 4 group.id=my-group-id 5", "group.id=my-group-id 1", "fetch.max.wait.ms=500 1 fetch.min.bytes=16384 2", "NUMBER-OF-BROKERS * fetch.max.bytes and NUMBER-OF-PARTITIONS * max.partition.fetch.bytes", "fetch.max.bytes=52428800 1 max.partition.fetch.bytes=1048576 2", "enable.auto.commit=false 1", "enable.auto.commit=false isolation.level=read_committed 1", "heartbeat.interval.ms=3000 1 session.timeout.ms=10000 2 auto.offset.reset=earliest 3", "group.instance.id= UNIQUE-ID 1 max.poll.interval.ms=300000 2 max.poll.records=500 3", "delete -f install/cluster-operator", "2018-03-04 17:09:24 WARNING AbstractClusterOperations:290 - Failed to acquire lock for kafka cluster lock::kafka::myproject::my-cluster", "Caused by: java.security.cert.CertificateException: No subject alternative names matching IP address 168.72.15.231 found at sun.security.util.HostnameChecker.matchIP(HostnameChecker.java:168) at sun.security.util.HostnameChecker.match(HostnameChecker.java:94) at sun.security.ssl.X509TrustManagerImpl.checkIdentity(X509TrustManagerImpl.java:455) at sun.security.ssl.X509TrustManagerImpl.checkIdentity(X509TrustManagerImpl.java:436) at sun.security.ssl.X509TrustManagerImpl.checkTrusted(X509TrustManagerImpl.java:252) at sun.security.ssl.X509TrustManagerImpl.checkServerTrusted(X509TrustManagerImpl.java:136) at sun.security.ssl.ClientHandshaker.serverCertificate(ClientHandshaker.java:1501) ... 17 more", "ssl.endpoint.identification.algorithm=", "props.put(\"ssl.endpoint.identification.algorithm\", \"\");" ]
https://docs.redhat.com/en/documentation/red_hat_amq/2021.q2/html/using_amq_streams_on_openshift/management-tasks-str
Chapter 1. Installing the Operating System
Chapter 1. Installing the Operating System Before setting up for specific development needs, the underlying system must be set up. Install Red Hat Enterprise Linux in the Workstation variant. Follow the instructions in the Red Hat Enterprise Linux Installation Guide . While installing, pay attention to software selection . Select the Development and Creative Workstation system profile and enable the installation of Add-ons appropriate for your development needs. The relevant Add-ons are listed in each of the following sections focusing on various types of development. To develop applications that cooperate closely with the Linux kernel such as drivers, enable automatic crash dumping with kdump during the installation. After the system itself is installed, register it and attach the required subscriptions. Follow the instructions in Red Hat Enterprise Linux System Administrator's Guide, Chapter 7., Registering the System and Managing Subscriptions . The following sections list the particular subscriptions that must be attached for the respective type of development. More recent versions of development tools and utilities are available as Red Hat Software Collections. For instructions on accessing Red Hat Software Collections, see Red Hat Software Collections Release Notes, Chapter 2., Installation . Additional Resources Red Hat Enterprise Linux Installation Guide - Subscription Manager Red Hat Subscription Management Red Hat Enterprise Linux 7 Package Manifest
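For reference, registering the system and attaching a subscription from the command line typically looks like the following sketch; the pool ID is a placeholder, and the repository IDs shown are examples that depend on your variant and attached subscriptions:
# subscription-manager register --username <username> --password <password>
# subscription-manager list --available
# subscription-manager attach --pool=<pool_id>
# subscription-manager repos --enable rhel-7-workstation-optional-rpms
# subscription-manager repos --enable rhel-workstation-rhscl-7-rpms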
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/developer_guide/setting-up_installing-system
Chapter 8. Working with clusters
Chapter 8. Working with clusters 8.1. Viewing system event information in an OpenShift Container Platform cluster Events in OpenShift Container Platform are modeled based on events that happen to API objects in an OpenShift Container Platform cluster. 8.1.1. Understanding events Events allow OpenShift Container Platform to record information about real-world events in a resource-agnostic manner. They also allow developers and administrators to consume information about system components in a unified way. 8.1.2. Viewing events using the CLI You can get a list of events in a given project using the CLI. Procedure To view events in a project, use the following command: $ oc get events [-n <project>] 1 1 The name of the project. For example: $ oc get events -n openshift-config Example output LAST SEEN TYPE REASON OBJECT MESSAGE 97m Normal Scheduled pod/dapi-env-test-pod Successfully assigned openshift-config/dapi-env-test-pod to ip-10-0-171-202.ec2.internal 97m Normal Pulling pod/dapi-env-test-pod pulling image "gcr.io/google_containers/busybox" 97m Normal Pulled pod/dapi-env-test-pod Successfully pulled image "gcr.io/google_containers/busybox" 97m Normal Created pod/dapi-env-test-pod Created container 9m5s Warning FailedCreatePodSandBox pod/dapi-volume-test-pod Failed create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dapi-volume-test-pod_openshift-config_6bc60c1f-452e-11e9-9140-0eec59c23068_0(748c7a40db3d08c07fb4f9eba774bd5effe5f0d5090a242432a73eee66ba9e22): Multus: Err adding pod to network "openshift-sdn": cannot set "openshift-sdn" ifname to "eth0": no netns: failed to Statfs "/proc/33366/ns/net": no such file or directory 8m31s Normal Scheduled pod/dapi-volume-test-pod Successfully assigned openshift-config/dapi-volume-test-pod to ip-10-0-171-202.ec2.internal To view events in your project from the OpenShift Container Platform console: Launch the OpenShift Container Platform console. Click Home → Events and select your project. Move to the resource whose events you want to see. For example: Home → Projects → <project-name> → <resource-name>. Many objects, such as pods and deployments, have their own Events tab as well, which shows events related to that object. 8.1.3. List of events This section describes the events of OpenShift Container Platform. Table 8.1. Configuration events Name Description FailedValidation Failed pod configuration validation. Table 8.2. Container events Name Description BackOff Back-off restarting failed container. Created Container created. Failed Pull/Create/Start failed. Killing Killing the container. Started Container started. Preempting Preempting other pods. ExceededGracePeriod Container runtime did not stop the pod within specified grace period. Table 8.3. Health events Name Description Unhealthy Container is unhealthy. Table 8.4. Image events Name Description BackOff Back off Ctr Start, image pull. ErrImageNeverPull The image's NeverPull Policy is violated. Failed Failed to pull the image. InspectFailed Failed to inspect the image. Pulled Successfully pulled the image or the container image is already present on the machine. Pulling Pulling the image. Table 8.5. Image Manager events Name Description FreeDiskSpaceFailed Free disk space failed. InvalidDiskCapacity Invalid disk capacity. Table 8.6. Node events Name Description FailedMount Volume mount failed. HostNetworkNotSupported Host network not supported. HostPortConflict Host/port conflict. KubeletSetupFailed Kubelet setup failed. NilShaper Undefined shaper.
NodeNotReady Node is not ready. NodeNotSchedulable Node is not schedulable. NodeReady Node is ready. NodeSchedulable Node is schedulable. NodeSelectorMismatching Node selector mismatch. OutOfDisk Out of disk. Rebooted Node rebooted. Starting Starting kubelet. FailedAttachVolume Failed to attach volume. FailedDetachVolume Failed to detach volume. VolumeResizeFailed Failed to expand/reduce volume. VolumeResizeSuccessful Successfully expanded/reduced volume. FileSystemResizeFailed Failed to expand/reduce file system. FileSystemResizeSuccessful Successfully expanded/reduced file system. FailedUnMount Failed to unmount volume. FailedMapVolume Failed to map a volume. FailedUnmapDevice Failed unmaped device. AlreadyMountedVolume Volume is already mounted. SuccessfulDetachVolume Volume is successfully detached. SuccessfulMountVolume Volume is successfully mounted. SuccessfulUnMountVolume Volume is successfully unmounted. ContainerGCFailed Container garbage collection failed. ImageGCFailed Image garbage collection failed. FailedNodeAllocatableEnforcement Failed to enforce System Reserved Cgroup limit. NodeAllocatableEnforced Enforced System Reserved Cgroup limit. UnsupportedMountOption Unsupported mount option. SandboxChanged Pod sandbox changed. FailedCreatePodSandBox Failed to create pod sandbox. FailedPodSandBoxStatus Failed pod sandbox status. Table 8.7. Pod worker events Name Description FailedSync Pod sync failed. Table 8.8. System Events Name Description SystemOOM There is an OOM (out of memory) situation on the cluster. Table 8.9. Pod events Name Description FailedKillPod Failed to stop a pod. FailedCreatePodContainer Failed to create a pod container. Failed Failed to make pod data directories. NetworkNotReady Network is not ready. FailedCreate Error creating: <error-msg> . SuccessfulCreate Created pod: <pod-name> . FailedDelete Error deleting: <error-msg> . SuccessfulDelete Deleted pod: <pod-id> . Table 8.10. Horizontal Pod AutoScaler events Name Description SelectorRequired Selector is required. InvalidSelector Could not convert selector into a corresponding internal selector object. FailedGetObjectMetric HPA was unable to compute the replica count. InvalidMetricSourceType Unknown metric source type. ValidMetricFound HPA was able to successfully calculate a replica count. FailedConvertHPA Failed to convert the given HPA. FailedGetScale HPA controller was unable to get the target's current scale. SucceededGetScale HPA controller was able to get the target's current scale. FailedComputeMetricsReplicas Failed to compute desired number of replicas based on listed metrics. FailedRescale New size: <size> ; reason: <msg> ; error: <error-msg> . SuccessfulRescale New size: <size> ; reason: <msg> . FailedUpdateStatus Failed to update status. Table 8.11. Network events (openshift-sdn) Name Description Starting Starting OpenShift SDN. NetworkFailed The pod's network interface has been lost and the pod will be stopped. Table 8.12. Network events (kube-proxy) Name Description NeedPods The service-port <serviceName>:<port> needs pods. Table 8.13. Volume events Name Description FailedBinding There are no persistent volumes available and no storage class is set. VolumeMismatch Volume size or class is different from what is requested in claim. VolumeFailedRecycle Error creating recycler pod. VolumeRecycled Occurs when volume is recycled. RecyclerPod Occurs when pod is recycled. VolumeDelete Occurs when volume is deleted. VolumeFailedDelete Error when deleting the volume. 
ExternalProvisioning Occurs when volume for the claim is provisioned either manually or via external software. ProvisioningFailed Failed to provision volume. ProvisioningCleanupFailed Error cleaning provisioned volume. ProvisioningSucceeded Occurs when the volume is provisioned successfully. WaitForFirstConsumer Delay binding until pod scheduling. Table 8.14. Lifecycle hooks Name Description FailedPostStartHook Handler failed for pod start. FailedPreStopHook Handler failed for pre-stop. UnfinishedPreStopHook Pre-stop hook unfinished. Table 8.15. Deployments Name Description DeploymentCancellationFailed Failed to cancel deployment. DeploymentCancelled Canceled deployment. DeploymentCreated Created new replication controller. IngressIPRangeFull No available Ingress IP to allocate to service. Table 8.16. Scheduler events Name Description FailedScheduling Failed to schedule pod: <pod-namespace>/<pod-name> . This event is raised for multiple reasons, for example: AssumePodVolumes failed, Binding rejected etc. Preempted By <preemptor-namespace>/<preemptor-name> on node <node-name> . Scheduled Successfully assigned <pod-name> to <node-name> . Table 8.17. Daemon set events Name Description SelectingAll This daemon set is selecting all pods. A non-empty selector is required. FailedPlacement Failed to place pod on <node-name> . FailedDaemonPod Found failed daemon pod <pod-name> on node <node-name> , will try to kill it. Table 8.18. LoadBalancer service events Name Description CreatingLoadBalancerFailed Error creating load balancer. DeletingLoadBalancer Deleting load balancer. EnsuringLoadBalancer Ensuring load balancer. EnsuredLoadBalancer Ensured load balancer. UnAvailableLoadBalancer There are no available nodes for LoadBalancer service. LoadBalancerSourceRanges Lists the new LoadBalancerSourceRanges . For example, <old-source-range> <new-source-range> . LoadbalancerIP Lists the new IP address. For example, <old-ip> <new-ip> . ExternalIP Lists external IP address. For example, Added: <external-ip> . UID Lists the new UID. For example, <old-service-uid> <new-service-uid> . ExternalTrafficPolicy Lists the new ExternalTrafficPolicy . For example, <old-policy> <new-policy> . HealthCheckNodePort Lists the new HealthCheckNodePort . For example, <old-node-port> new-node-port> . UpdatedLoadBalancer Updated load balancer with new hosts. LoadBalancerUpdateFailed Error updating load balancer with new hosts. DeletingLoadBalancer Deleting load balancer. DeletingLoadBalancerFailed Error deleting load balancer. DeletedLoadBalancer Deleted load balancer. 8.2. Estimating the number of pods your OpenShift Container Platform nodes can hold As a cluster administrator, you can use the OpenShift Cluster Capacity Tool to view the number of pods that can be scheduled to increase the current resources before they become exhausted, and to ensure any future pods can be scheduled. This capacity comes from an individual node host in a cluster, and includes CPU, memory, disk space, and others. 8.2.1. Understanding the OpenShift Cluster Capacity Tool The OpenShift Cluster Capacity Tool simulates a sequence of scheduling decisions to determine how many instances of an input pod can be scheduled on the cluster before it is exhausted of resources to provide a more accurate estimation. Note The remaining allocatable capacity is a rough estimation, because it does not count all of the resources being distributed among nodes. 
It analyzes only the remaining resources and estimates the available capacity that is still consumable in terms of a number of instances of a pod with given requirements that can be scheduled in a cluster. Also, pods might only have scheduling support on particular sets of nodes based on its selection and affinity criteria. As a result, the estimation of which remaining pods a cluster can schedule can be difficult. You can run the OpenShift Cluster Capacity Tool as a stand-alone utility from the command line, or as a job in a pod inside an OpenShift Container Platform cluster. Running the tool as job inside of a pod enables you to run it multiple times without intervention. 8.2.2. Running the OpenShift Cluster Capacity Tool on the command line You can run the OpenShift Cluster Capacity Tool from the command line to estimate the number of pods that can be scheduled onto your cluster. You create a sample pod spec file, which the tool uses for estimating resource usage. The pod spec specifies its resource requirements as limits or requests . The cluster capacity tool takes the pod's resource requirements into account for its estimation analysis. Prerequisites Run the OpenShift Cluster Capacity Tool , which is available as a container image from the Red Hat Ecosystem Catalog. Create a sample pod spec file: Create a YAML file similar to the following: apiVersion: v1 kind: Pod metadata: name: small-pod labels: app: guestbook tier: frontend spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: php-redis image: gcr.io/google-samples/gb-frontend:v4 imagePullPolicy: Always resources: limits: cpu: 150m memory: 100Mi requests: cpu: 150m memory: 100Mi securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] Create the cluster role: USD oc create -f <file_name>.yaml For example: USD oc create -f pod-spec.yaml Procedure To use the cluster capacity tool on the command line: From the terminal, log in to the Red Hat Registry: USD podman login registry.redhat.io Pull the cluster capacity tool image: USD podman pull registry.redhat.io/openshift4/ose-cluster-capacity Run the cluster capacity tool: USD podman run -v USDHOME/.kube:/kube:Z -v USD(pwd):/cc:Z ose-cluster-capacity \ /bin/cluster-capacity --kubeconfig /kube/config --<pod_spec>.yaml /cc/<pod_spec>.yaml \ --verbose where: <pod_spec>.yaml Specifies the pod spec to use. verbose Outputs a detailed description of how many pods can be scheduled on each node in the cluster. Example output small-pod pod requirements: - CPU: 150m - Memory: 100Mi The cluster can schedule 88 instance(s) of the pod small-pod. Termination reason: Unschedulable: 0/5 nodes are available: 2 Insufficient cpu, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate. Pod distribution among nodes: small-pod - 192.168.124.214: 45 instance(s) - 192.168.124.120: 43 instance(s) In the above example, the number of estimated pods that can be scheduled onto the cluster is 88. 8.2.3. Running the OpenShift Cluster Capacity Tool as a job inside a pod Running the OpenShift Cluster Capacity Tool as a job inside of a pod allows you to run the tool multiple times without needing user intervention. You run the OpenShift Cluster Capacity Tool as a job by using a ConfigMap object. Prerequisites Download and install OpenShift Cluster Capacity Tool . 
Procedure To run the cluster capacity tool: Create the cluster role: Create a YAML file similar to the following: kind: ClusterRole apiVersion: rbac.authorization.k8s.io/v1 metadata: name: cluster-capacity-role rules: - apiGroups: [""] resources: ["pods", "nodes", "persistentvolumeclaims", "persistentvolumes", "services", "replicationcontrollers"] verbs: ["get", "watch", "list"] - apiGroups: ["apps"] resources: ["replicasets", "statefulsets"] verbs: ["get", "watch", "list"] - apiGroups: ["policy"] resources: ["poddisruptionbudgets"] verbs: ["get", "watch", "list"] - apiGroups: ["storage.k8s.io"] resources: ["storageclasses"] verbs: ["get", "watch", "list"] Create the cluster role by running the following command: USD oc create -f <file_name>.yaml For example: USD oc create sa cluster-capacity-sa Create the service account: USD oc create sa cluster-capacity-sa -n default Add the role to the service account: USD oc adm policy add-cluster-role-to-user cluster-capacity-role \ system:serviceaccount:<namespace>:cluster-capacity-sa where: <namespace> Specifies the namespace where the pod is located. Define and create the pod spec: Create a YAML file similar to the following: apiVersion: v1 kind: Pod metadata: name: small-pod labels: app: guestbook tier: frontend spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: php-redis image: gcr.io/google-samples/gb-frontend:v4 imagePullPolicy: Always resources: limits: cpu: 150m memory: 100Mi requests: cpu: 150m memory: 100Mi securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] Create the pod by running the following command: USD oc create -f <file_name>.yaml For example: USD oc create -f pod.yaml Created a config map object by running the following command: USD oc create configmap cluster-capacity-configmap \ --from-file=pod.yaml=pod.yaml The cluster capacity analysis is mounted in a volume using a config map object named cluster-capacity-configmap to mount the input pod spec file pod.yaml into a volume test-volume at the path /test-pod . Create the job using the below example of a job specification file: Create a YAML file similar to the following: apiVersion: batch/v1 kind: Job metadata: name: cluster-capacity-job spec: parallelism: 1 completions: 1 template: metadata: name: cluster-capacity-pod spec: containers: - name: cluster-capacity image: openshift/origin-cluster-capacity imagePullPolicy: "Always" volumeMounts: - mountPath: /test-pod name: test-volume env: - name: CC_INCLUSTER 1 value: "true" command: - "/bin/sh" - "-ec" - | /bin/cluster-capacity --podspec=/test-pod/pod.yaml --verbose restartPolicy: "Never" serviceAccountName: cluster-capacity-sa volumes: - name: test-volume configMap: name: cluster-capacity-configmap 1 A required environment variable letting the cluster capacity tool know that it is running inside a cluster as a pod. The pod.yaml key of the ConfigMap object is the same as the Pod spec file name, though it is not required. By doing this, the input pod spec file can be accessed inside the pod as /test-pod/pod.yaml . Run the cluster capacity image as a job in a pod by running the following command: USD oc create -f cluster-capacity-job.yaml Verification Check the job logs to find the number of pods that can be scheduled in the cluster: USD oc logs jobs/cluster-capacity-job Example output small-pod pod requirements: - CPU: 150m - Memory: 100Mi The cluster can schedule 52 instance(s) of the pod small-pod. 
Termination reason: Unschedulable: No nodes are available that match all of the following predicates:: Insufficient cpu (2). Pod distribution among nodes: small-pod - 192.168.124.214: 26 instance(s) - 192.168.124.120: 26 instance(s) 8.3. Restrict resource consumption with limit ranges By default, containers run with unbounded compute resources on an OpenShift Container Platform cluster. With limit ranges, you can restrict resource consumption for specific objects in a project: pods and containers: You can set minimum and maximum requirements for CPU and memory for pods and their containers. Image streams: You can set limits on the number of images and tags in an ImageStream object. Images: You can limit the size of images that can be pushed to an internal registry. Persistent volume claims (PVC): You can restrict the size of the PVCs that can be requested. If a pod does not meet the constraints imposed by the limit range, the pod cannot be created in the namespace. 8.3.1. About limit ranges A limit range, defined by a LimitRange object, restricts resource consumption in a project. In the project you can set specific resource limits for a pod, container, image, image stream, or persistent volume claim (PVC). All requests to create and modify resources are evaluated against each LimitRange object in the project. If the resource violates any of the enumerated constraints, the resource is rejected. The following shows a limit range object for all components: pod, container, image, image stream, or PVC. You can configure limits for any or all of these components in the same object. You create a different limit range object for each project where you want to control resources. Sample limit range object for a container apiVersion: "v1" kind: "LimitRange" metadata: name: "resource-limits" spec: limits: - type: "Container" max: cpu: "2" memory: "1Gi" min: cpu: "100m" memory: "4Mi" default: cpu: "300m" memory: "200Mi" defaultRequest: cpu: "200m" memory: "100Mi" maxLimitRequestRatio: cpu: "10" 8.3.1.1. About component limits The following examples show limit range parameters for each component. The examples are broken out for clarity. You can create a single LimitRange object for any or all components as necessary. 8.3.1.1.1. Container limits A limit range allows you to specify the minimum and maximum CPU and memory that each container in a pod can request for a specific project. If a container is created in the project, the container CPU and memory requests in the Pod spec must comply with the values set in the LimitRange object. If not, the pod does not get created. The container CPU or memory request and limit must be greater than or equal to the min resource constraint for containers that are specified in the LimitRange object. The container CPU or memory request and limit must be less than or equal to the max resource constraint for containers that are specified in the LimitRange object. If the LimitRange object defines a max CPU, you do not need to define a CPU request value in the Pod spec. But you must specify a CPU limit value that satisfies the maximum CPU constraint specified in the limit range. The ratio of the container limits to requests must be less than or equal to the maxLimitRequestRatio value for containers that is specified in the LimitRange object. If the LimitRange object defines a maxLimitRequestRatio constraint, any new containers must have both a request and a limit value. 
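For illustration, a container resources stanza that satisfies the sample container limit range shown above might look like the following. This is a sketch only; the specific values are assumptions chosen to stay within the min, max, and maxLimitRequestRatio constraints.

resources:
  requests:
    cpu: "100m"      # meets the min CPU constraint (100m)
    memory: "100Mi"  # above the min memory constraint (4Mi)
  limits:
    cpu: "500m"      # below the max CPU constraint (2); the CPU limit-to-request ratio is 5, within the configured maximum of 10
    memory: "200Mi"  # below the max memory constraint (1Gi)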
OpenShift Container Platform calculates the limit-to-request ratio by dividing the limit by the request . This value should be a non-negative integer greater than 1. For example, if a container has cpu: 500 in the limit value, and cpu: 100 in the request value, the limit-to-request ratio for cpu is 5 . This ratio must be less than or equal to the maxLimitRequestRatio . If the Pod spec does not specify a container resource memory or limit, the default or defaultRequest CPU and memory values for containers specified in the limit range object are assigned to the container. Container LimitRange object definition apiVersion: "v1" kind: "LimitRange" metadata: name: "resource-limits" 1 spec: limits: - type: "Container" max: cpu: "2" 2 memory: "1Gi" 3 min: cpu: "100m" 4 memory: "4Mi" 5 default: cpu: "300m" 6 memory: "200Mi" 7 defaultRequest: cpu: "200m" 8 memory: "100Mi" 9 maxLimitRequestRatio: cpu: "10" 10 1 The name of the LimitRange object. 2 The maximum amount of CPU that a single container in a pod can request. 3 The maximum amount of memory that a single container in a pod can request. 4 The minimum amount of CPU that a single container in a pod can request. 5 The minimum amount of memory that a single container in a pod can request. 6 The default amount of CPU that a container can use if not specified in the Pod spec. 7 The default amount of memory that a container can use if not specified in the Pod spec. 8 The default amount of CPU that a container can request if not specified in the Pod spec. 9 The default amount of memory that a container can request if not specified in the Pod spec. 10 The maximum limit-to-request ratio for a container. 8.3.1.1.2. Pod limits A limit range allows you to specify the minimum and maximum CPU and memory limits for all containers across a pod in a given project. To create a container in the project, the container CPU and memory requests in the Pod spec must comply with the values set in the LimitRange object. If not, the pod does not get created. If the Pod spec does not specify a container resource memory or limit, the default or defaultRequest CPU and memory values for containers specified in the limit range object are assigned to the container. Across all containers in a pod, the following must hold true: The container CPU or memory request and limit must be greater than or equal to the min resource constraints for pods that are specified in the LimitRange object. The container CPU or memory request and limit must be less than or equal to the max resource constraints for pods that are specified in the LimitRange object. The ratio of the container limits to requests must be less than or equal to the maxLimitRequestRatio constraint specified in the LimitRange object. Pod LimitRange object definition apiVersion: "v1" kind: "LimitRange" metadata: name: "resource-limits" 1 spec: limits: - type: "Pod" max: cpu: "2" 2 memory: "1Gi" 3 min: cpu: "200m" 4 memory: "6Mi" 5 maxLimitRequestRatio: cpu: "10" 6 1 The name of the limit range object. 2 The maximum amount of CPU that a pod can request across all containers. 3 The maximum amount of memory that a pod can request across all containers. 4 The minimum amount of CPU that a pod can request across all containers. 5 The minimum amount of memory that a pod can request across all containers. 6 The maximum limit-to-request ratio for a container. 8.3.1.1.3. Image limits A LimitRange object allows you to specify the maximum size of an image that can be pushed to an OpenShift image registry. 
When pushing images to an OpenShift image registry, the following must hold true: The size of the image must be less than or equal to the max size for images that is specified in the LimitRange object. Image LimitRange object definition apiVersion: "v1" kind: "LimitRange" metadata: name: "resource-limits" 1 spec: limits: - type: openshift.io/Image max: storage: 1Gi 2 1 The name of the LimitRange object. 2 The maximum size of an image that can be pushed to an OpenShift image registry. Note To prevent blobs that exceed the limit from being uploaded to the registry, the registry must be configured to enforce quotas. Warning The image size is not always available in the manifest of an uploaded image. This is especially the case for images built with Docker 1.10 or higher and pushed to a v2 registry. If such an image is pulled with an older Docker daemon, the image manifest is converted by the registry to schema v1 lacking all the size information. No storage limit set on images prevent it from being uploaded. The issue is being addressed. 8.3.1.1.4. Image stream limits A LimitRange object allows you to specify limits for image streams. For each image stream, the following must hold true: The number of image tags in an ImageStream specification must be less than or equal to the openshift.io/image-tags constraint in the LimitRange object. The number of unique references to images in an ImageStream specification must be less than or equal to the openshift.io/images constraint in the limit range object. Imagestream LimitRange object definition apiVersion: "v1" kind: "LimitRange" metadata: name: "resource-limits" 1 spec: limits: - type: openshift.io/ImageStream max: openshift.io/image-tags: 20 2 openshift.io/images: 30 3 1 The name of the LimitRange object. 2 The maximum number of unique image tags in the imagestream.spec.tags parameter in imagestream spec. 3 The maximum number of unique image references in the imagestream.status.tags parameter in the imagestream spec. The openshift.io/image-tags resource represents unique image references. Possible references are an ImageStreamTag , an ImageStreamImage and a DockerImage . Tags can be created using the oc tag and oc import-image commands. No distinction is made between internal and external references. However, each unique reference tagged in an ImageStream specification is counted just once. It does not restrict pushes to an internal container image registry in any way, but is useful for tag restriction. The openshift.io/images resource represents unique image names recorded in image stream status. It allows for restriction of a number of images that can be pushed to the OpenShift image registry. Internal and external references are not distinguished. 8.3.1.1.5. Persistent volume claim limits A LimitRange object allows you to restrict the storage requested in a persistent volume claim (PVC). Across all persistent volume claims in a project, the following must hold true: The resource request in a persistent volume claim (PVC) must be greater than or equal the min constraint for PVCs that is specified in the LimitRange object. The resource request in a persistent volume claim (PVC) must be less than or equal the max constraint for PVCs that is specified in the LimitRange object. PVC LimitRange object definition apiVersion: "v1" kind: "LimitRange" metadata: name: "resource-limits" 1 spec: limits: - type: "PersistentVolumeClaim" min: storage: "2Gi" 2 max: storage: "50Gi" 3 1 The name of the LimitRange object. 
2 The minimum amount of storage that can be requested in a persistent volume claim. 3 The maximum amount of storage that can be requested in a persistent volume claim. 8.3.2. Creating a Limit Range To apply a limit range to a project: Create a LimitRange object with your required specifications: apiVersion: "v1" kind: "LimitRange" metadata: name: "resource-limits" 1 spec: limits: - type: "Pod" 2 max: cpu: "2" memory: "1Gi" min: cpu: "200m" memory: "6Mi" - type: "Container" 3 max: cpu: "2" memory: "1Gi" min: cpu: "100m" memory: "4Mi" default: 4 cpu: "300m" memory: "200Mi" defaultRequest: 5 cpu: "200m" memory: "100Mi" maxLimitRequestRatio: 6 cpu: "10" - type: openshift.io/Image 7 max: storage: 1Gi - type: openshift.io/ImageStream 8 max: openshift.io/image-tags: 20 openshift.io/images: 30 - type: "PersistentVolumeClaim" 9 min: storage: "2Gi" max: storage: "50Gi" 1 Specify a name for the LimitRange object. 2 To set limits for a pod, specify the minimum and maximum CPU and memory requests as needed. 3 To set limits for a container, specify the minimum and maximum CPU and memory requests as needed. 4 Optional. For a container, specify the default amount of CPU or memory that a container can use, if not specified in the Pod spec. 5 Optional. For a container, specify the default amount of CPU or memory that a container can request, if not specified in the Pod spec. 6 Optional. For a container, specify the maximum limit-to-request ratio that can be specified in the Pod spec. 7 To set limits for an Image object, set the maximum size of an image that can be pushed to an OpenShift image registry. 8 To set limits for an image stream, set the maximum number of image tags and references that can be in the ImageStream object file, as needed. 9 To set limits for a persistent volume claim, set the minimum and maximum amount of storage that can be requested. Create the object: USD oc create -f <limit_range_file> -n <project> 1 1 Specify the name of the YAML file you created and the project where you want the limits to apply. 8.3.3. Viewing a limit You can view any limits defined in a project by navigating in the web console to the project's Quota page. You can also use the CLI to view limit range details: Get the list of LimitRange object defined in the project. For example, for a project called demoproject : USD oc get limits -n demoproject NAME CREATED AT resource-limits 2020-07-15T17:14:23Z Describe the LimitRange object you are interested in, for example the resource-limits limit range: USD oc describe limits resource-limits -n demoproject Name: resource-limits Namespace: demoproject Type Resource Min Max Default Request Default Limit Max Limit/Request Ratio ---- -------- --- --- --------------- ------------- ----------------------- Pod cpu 200m 2 - - - Pod memory 6Mi 1Gi - - - Container cpu 100m 2 200m 300m 10 Container memory 4Mi 1Gi 100Mi 200Mi - openshift.io/Image storage - 1Gi - - - openshift.io/ImageStream openshift.io/image - 12 - - - openshift.io/ImageStream openshift.io/image-tags - 10 - - - PersistentVolumeClaim storage - 50Gi - - - 8.3.4. Deleting a Limit Range To remove any active LimitRange object to no longer enforce the limits in a project: Run the following command: USD oc delete limits <limit_name> 8.4. 
Configuring cluster memory to meet container memory and risk requirements As a cluster administrator, you can help your clusters operate efficiently through managing application memory by: Determining the memory and risk requirements of a containerized application component and configuring the container memory parameters to suit those requirements. Configuring containerized application runtimes (for example, OpenJDK) to adhere optimally to the configured container memory parameters. Diagnosing and resolving memory-related error conditions associated with running in a container. 8.4.1. Understanding managing application memory It is recommended to fully read the overview of how OpenShift Container Platform manages Compute Resources before proceeding. For each kind of resource (memory, CPU, storage), OpenShift Container Platform allows optional request and limit values to be placed on each container in a pod. Note the following about memory requests and memory limits: Memory request The memory request value, if specified, influences the OpenShift Container Platform scheduler. The scheduler considers the memory request when scheduling a container to a node, then fences off the requested memory on the chosen node for the use of the container. If a node's memory is exhausted, OpenShift Container Platform prioritizes evicting its containers whose memory usage most exceeds their memory request. In serious cases of memory exhaustion, the node OOM killer may select and kill a process in a container based on a similar metric. The cluster administrator can assign quota or assign default values for the memory request value. The cluster administrator can override the memory request values that a developer specifies, to manage cluster overcommit. Memory limit The memory limit value, if specified, provides a hard limit on the memory that can be allocated across all the processes in a container. If the memory allocated by all of the processes in a container exceeds the memory limit, the node Out of Memory (OOM) killer will immediately select and kill a process in the container. If both memory request and limit are specified, the memory limit value must be greater than or equal to the memory request. The cluster administrator can assign quota or assign default values for the memory limit value. The minimum memory limit is 12 MB. If a container fails to start due to a Cannot allocate memory pod event, the memory limit is too low. Either increase or remove the memory limit. Removing the limit allows pods to consume unbounded node resources. 8.4.1.1. Managing application memory strategy The steps for sizing application memory on OpenShift Container Platform are as follows: Determine expected container memory usage Determine expected mean and peak container memory usage, empirically if necessary (for example, by separate load testing). Remember to consider all the processes that may potentially run in parallel in the container: for example, does the main application spawn any ancillary scripts? Determine risk appetite Determine risk appetite for eviction. If the risk appetite is low, the container should request memory according to the expected peak usage plus a percentage safety margin. If the risk appetite is higher, it may be more appropriate to request memory according to the expected mean usage. Set container memory request Set container memory request based on the above. The more accurately the request represents the application memory usage, the better. 
If the request is too high, cluster and quota usage will be inefficient. If the request is too low, the chances of application eviction increase. Set container memory limit, if required Set container memory limit, if required. Setting a limit has the effect of immediately killing a container process if the combined memory usage of all processes in the container exceeds the limit, and is therefore a mixed blessing. On the one hand, it may make unanticipated excess memory usage obvious early ("fail fast"); on the other hand it also terminates processes abruptly. Note that some OpenShift Container Platform clusters may require a limit value to be set; some may override the request based on the limit; and some application images rely on a limit value being set as this is easier to detect than a request value. If the memory limit is set, it should not be set to less than the expected peak container memory usage plus a percentage safety margin. Ensure application is tuned Ensure application is tuned with respect to configured request and limit values, if appropriate. This step is particularly relevant to applications which pool memory, such as the JVM. The rest of this page discusses this. Additional resources Understanding compute resources and containers 8.4.2. Understanding OpenJDK settings for OpenShift Container Platform The default OpenJDK settings do not work well with containerized environments. As a result, some additional Java memory settings must always be provided whenever running the OpenJDK in a container. The JVM memory layout is complex, version dependent, and describing it in detail is beyond the scope of this documentation. However, as a starting point for running OpenJDK in a container, at least the following three memory-related tasks are key: Overriding the JVM maximum heap size. Encouraging the JVM to release unused memory to the operating system, if appropriate. Ensuring all JVM processes within a container are appropriately configured. Optimally tuning JVM workloads for running in a container is beyond the scope of this documentation, and may involve setting multiple additional JVM options. 8.4.2.1. Understanding how to override the JVM maximum heap size For many Java workloads, the JVM heap is the largest single consumer of memory. Currently, the OpenJDK defaults to allowing up to 1/4 (1/ -XX:MaxRAMFraction ) of the compute node's memory to be used for the heap, regardless of whether the OpenJDK is running in a container or not. It is therefore essential to override this behavior, especially if a container memory limit is also set. There are at least two ways the above can be achieved: If the container memory limit is set and the experimental options are supported by the JVM, set -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap . Note The UseCGroupMemoryLimitForHeap option has been removed in JDK 11. Use -XX:+UseContainerSupport instead. This sets -XX:MaxRAM to the container memory limit, and the maximum heap size ( -XX:MaxHeapSize / -Xmx ) to 1/ -XX:MaxRAMFraction (1/4 by default). Directly override one of -XX:MaxRAM , -XX:MaxHeapSize or -Xmx . This option involves hard-coding a value, but has the advantage of allowing a safety margin to be calculated. 8.4.2.2. Understanding how to encourage the JVM to release unused memory to the operating system By default, the OpenJDK does not aggressively return unused memory to the operating system. 
This may be appropriate for many containerized Java workloads, but notable exceptions include workloads where additional active processes co-exist with a JVM within a container, whether those additional processes are native, additional JVMs, or a combination of the two. Java-based agents can use the following JVM arguments to encourage the JVM to release unused memory to the operating system: -XX:+UseParallelGC -XX:MinHeapFreeRatio=5 -XX:MaxHeapFreeRatio=10 -XX:GCTimeRatio=4 -XX:AdaptiveSizePolicyWeight=90. These arguments are intended to return heap memory to the operating system whenever allocated memory exceeds 110% of in-use memory ( -XX:MaxHeapFreeRatio ), spending up to 20% of CPU time in the garbage collector ( -XX:GCTimeRatio ). At no time will the application heap allocation be less than the initial heap allocation (overridden by -XX:InitialHeapSize / -Xms ). Detailed additional information is available Tuning Java's footprint in OpenShift (Part 1) , Tuning Java's footprint in OpenShift (Part 2) , and at OpenJDK and Containers . 8.4.2.3. Understanding how to ensure all JVM processes within a container are appropriately configured In the case that multiple JVMs run in the same container, it is essential to ensure that they are all configured appropriately. For many workloads it will be necessary to grant each JVM a percentage memory budget, leaving a perhaps substantial additional safety margin. Many Java tools use different environment variables ( JAVA_OPTS , GRADLE_OPTS , and so on) to configure their JVMs and it can be challenging to ensure that the right settings are being passed to the right JVM. The JAVA_TOOL_OPTIONS environment variable is always respected by the OpenJDK, and values specified in JAVA_TOOL_OPTIONS will be overridden by other options specified on the JVM command line. By default, to ensure that these options are used by default for all JVM workloads run in the Java-based agent image, the OpenShift Container Platform Jenkins Maven agent image sets: JAVA_TOOL_OPTIONS="-XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -Dsun.zip.disableMemoryMapping=true" Note The UseCGroupMemoryLimitForHeap option has been removed in JDK 11. Use -XX:+UseContainerSupport instead. This does not guarantee that additional options are not required, but is intended to be a helpful starting point. 8.4.3. Finding the memory request and limit from within a pod An application wishing to dynamically discover its memory request and limit from within a pod should use the Downward API. Procedure Configure the pod to add the MEMORY_REQUEST and MEMORY_LIMIT stanzas: Create a YAML file similar to the following: apiVersion: v1 kind: Pod metadata: name: test spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: test image: fedora:latest command: - sleep - "3600" env: - name: MEMORY_REQUEST 1 valueFrom: resourceFieldRef: containerName: test resource: requests.memory - name: MEMORY_LIMIT 2 valueFrom: resourceFieldRef: containerName: test resource: limits.memory resources: requests: memory: 384Mi limits: memory: 512Mi securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] 1 Add this stanza to discover the application memory request value. 2 Add this stanza to discover the application memory limit value. 
Create the pod by running the following command: USD oc create -f <file-name>.yaml Verification Access the pod using a remote shell: USD oc rsh test Check that the requested values were applied: USD env | grep MEMORY | sort Example output MEMORY_LIMIT=536870912 MEMORY_REQUEST=402653184 Note The memory limit value can also be read from inside the container by the /sys/fs/cgroup/memory/memory.limit_in_bytes file. 8.4.4. Understanding OOM kill policy OpenShift Container Platform can kill a process in a container if the total memory usage of all the processes in the container exceeds the memory limit, or in serious cases of node memory exhaustion. When a process is Out of Memory (OOM) killed, this might result in the container exiting immediately. If the container PID 1 process receives the SIGKILL , the container will exit immediately. Otherwise, the container behavior is dependent on the behavior of the other processes. For example, a container process exited with code 137, indicating it received a SIGKILL signal. If the container does not exit immediately, an OOM kill is detectable as follows: Access the pod using a remote shell: # oc rsh test Run the following command to see the current OOM kill count in /sys/fs/cgroup/memory/memory.oom_control : USD grep '^oom_kill ' /sys/fs/cgroup/memory/memory.oom_control Example output oom_kill 0 Run the following command to provoke an OOM kill: USD sed -e '' </dev/zero Example output Killed Run the following command to view the exit status of the sed command: USD echo USD? Example output 137 The 137 code indicates the container process exited with code 137, indicating it received a SIGKILL signal. Run the following command to see that the OOM kill counter in /sys/fs/cgroup/memory/memory.oom_control incremented: USD grep '^oom_kill ' /sys/fs/cgroup/memory/memory.oom_control Example output oom_kill 1 If one or more processes in a pod are OOM killed, when the pod subsequently exits, whether immediately or not, it will have phase Failed and reason OOMKilled . An OOM-killed pod might be restarted depending on the value of restartPolicy . If not restarted, controllers such as the replication controller will notice the pod's failed status and create a new pod to replace the old one. Use the follwing command to get the pod status: USD oc get pod test Example output NAME READY STATUS RESTARTS AGE test 0/1 OOMKilled 0 1m If the pod has not restarted, run the following command to view the pod: USD oc get pod test -o yaml Example output ... status: containerStatuses: - name: test ready: false restartCount: 0 state: terminated: exitCode: 137 reason: OOMKilled phase: Failed If restarted, run the following command to view the pod: USD oc get pod test -o yaml Example output ... status: containerStatuses: - name: test ready: true restartCount: 1 lastState: terminated: exitCode: 137 reason: OOMKilled state: running: phase: Running 8.4.5. Understanding pod eviction OpenShift Container Platform may evict a pod from its node when the node's memory is exhausted. Depending on the extent of memory exhaustion, the eviction may or may not be graceful. Graceful eviction implies the main process (PID 1) of each container receiving a SIGTERM signal, then some time later a SIGKILL signal if the process has not exited already. Non-graceful eviction implies the main process of each container immediately receiving a SIGKILL signal. An evicted pod has phase Failed and reason Evicted . It will not be restarted, regardless of the value of restartPolicy . 
However, controllers such as the replication controller will notice the pod's failed status and create a new pod to replace the old one. USD oc get pod test Example output NAME READY STATUS RESTARTS AGE test 0/1 Evicted 0 1m USD oc get pod test -o yaml Example output ... status: message: 'Pod The node was low on resource: [MemoryPressure].' phase: Failed reason: Evicted 8.5. Configuring your cluster to place pods on overcommitted nodes In an overcommitted state, the sum of the container compute resource requests and limits exceeds the resources available on the system. For example, you might want to use overcommitment in development environments where a trade-off of guaranteed performance for capacity is acceptable. Containers can specify compute resource requests and limits. Requests are used for scheduling your container and provide a minimum service guarantee. Limits constrain the amount of compute resource that can be consumed on your node. The scheduler attempts to optimize the compute resource use across all nodes in your cluster. It places pods onto specific nodes, taking the pods' compute resource requests and nodes' available capacity into consideration. OpenShift Container Platform administrators can control the level of overcommit and manage container density on developer containers by using the ClusterResourceOverride Operator . Note In OpenShift Container Platform, you must enable cluster-level overcommit. Node overcommitment is enabled by default. See Disabling overcommitment for a node . 8.5.1. Resource requests and overcommitment For each compute resource, a container may specify a resource request and limit. Scheduling decisions are made based on the request to ensure that a node has enough capacity available to meet the requested value. If a container specifies limits, but omits requests, the requests are defaulted to the limits. A container is not able to exceed the specified limit on the node. The enforcement of limits is dependent upon the compute resource type. If a container makes no request or limit, the container is scheduled to a node with no resource guarantees. In practice, the container is able to consume as much of the specified resource as is available with the lowest local priority. In low resource situations, containers that specify no resource requests are given the lowest quality of service. Scheduling is based on resources requested, while quota and hard limits refer to resource limits, which can be set higher than requested resources. The difference between request and limit determines the level of overcommit; for instance, if a container is given a memory request of 1Gi and a memory limit of 2Gi, it is scheduled based on the 1Gi request being available on the node, but could use up to 2Gi; so it is 200% overcommitted. 8.5.2. Cluster-level overcommit using the Cluster Resource Override Operator The Cluster Resource Override Operator is an admission webhook that allows you to control the level of overcommit and manage container density across all the nodes in your cluster. The Operator controls how nodes in specific projects can exceed defined memory and CPU limits. The Operator modifies the ratio between the requests and limits that are set on developer containers. In conjunction with a per-project limit range that specifies limits and defaults, you can achieve the desired level of overcommit. You must install the Cluster Resource Override Operator by using the OpenShift Container Platform console or CLI as shown in the following sections. 
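The overrides act only on containers that have limits set, so a per-project limit range that supplies default limits is typically created alongside the Operator. The following is a minimal sketch; the object name, namespace, and values are illustrative assumptions:

apiVersion: "v1"
kind: "LimitRange"
metadata:
  name: "overcommit-defaults"    # hypothetical name
  namespace: "my-project"        # hypothetical project
spec:
  limits:
  - type: "Container"
    default:            # default limits assigned when a Pod spec omits them
      cpu: "500m"
      memory: "512Mi"
    defaultRequest:     # default requests, subsequently adjusted by the override ratios
      cpu: "250m"
      memory: "256Mi"

With such defaults in place, every new container in the project receives a limit, which gives the Cluster Resource Override Operator a value from which to derive the overridden requests.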
After you deploy the Cluster Resource Override Operator, the Operator modifies all new pods in specific namespaces. The Operator does not edit pods that existed before you deployed the Operator. During the installation, you create a ClusterResourceOverride custom resource (CR), where you set the level of overcommit, as shown in the following example: apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: name: cluster 1 spec: podResourceOverride: spec: memoryRequestToLimitPercent: 50 2 cpuRequestToLimitPercent: 25 3 limitCPUToMemoryPercent: 200 4 # ... 1 The name must be cluster . 2 Optional. If a container memory limit has been specified or defaulted, the memory request is overridden to this percentage of the limit, between 1-100. The default is 50. 3 Optional. If a container CPU limit has been specified or defaulted, the CPU request is overridden to this percentage of the limit, between 1-100. The default is 25. 4 Optional. If a container memory limit has been specified or defaulted, the CPU limit is overridden to a percentage of the memory limit, if specified. Scaling 1Gi of RAM at 100 percent is equal to 1 CPU core. This is processed prior to overriding the CPU request (if configured). The default is 200. Note The Cluster Resource Override Operator overrides have no effect if limits have not been set on containers. Create a LimitRange object with default limits per individual project or configure limits in Pod specs for the overrides to apply. When configured, you can enable overrides on a per-project basis by applying the following label to the Namespace object for each project where you want the overrides to apply. For example, you can configure override so that infrastructure components are not subject to the overrides. apiVersion: v1 kind: Namespace metadata: # ... labels: clusterresourceoverrides.admission.autoscaling.openshift.io/enabled: "true" # ... The Operator watches for the ClusterResourceOverride CR and ensures that the ClusterResourceOverride admission webhook is installed into the same namespace as the operator. For example, a pod has the following resources limits: apiVersion: v1 kind: Pod metadata: name: my-pod namespace: my-namespace # ... spec: containers: - name: hello-openshift image: openshift/hello-openshift resources: limits: memory: "512Mi" cpu: "2000m" # ... The Cluster Resource Override Operator intercepts the original pod request, then overrides the resources according to the configuration set in the ClusterResourceOverride object. apiVersion: v1 kind: Pod metadata: name: my-pod namespace: my-namespace # ... spec: containers: - image: openshift/hello-openshift name: hello-openshift resources: limits: cpu: "1" 1 memory: 512Mi requests: cpu: 250m 2 memory: 256Mi # ... 1 The CPU limit has been overridden to 1 because the limitCPUToMemoryPercent parameter is set to 200 in the ClusterResourceOverride object. As such, 200% of the memory limit, 512Mi in CPU terms, is 1 CPU core. 2 The CPU request is now 250m because the cpuRequestToLimit is set to 25 in the ClusterResourceOverride object. As such, 25% of the 1 CPU core is 250m. 8.5.2.1. Installing the Cluster Resource Override Operator using the web console You can use the OpenShift Container Platform web console to install the Cluster Resource Override Operator to help control overcommit in your cluster. Prerequisites The Cluster Resource Override Operator has no effect if limits have not been set on containers. 
You must specify default limits for a project using a LimitRange object or configure limits in Pod specs for the overrides to apply. Procedure To install the Cluster Resource Override Operator using the OpenShift Container Platform web console: In the OpenShift Container Platform web console, navigate to Home Projects Click Create Project . Specify clusterresourceoverride-operator as the name of the project. Click Create . Navigate to Operators OperatorHub . Choose ClusterResourceOverride Operator from the list of available Operators and click Install . On the Install Operator page, make sure A specific Namespace on the cluster is selected for Installation Mode . Make sure clusterresourceoverride-operator is selected for Installed Namespace . Select an Update Channel and Approval Strategy . Click Install . On the Installed Operators page, click ClusterResourceOverride . On the ClusterResourceOverride Operator details page, click Create ClusterResourceOverride . On the Create ClusterResourceOverride page, click YAML view and edit the YAML template to set the overcommit values as needed: apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: name: cluster 1 spec: podResourceOverride: spec: memoryRequestToLimitPercent: 50 2 cpuRequestToLimitPercent: 25 3 limitCPUToMemoryPercent: 200 4 # ... 1 The name must be cluster . 2 Optional. Specify the percentage to override the container memory limit, if used, between 1-100. The default is 50. 3 Optional. Specify the percentage to override the container CPU limit, if used, between 1-100. The default is 25. 4 Optional. Specify the percentage to override the container memory limit, if used. Scaling 1Gi of RAM at 100 percent is equal to 1 CPU core. This is processed prior to overriding the CPU request, if configured. The default is 200. Click Create . Check the current state of the admission webhook by checking the status of the cluster custom resource: On the ClusterResourceOverride Operator page, click cluster . On the ClusterResourceOverride Details page, click YAML . The mutatingWebhookConfigurationRef section appears when the webhook is called. apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | {"apiVersion":"operator.autoscaling.openshift.io/v1","kind":"ClusterResourceOverride","metadata":{"annotations":{},"name":"cluster"},"spec":{"podResourceOverride":{"spec":{"cpuRequestToLimitPercent":25,"limitCPUToMemoryPercent":200,"memoryRequestToLimitPercent":50}}}} creationTimestamp: "2019-12-18T22:35:02Z" generation: 1 name: cluster resourceVersion: "127622" selfLink: /apis/operator.autoscaling.openshift.io/v1/clusterresourceoverrides/cluster uid: 978fc959-1717-4bd1-97d0-ae00ee111e8d spec: podResourceOverride: spec: cpuRequestToLimitPercent: 25 limitCPUToMemoryPercent: 200 memoryRequestToLimitPercent: 50 status: # ... mutatingWebhookConfigurationRef: 1 apiVersion: admissionregistration.k8s.io/v1 kind: MutatingWebhookConfiguration name: clusterresourceoverrides.admission.autoscaling.openshift.io resourceVersion: "127621" uid: 98b3b8ae-d5ce-462b-8ab5-a729ea8f38f3 # ... 1 Reference to the ClusterResourceOverride admission webhook. 8.5.2.2. Installing the Cluster Resource Override Operator using the CLI You can use the OpenShift Container Platform CLI to install the Cluster Resource Override Operator to help control overcommit in your cluster. 
Prerequisites The Cluster Resource Override Operator has no effect if limits have not been set on containers. You must specify default limits for a project using a LimitRange object or configure limits in Pod specs for the overrides to apply. Procedure To install the Cluster Resource Override Operator using the CLI: Create a namespace for the Cluster Resource Override Operator: Create a Namespace object YAML file (for example, cro-namespace.yaml ) for the Cluster Resource Override Operator: apiVersion: v1 kind: Namespace metadata: name: clusterresourceoverride-operator Create the namespace: USD oc create -f <file-name>.yaml For example: USD oc create -f cro-namespace.yaml Create an Operator group: Create an OperatorGroup object YAML file (for example, cro-og.yaml) for the Cluster Resource Override Operator: apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: clusterresourceoverride-operator namespace: clusterresourceoverride-operator spec: targetNamespaces: - clusterresourceoverride-operator Create the Operator Group: USD oc create -f <file-name>.yaml For example: USD oc create -f cro-og.yaml Create a subscription: Create a Subscription object YAML file (for example, cro-sub.yaml) for the Cluster Resource Override Operator: apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: clusterresourceoverride namespace: clusterresourceoverride-operator spec: channel: "stable" name: clusterresourceoverride source: redhat-operators sourceNamespace: openshift-marketplace Create the subscription: USD oc create -f <file-name>.yaml For example: USD oc create -f cro-sub.yaml Create a ClusterResourceOverride custom resource (CR) object in the clusterresourceoverride-operator namespace: Change to the clusterresourceoverride-operator namespace. USD oc project clusterresourceoverride-operator Create a ClusterResourceOverride object YAML file (for example, cro-cr.yaml) for the Cluster Resource Override Operator: apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: name: cluster 1 spec: podResourceOverride: spec: memoryRequestToLimitPercent: 50 2 cpuRequestToLimitPercent: 25 3 limitCPUToMemoryPercent: 200 4 1 The name must be cluster . 2 Optional. Specify the percentage to override the container memory limit, if used, between 1-100. The default is 50. 3 Optional. Specify the percentage to override the container CPU limit, if used, between 1-100. The default is 25. 4 Optional. Specify the percentage to override the container memory limit, if used. Scaling 1Gi of RAM at 100 percent is equal to 1 CPU core. This is processed prior to overriding the CPU request, if configured. The default is 200. Create the ClusterResourceOverride object: USD oc create -f <file-name>.yaml For example: USD oc create -f cro-cr.yaml Verify the current state of the admission webhook by checking the status of the cluster custom resource. USD oc get clusterresourceoverride cluster -n clusterresourceoverride-operator -o yaml The mutatingWebhookConfigurationRef section appears when the webhook is called. 
Example output apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | {"apiVersion":"operator.autoscaling.openshift.io/v1","kind":"ClusterResourceOverride","metadata":{"annotations":{},"name":"cluster"},"spec":{"podResourceOverride":{"spec":{"cpuRequestToLimitPercent":25,"limitCPUToMemoryPercent":200,"memoryRequestToLimitPercent":50}}}} creationTimestamp: "2019-12-18T22:35:02Z" generation: 1 name: cluster resourceVersion: "127622" selfLink: /apis/operator.autoscaling.openshift.io/v1/clusterresourceoverrides/cluster uid: 978fc959-1717-4bd1-97d0-ae00ee111e8d spec: podResourceOverride: spec: cpuRequestToLimitPercent: 25 limitCPUToMemoryPercent: 200 memoryRequestToLimitPercent: 50 status: # ... mutatingWebhookConfigurationRef: 1 apiVersion: admissionregistration.k8s.io/v1 kind: MutatingWebhookConfiguration name: clusterresourceoverrides.admission.autoscaling.openshift.io resourceVersion: "127621" uid: 98b3b8ae-d5ce-462b-8ab5-a729ea8f38f3 # ... 1 Reference to the ClusterResourceOverride admission webhook. 8.5.2.3. Configuring cluster-level overcommit The Cluster Resource Override Operator requires a ClusterResourceOverride custom resource (CR) and a label for each project where you want the Operator to control overcommit. Prerequisites The Cluster Resource Override Operator has no effect if limits have not been set on containers. You must specify default limits for a project using a LimitRange object or configure limits in Pod specs for the overrides to apply. Procedure To modify cluster-level overcommit: Edit the ClusterResourceOverride CR: apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: name: cluster spec: podResourceOverride: spec: memoryRequestToLimitPercent: 50 1 cpuRequestToLimitPercent: 25 2 limitCPUToMemoryPercent: 200 3 # ... 1 Optional. Specify the percentage to override the container memory limit, if used, between 1-100. The default is 50. 2 Optional. Specify the percentage to override the container CPU limit, if used, between 1-100. The default is 25. 3 Optional. Specify the percentage to override the container memory limit, if used. Scaling 1Gi of RAM at 100 percent is equal to 1 CPU core. This is processed prior to overriding the CPU request, if configured. The default is 200. Ensure the following label has been added to the Namespace object for each project where you want the Cluster Resource Override Operator to control overcommit: apiVersion: v1 kind: Namespace metadata: # ... labels: clusterresourceoverrides.admission.autoscaling.openshift.io/enabled: "true" 1 # ... 1 Add this label to each project. 8.5.3. Node-level overcommit You can use various ways to control overcommit on specific nodes, such as quality of service (QOS) guarantees, CPU limits, or reserve resources. You can also disable overcommit for specific nodes and specific projects. 8.5.3.1. Understanding compute resources and containers The node-enforced behavior for compute resources is specific to the resource type. 8.5.3.1.1. Understanding container CPU requests A container is guaranteed the amount of CPU it requests and is additionally able to consume excess CPU available on the node, up to any limit specified by the container. If multiple containers are attempting to use excess CPU, CPU time is distributed based on the amount of CPU requested by each container. 
For example, if one container requested 500m of CPU time and another container requested 250m of CPU time, then any extra CPU time available on the node is distributed among the containers in a 2:1 ratio. If a container specified a limit, it is throttled so that it cannot use more CPU than the specified limit. CPU requests are enforced using the CFS shares support in the Linux kernel. By default, CPU limits are enforced using the CFS quota support in the Linux kernel over a 100ms measuring interval, though this can be disabled. 8.5.3.1.2. Understanding container memory requests A container is guaranteed the amount of memory it requests. A container can use more memory than requested, but once it exceeds its requested amount, it could be terminated in a low memory situation on the node. If a container uses less memory than requested, it will not be terminated unless system tasks or daemons need more memory than was accounted for in the node's resource reservation. If a container specifies a limit on memory, it is immediately terminated if it exceeds the limit amount. 8.5.3.2. Understanding overcommitment and quality of service classes A node is overcommitted when it has a pod scheduled that makes no request, or when the sum of limits across all pods on that node exceeds available machine capacity. In an overcommitted environment, it is possible that the pods on the node will attempt to use more compute resource than is available at any given point in time. When this occurs, the node must give priority to one pod over another. The facility used to make this decision is referred to as a Quality of Service (QoS) Class. A pod is designated as one of three QoS classes with decreasing order of priority: Table 8.19. Quality of Service Classes Priority Class Name Description 1 (highest) Guaranteed If limits and optionally requests are set (not equal to 0) for all resources and they are equal, then the pod is classified as Guaranteed . 2 Burstable If requests and optionally limits are set (not equal to 0) for all resources, and they are not equal, then the pod is classified as Burstable . 3 (lowest) BestEffort If requests and limits are not set for any of the resources, then the pod is classified as BestEffort . Memory is an incompressible resource, so in low memory situations, containers that have the lowest priority are terminated first: Guaranteed containers are considered top priority, and are guaranteed to only be terminated if they exceed their limits, or if the system is under memory pressure and there are no lower priority containers that can be evicted. Burstable containers under system memory pressure are more likely to be terminated once they exceed their requests and no other BestEffort containers exist. BestEffort containers are treated with the lowest priority. Processes in these containers are first to be terminated if the system runs out of memory. 8.5.3.2.1. Understanding how to reserve memory across quality of service tiers You can use the qos-reserved parameter to specify a percentage of memory to be reserved by a pod in a particular QoS level. This feature attempts to reserve requested resources to prevent pods in lower QoS classes from using resources requested by pods in higher QoS classes. OpenShift Container Platform uses the qos-reserved parameter as follows: A value of qos-reserved=memory=100% will prevent the Burstable and BestEffort QoS classes from consuming memory that was requested by a higher QoS class.
This increases the risk of inducing OOM on BestEffort and Burstable workloads in favor of increasing memory resource guarantees for Guaranteed and Burstable workloads. A value of qos-reserved=memory=50% will allow the Burstable and BestEffort QoS classes to consume half of the memory requested by a higher QoS class. A value of qos-reserved=memory=0% will allow the Burstable and BestEffort QoS classes to consume up to the full node allocatable amount if available, but increases the risk that a Guaranteed workload will not have access to requested memory. This condition effectively disables this feature. 8.5.3.3. Understanding swap memory and QOS You can disable swap by default on your nodes to preserve quality of service (QOS) guarantees. Otherwise, physical resources on a node can oversubscribe, affecting the resource guarantees the Kubernetes scheduler makes during pod placement. For example, if two guaranteed pods have reached their memory limit, each container could start using swap memory. Eventually, if there is not enough swap space, processes in the pods can be terminated due to the system being oversubscribed. Failing to disable swap results in nodes not recognizing that they are experiencing MemoryPressure , resulting in pods not receiving the memory they requested when they were scheduled. As a result, additional pods are placed on the node to further increase memory pressure, ultimately increasing your risk of experiencing a system out of memory (OOM) event. Important If swap is enabled, any out-of-resource handling eviction thresholds for available memory will not work as expected. Take advantage of out-of-resource handling to allow pods to be evicted from a node when it is under memory pressure, and rescheduled on an alternative node that has no such pressure. 8.5.3.4. Understanding nodes overcommitment In an overcommitted environment, it is important to properly configure your node to provide best system behavior. When the node starts, it ensures that the kernel tunable flags for memory management are set properly. The kernel should never fail memory allocations unless it runs out of physical memory. To ensure this behavior, OpenShift Container Platform configures the kernel to always overcommit memory by setting the vm.overcommit_memory parameter to 1 , overriding the default operating system setting. OpenShift Container Platform also configures the kernel not to panic when it runs out of memory by setting the vm.panic_on_oom parameter to 0 . A setting of 0 instructs the kernel to call oom_killer in an Out of Memory (OOM) condition, which kills processes based on priority. You can view the current setting by running the following commands on your nodes: USD sysctl -a |grep commit Example output #... vm.overcommit_memory = 0 #... USD sysctl -a |grep panic Example output #... vm.panic_on_oom = 0 #... Note The above flags should already be set on nodes, and no further action is required. You can also perform the following configurations for each node: Disable or enforce CPU limits using CPU CFS quotas Reserve resources for system processes Reserve memory across quality of service tiers 8.5.3.5. Disabling or enforcing CPU limits using CPU CFS quotas Nodes by default enforce specified CPU limits using the Completely Fair Scheduler (CFS) quota support in the Linux kernel. If you disable CPU limit enforcement, it is important to understand the impact on your node: If a container has a CPU request, the request continues to be enforced by CFS shares in the Linux kernel.
If a container does not have a CPU request, but does have a CPU limit, the CPU request defaults to the specified CPU limit, and is enforced by CFS shares in the Linux kernel. If a container has both a CPU request and limit, the CPU request is enforced by CFS shares in the Linux kernel, and the CPU limit has no impact on the node. Prerequisites Obtain the label associated with the static MachineConfigPool CRD for the type of node you want to configure by entering the following command: USD oc edit machineconfigpool <name> For example: USD oc edit machineconfigpool worker Example output apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: creationTimestamp: "2022-11-16T15:34:25Z" generation: 4 labels: pools.operator.machineconfiguration.openshift.io/worker: "" 1 name: worker 1 The label appears under Labels. Tip If the label is not present, add a key/value pair such as: USD oc label machineconfigpool worker custom-kubelet=small-pods Procedure Create a custom resource (CR) for your configuration change. Sample configuration for a disabling CPU limits apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: disable-cpu-units 1 spec: machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: "" 2 kubeletConfig: cpuCfsQuota: false 3 1 Assign a name to CR. 2 Specify the label from the machine config pool. 3 Set the cpuCfsQuota parameter to false . Run the following command to create the CR: USD oc create -f <file_name>.yaml 8.5.3.6. Reserving resources for system processes To provide more reliable scheduling and minimize node resource overcommitment, each node can reserve a portion of its resources for use by system daemons that are required to run on your node for your cluster to function. In particular, it is recommended that you reserve resources for incompressible resources such as memory. Procedure To explicitly reserve resources for non-pod processes, allocate node resources by specifying resources available for scheduling. For more details, see Allocating Resources for Nodes. 8.5.3.7. Disabling overcommitment for a node When enabled, overcommitment can be disabled on each node. Procedure To disable overcommitment in a node run the following command on that node: USD sysctl -w vm.overcommit_memory=0 8.5.4. Project-level limits To help control overcommit, you can set per-project resource limit ranges, specifying memory and CPU limits and defaults for a project that overcommit cannot exceed. For information on project-level resource limits, see Additional resources. Alternatively, you can disable overcommitment for specific projects. 8.5.4.1. Disabling overcommitment for a project When enabled, overcommitment can be disabled per-project. For example, you can allow infrastructure components to be configured independently of overcommitment. Procedure To disable overcommitment in a project: Create or edit the namespace object file. Add the following annotation: apiVersion: v1 kind: Namespace metadata: annotations: quota.openshift.io/cluster-resource-override-enabled: "false" 1 # ... 1 Setting this annotation to false disables overcommit for this namespace. 8.5.5. Additional resources Setting deployment resources . Allocating resources for nodes . 8.6. Configuring the Linux cgroup version on your nodes As of OpenShift Container Platform 4.14, OpenShift Container Platform uses Linux control group version 2 (cgroup v2) in your cluster. 
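Before planning a change, you can confirm which cgroup version a node is currently running. This is a minimal sketch that uses the same stat check as the verification steps later in this section; <node_name> is a placeholder for any node in your cluster:
oc debug node/<node_name> -- chroot /host stat -c %T -f /sys/fs/cgroup
# cgroup2fs indicates cgroup v2; tmpfs indicates cgroup v1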
If you are using cgroup v1 on OpenShift Container Platform 4.13 or earlier, migrating to OpenShift Container Platform 4.14 or later will not automatically update your cgroup configuration to version 2. A fresh installation of OpenShift Container Platform 4.14 or later will use cgroup v2 by default. However, you can enable Linux control group version 1 (cgroup v1) upon installation. cgroup v2 is the current version of the Linux cgroup API. cgroup v2 offers several improvements over cgroup v1, including a unified hierarchy, safer sub-tree delegation, new features such as Pressure Stall Information , and enhanced resource management and isolation. However, cgroup v2 has different CPU, memory, and I/O management characteristics than cgroup v1. Therefore, some workloads might experience slight differences in memory or CPU usage on clusters that run cgroup v2. You can change between cgroup v1 and cgroup v2, as needed. Enabling cgroup v1 in OpenShift Container Platform disables all cgroup v2 controllers and hierarchies in your cluster. Note If you run third-party monitoring and security agents that depend on the cgroup file system, update the agents to a version that supports cgroup v2. If you have configured cgroup v2 and run cAdvisor as a stand-alone daemon set for monitoring pods and containers, update cAdvisor to v0.43.0 or later. If you deploy Java applications, use versions that fully support cgroup v2, such as the following packages: OpenJDK / HotSpot: jdk8u372, 11.0.16, 15 and later NodeJs 20.3.0 or later IBM Semeru Runtimes: jdk8u345-b01, 11.0.16.0, 17.0.4.0, 18.0.2.0 and later IBM SDK Java Technology Edition Version (IBM Java): 8.0.7.15 and later 8.6.1. Configuring Linux cgroup You can enable Linux control group version 1 (cgroup v1) or Linux control group version 2 (cgroup v2) by editing the node.config object. The default is cgroup v2. Note In Telco, clusters using PerformanceProfile for low latency, real-time, and Data Plane Development Kit (DPDK) workloads automatically revert to cgroups v1 due to the lack of cgroups v2 support. Enabling cgroup v2 is not supported if you are using PerformanceProfile . Prerequisites You have a running OpenShift Container Platform cluster that uses version 4.12 or later. You are logged in to the cluster as a user with administrative privileges. Procedure Enable cgroup v1 on nodes: Edit the node.config object: USD oc edit nodes.config/cluster Edit the spec.cgroupMode parameter: Example node.config object apiVersion: config.openshift.io/v1 kind: Node metadata: annotations: include.release.openshift.io/ibm-cloud-managed: "true" include.release.openshift.io/self-managed-high-availability: "true" include.release.openshift.io/single-node-developer: "true" release.openshift.io/create-only: "true" creationTimestamp: "2022-07-08T16:02:51Z" generation: 1 name: cluster ownerReferences: - apiVersion: config.openshift.io/v1 kind: ClusterVersion name: version uid: 36282574-bf9f-409e-a6cd-3032939293eb resourceVersion: "1865" uid: 0c0f7a4c-4307-4187-b591-6155695ac85b spec: cgroupMode: "v1" 1 ... 1 Specify v1 to enable cgroup v1 or v2 for cgroup v2. 
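If you manage cluster configuration from scripts rather than with oc edit, the same change can be applied as a patch. This is a minimal sketch, assuming you want to switch to cgroup v1; the Machine Config Operator then rolls the change out to the nodes, which you can follow with the verification steps below:
oc patch nodes.config/cluster --type merge -p '{"spec":{"cgroupMode":"v1"}}'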
Verification Check the machine configs to see that the new machine configs were added: USD oc get mc Example output NAME GENERATEDBYCONTROLLER IGNITIONVERSION AGE 00-master 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 00-worker 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 97-master-generated-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-worker-generated-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-master-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-master-ssh 3.2.0 40m 99-worker-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-worker-ssh 3.2.0 40m rendered-master-23d4317815a5f854bd3553d689cfe2e9 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 10s 1 rendered-master-23e785de7587df95a4b517e0647e5ab7 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m rendered-worker-5d596d9293ca3ea80c896a1191735bb1 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m rendered-worker-dcc7f1b92892d34db74d6832bcc9ccd4 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 10s 1 New machine configs are created, as expected. Check that the new kernelArguments were added to the new machine configs: USD oc describe mc <name> Example output for cgroup v2 apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 05-worker-kernelarg-selinuxpermissive spec: kernelArguments: systemd_unified_cgroup_hierarchy=1 1 cgroup_no_v1="all" 2 psi=0 1 Enables cgroup v2 in systemd. 2 Disables cgroup v1. Example output for cgroup v1 apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 05-worker-kernelarg-selinuxpermissive spec: kernelArguments: systemd.unified_cgroup_hierarchy=0 1 systemd.legacy_systemd_cgroup_controller=1 2 psi=1 3 1 Disables cgroup v2. 2 Enables cgroup v1 in systemd. 3 Enables the Linux Pressure Stall Information (PSI) feature. Check the nodes to see that scheduling on the nodes is disabled. This indicates that the change is being applied: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION ci-ln-fm1qnwt-72292-99kt6-master-0 Ready,SchedulingDisabled master 58m v1.28.5 ci-ln-fm1qnwt-72292-99kt6-master-1 Ready master 58m v1.28.5 ci-ln-fm1qnwt-72292-99kt6-master-2 Ready master 58m v1.28.5 ci-ln-fm1qnwt-72292-99kt6-worker-a-h5gt4 Ready,SchedulingDisabled worker 48m v1.28.5 ci-ln-fm1qnwt-72292-99kt6-worker-b-7vtmd Ready worker 48m v1.28.5 ci-ln-fm1qnwt-72292-99kt6-worker-c-rhzkv Ready worker 48m v1.28.5 After a node returns to the Ready state, start a debug session for that node: USD oc debug node/<node_name> Set /host as the root directory within the debug shell: sh-4.4# chroot /host Check that the sys/fs/cgroup/cgroup2fs or sys/fs/cgroup/tmpfs file is present on your nodes: USD stat -c %T -f /sys/fs/cgroup Example output for cgroup v2 cgroup2fs Example output for cgroup v1 tmpfs Additional resources OpenShift Container Platform installation overview 8.7. Enabling features using feature gates As an administrator, you can use feature gates to enable features that are not part of the default set of features. 8.7.1. 
Understanding feature gates You can use the FeatureGate custom resource (CR) to enable specific feature sets in your cluster. A feature set is a collection of OpenShift Container Platform features that are not enabled by default. You can activate the following feature set by using the FeatureGate CR: TechPreviewNoUpgrade . This feature set is a subset of the current Technology Preview features. This feature set allows you to enable these Technology Preview features on test clusters, where you can fully test them, while leaving the features disabled on production clusters. Warning Enabling the TechPreviewNoUpgrade feature set on your cluster cannot be undone and prevents minor version updates. You should not enable this feature set on production clusters. The following Technology Preview features are enabled by this feature set: External cloud providers. Enables support for external cloud providers for clusters on vSphere, AWS, Azure, and GCP. Support for OpenStack is GA. This is an internal feature that most users do not need to interact with. ( ExternalCloudProvider ) Shared Resources CSI Driver in OpenShift Builds. Enables the Container Storage Interface (CSI). ( CSIDriverSharedResource ) Swap memory on nodes. Enables swap memory use for OpenShift Container Platform workloads on a per-node basis. ( NodeSwap ) OpenStack Machine API Provider. This gate has no effect and is planned to be removed from this feature set in a future release. ( MachineAPIProviderOpenStack ) Insights Operator. Enables the InsightsDataGather CRD, which allows users to configure some Insights data gathering options. The feature set also enables the DataGather CRD, which allows users to run Insights data gathering on-demand. ( InsightsConfigAPI ) Retroactive Default Storage Class. Enables OpenShift Container Platform to retroactively assign the default storage class to PVCs if there was no default storage class when the PVC was created.( RetroactiveDefaultStorageClass ) Dynamic Resource Allocation API. Enables a new API for requesting and sharing resources between pods and containers. This is an internal feature that most users do not need to interact with. ( DynamicResourceAllocation ) Pod security admission enforcement. Enables the restricted enforcement mode for pod security admission. Instead of only logging a warning, pods are rejected if they violate pod security standards. ( OpenShiftPodSecurityAdmission ) StatefulSet pod availability upgrading limits. Enables users to define the maximum number of statefulset pods unavailable during updates which reduces application downtime. ( MaxUnavailableStatefulSet ) Admin Network Policy and Baseline Admin Network Policy. Enables AdminNetworkPolicy and BaselineAdminNetworkPolicy resources, which are part of the Network Policy V2 API, in clusters running the OVN-Kubernetes CNI plugin. Cluster administrators can apply cluster-scoped policies and safeguards for an entire cluster before namespaces are created. Network administrators can secure clusters by enforcing network traffic controls that cannot be overridden by users. Network administrators can enforce optional baseline network traffic controls that can be overridden by users in the cluster, if necessary. Currently, these APIs support only expressing policies for intra-cluster traffic. ( AdminNetworkPolicy ) MatchConditions is a list of conditions that must be met for a request to be sent to this webhook. 
Match conditions filter requests that have already been matched by the rules, namespaceSelector, and objectSelector. An empty list of matchConditions matches all requests. ( admissionWebhookMatchConditions ) Gateway API. To enable the OpenShift Container Platform Gateway API, set the value of the enabled field to true in the techPreview.gatewayAPI specification of the ServiceMeshControlPlane resource.( gateGatewayAPI ) gcpLabelsTags vSphereStaticIPs routeExternalCertificate automatedEtcdBackup gcpClusterHostedDNS vSphereControlPlaneMachineset dnsNameResolver machineConfigNodes metricsServer installAlternateInfrastructureAWS sdnLiveMigration mixedCPUsAllocation managedBootImages onClusterBuild signatureStores For more information about the features activated by the TechPreviewNoUpgrade feature gate, see the following topics: Shared Resources CSI Driver and Build CSI Volumes in OpenShift Builds CSI inline ephemeral volumes Swap memory on nodes Managing machines with the Cluster API Disabling the Insights Operator gather operations Enabling the Insights Operator gather operations Running an Insights Operator gather operation Managing the default storage class Pod security admission enforcement . 8.7.2. Enabling feature sets at installation You can enable feature sets for all nodes in the cluster by editing the install-config.yaml file before you deploy the cluster. Prerequisites You have an install-config.yaml file. Procedure Use the featureSet parameter to specify the name of the feature set you want to enable, such as TechPreviewNoUpgrade : Warning Enabling the TechPreviewNoUpgrade feature set on your cluster cannot be undone and prevents minor version updates. You should not enable this feature set on production clusters. Sample install-config.yaml file with an enabled feature set compute: - hyperthreading: Enabled name: worker platform: aws: rootVolume: iops: 2000 size: 500 type: io1 metadataService: authentication: Optional type: c5.4xlarge zones: - us-west-2c replicas: 3 featureSet: TechPreviewNoUpgrade Save the file and reference it when using the installation program to deploy the cluster. Verification You can verify that the feature gates are enabled by looking at the kubelet.conf file on a node after the nodes return to the ready state. From the Administrator perspective in the web console, navigate to Compute Nodes . Select a node. In the Node details page, click Terminal . In the terminal window, change your root directory to /host : sh-4.2# chroot /host View the kubelet.conf file: sh-4.2# cat /etc/kubernetes/kubelet.conf Sample output # ... featureGates: InsightsOperatorPullingSCA: true, LegacyNodeRoleBehavior: false # ... The features that are listed as true are enabled on your cluster. Note The features listed vary depending upon the OpenShift Container Platform version. 8.7.3. Enabling feature sets using the web console You can use the OpenShift Container Platform web console to enable feature sets for all of the nodes in a cluster by editing the FeatureGate custom resource (CR). Procedure To enable feature sets: In the OpenShift Container Platform web console, switch to the Administration Custom Resource Definitions page. On the Custom Resource Definitions page, click FeatureGate . On the Custom Resource Definition Details page, click the Instances tab. Click the cluster feature gate, then click the YAML tab. 
Edit the cluster instance to add specific feature sets: Warning Enabling the TechPreviewNoUpgrade feature set on your cluster cannot be undone and prevents minor version updates. You should not enable this feature set on production clusters. Sample Feature Gate custom resource apiVersion: config.openshift.io/v1 kind: FeatureGate metadata: name: cluster 1 # ... spec: featureSet: TechPreviewNoUpgrade 2 1 The name of the FeatureGate CR must be cluster . 2 Add the feature set that you want to enable: TechPreviewNoUpgrade enables specific Technology Preview features. After you save the changes, new machine configs are created, the machine config pools are updated, and scheduling on each node is disabled while the change is being applied. Verification You can verify that the feature gates are enabled by looking at the kubelet.conf file on a node after the nodes return to the ready state. From the Administrator perspective in the web console, navigate to Compute Nodes . Select a node. In the Node details page, click Terminal . In the terminal window, change your root directory to /host : sh-4.2# chroot /host View the kubelet.conf file: sh-4.2# cat /etc/kubernetes/kubelet.conf Sample output # ... featureGates: InsightsOperatorPullingSCA: true, LegacyNodeRoleBehavior: false # ... The features that are listed as true are enabled on your cluster. Note The features listed vary depending upon the OpenShift Container Platform version. 8.7.4. Enabling feature sets using the CLI You can use the OpenShift CLI ( oc ) to enable feature sets for all of the nodes in a cluster by editing the FeatureGate custom resource (CR). Prerequisites You have installed the OpenShift CLI ( oc ). Procedure To enable feature sets: Edit the FeatureGate CR named cluster : USD oc edit featuregate cluster Warning Enabling the TechPreviewNoUpgrade feature set on your cluster cannot be undone and prevents minor version updates. You should not enable this feature set on production clusters. Sample FeatureGate custom resource apiVersion: config.openshift.io/v1 kind: FeatureGate metadata: name: cluster 1 # ... spec: featureSet: TechPreviewNoUpgrade 2 1 The name of the FeatureGate CR must be cluster . 2 Add the feature set that you want to enable: TechPreviewNoUpgrade enables specific Technology Preview features. After you save the changes, new machine configs are created, the machine config pools are updated, and scheduling on each node is disabled while the change is being applied. Verification You can verify that the feature gates are enabled by looking at the kubelet.conf file on a node after the nodes return to the ready state. From the Administrator perspective in the web console, navigate to Compute Nodes . Select a node. In the Node details page, click Terminal . In the terminal window, change your root directory to /host : sh-4.2# chroot /host View the kubelet.conf file: sh-4.2# cat /etc/kubernetes/kubelet.conf Sample output # ... featureGates: InsightsOperatorPullingSCA: true, LegacyNodeRoleBehavior: false # ... The features that are listed as true are enabled on your cluster. Note The features listed vary depending upon the OpenShift Container Platform version. 8.8. Improving cluster stability in high latency environments using worker latency profiles If the cluster administrator has performed latency tests for platform verification, they can discover the need to adjust the operation of the cluster to ensure stability in cases of high latency. 
The cluster administrator needs to change only one setting, recorded in the node.config object, which in turn controls four parameters affecting how supervisory processes read status and interpret the health of the cluster. Changing only the one setting provides cluster tuning in an easy, supportable manner. The Kubelet process provides the starting point for monitoring cluster health. The Kubelet sets status values for all nodes in the OpenShift Container Platform cluster. The Kubernetes Controller Manager ( kube controller ) reads the status values every 10 seconds, by default. If the kube controller cannot read a node status value, it loses contact with that node after a configured period. The default behavior is: The node controller on the control plane updates the node health to Unhealthy and marks the node Ready condition as Unknown . In response, the scheduler stops scheduling pods to that node. The Node Lifecycle Controller adds a node.kubernetes.io/unreachable taint with a NoExecute effect to the node and schedules any pods on the node for eviction after five minutes, by default. This behavior can cause problems if your network is prone to latency issues, especially if you have nodes at the network edge. In some cases, the Kubernetes Controller Manager might not receive an update from a healthy node due to network latency. Pods are then evicted from the node even though the node is healthy. To avoid this problem, you can use worker latency profiles to adjust how often the Kubelet posts status and how long the Kubernetes Controller Manager waits for status updates before taking action. These adjustments help to ensure that your cluster runs properly if network latency between the control plane and the worker nodes is not optimal. There are three worker latency profiles, each a predefined set of carefully tuned parameter values that control the reaction of the cluster to increased latency. There is no need to experimentally find the best values manually. You can configure worker latency profiles when installing a cluster or at any time you notice increased latency in your cluster network. 8.8.1. Understanding worker latency profiles Each worker latency profile is a set of four carefully tuned parameters: node-status-update-frequency , node-monitor-grace-period , default-not-ready-toleration-seconds and default-unreachable-toleration-seconds . These parameters use values which allow you to control the reaction of the cluster to latency issues without needing to determine the best values by using manual methods. Important Setting these parameters manually is not supported. Incorrect parameter settings adversely affect cluster stability. All worker latency profiles configure the following parameters: node-status-update-frequency Specifies how often the kubelet posts node status to the API server. node-monitor-grace-period Specifies the amount of time in seconds that the Kubernetes Controller Manager waits for an update from a kubelet before marking the node unhealthy and adding the node.kubernetes.io/not-ready or node.kubernetes.io/unreachable taint to the node. default-not-ready-toleration-seconds Specifies the amount of time in seconds after marking a node unhealthy that the Kube API Server Operator waits before evicting pods from that node. default-unreachable-toleration-seconds Specifies the amount of time in seconds after marking a node unreachable that the Kube API Server Operator waits before evicting pods from that node.
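Before changing a profile, you can check which one, if any, is currently configured; an empty value means the Default profile is in effect. This is a minimal sketch using the same objects that the procedure later in this section edits:
# Profile currently set on the cluster (empty output means Default)
oc get nodes.config cluster -o jsonpath='{.spec.workerLatencyProfile}{"\n"}'
# Rollout status as reported by the Kubernetes Controller Manager
oc get KubeControllerManager -o yaml | grep -i workerlatency -A 5 -B 5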
The following Operators monitor the changes to the worker latency profiles and respond accordingly: The Machine Config Operator (MCO) updates the node-status-update-frequency parameter on the worker nodes. The Kubernetes Controller Manager updates the node-monitor-grace-period parameter on the control plane nodes. The Kubernetes API Server Operator updates the default-not-ready-toleration-seconds and default-unreachable-toleration-seconds parameters on the control plane nodes. Although the default configuration works in most cases, OpenShift Container Platform offers two other worker latency profiles for situations where the network is experiencing higher latency than usual. The three worker latency profiles are described in the following sections: Default worker latency profile With the Default profile, each Kubelet updates it's status every 10 seconds ( node-status-update-frequency ). The Kube Controller Manager checks the statuses of Kubelet every 5 seconds ( node-monitor-grace-period ). The Kubernetes Controller Manager waits 40 seconds ( node-monitor-grace-period ) for a status update from Kubelet before considering the Kubelet unhealthy. If no status is made available to the Kubernetes Controller Manager, it then marks the node with the node.kubernetes.io/not-ready or node.kubernetes.io/unreachable taint and evicts the pods on that node. If a pod is on a node that has the NoExecute taint, the pod runs according to tolerationSeconds . If the node has no taint, it will be evicted in 300 seconds ( default-not-ready-toleration-seconds and default-unreachable-toleration-seconds settings of the Kube API Server ). Profile Component Parameter Value Default kubelet node-status-update-frequency 10s Kubelet Controller Manager node-monitor-grace-period 40s Kubernetes API Server Operator default-not-ready-toleration-seconds 300s Kubernetes API Server Operator default-unreachable-toleration-seconds 300s Medium worker latency profile Use the MediumUpdateAverageReaction profile if the network latency is slightly higher than usual. The MediumUpdateAverageReaction profile reduces the frequency of kubelet updates to 20 seconds and changes the period that the Kubernetes Controller Manager waits for those updates to 2 minutes. The pod eviction period for a pod on that node is reduced to 60 seconds. If the pod has the tolerationSeconds parameter, the eviction waits for the period specified by that parameter. The Kubernetes Controller Manager waits for 2 minutes to consider a node unhealthy. In another minute, the eviction process starts. Profile Component Parameter Value MediumUpdateAverageReaction kubelet node-status-update-frequency 20s Kubelet Controller Manager node-monitor-grace-period 2m Kubernetes API Server Operator default-not-ready-toleration-seconds 60s Kubernetes API Server Operator default-unreachable-toleration-seconds 60s Low worker latency profile Use the LowUpdateSlowReaction profile if the network latency is extremely high. The LowUpdateSlowReaction profile reduces the frequency of kubelet updates to 1 minute and changes the period that the Kubernetes Controller Manager waits for those updates to 5 minutes. The pod eviction period for a pod on that node is reduced to 60 seconds. If the pod has the tolerationSeconds parameter, the eviction waits for the period specified by that parameter. The Kubernetes Controller Manager waits for 5 minutes to consider a node unhealthy. In another minute, the eviction process starts. 
Profile Component Parameter Value LowUpdateSlowReaction kubelet node-status-update-frequency 1m Kubelet Controller Manager node-monitor-grace-period 5m Kubernetes API Server Operator default-not-ready-toleration-seconds 60s Kubernetes API Server Operator default-unreachable-toleration-seconds 60s 8.8.2. Using and changing worker latency profiles To change a worker latency profile to deal with network latency, edit the node.config object to add the name of the profile. You can change the profile at any time as latency increases or decreases. You must move one worker latency profile at a time. For example, you cannot move directly from the Default profile to the LowUpdateSlowReaction worker latency profile. You must move from the Default worker latency profile to the MediumUpdateAverageReaction profile first, then to LowUpdateSlowReaction . Similarly, when returning to the Default profile, you must move from the low profile to the medium profile first, then to Default . Note You can also configure worker latency profiles upon installing an OpenShift Container Platform cluster. Procedure To move from the default worker latency profile: Move to the medium worker latency profile: Edit the node.config object: USD oc edit nodes.config/cluster Add spec.workerLatencyProfile: MediumUpdateAverageReaction : Example node.config object apiVersion: config.openshift.io/v1 kind: Node metadata: annotations: include.release.openshift.io/ibm-cloud-managed: "true" include.release.openshift.io/self-managed-high-availability: "true" include.release.openshift.io/single-node-developer: "true" release.openshift.io/create-only: "true" creationTimestamp: "2022-07-08T16:02:51Z" generation: 1 name: cluster ownerReferences: - apiVersion: config.openshift.io/v1 kind: ClusterVersion name: version uid: 36282574-bf9f-409e-a6cd-3032939293eb resourceVersion: "1865" uid: 0c0f7a4c-4307-4187-b591-6155695ac85b spec: workerLatencyProfile: MediumUpdateAverageReaction 1 # ... 1 Specifies the medium worker latency policy. Scheduling on each worker node is disabled as the change is being applied. Optional: Move to the low worker latency profile: Edit the node.config object: USD oc edit nodes.config/cluster Change the spec.workerLatencyProfile value to LowUpdateSlowReaction : Example node.config object apiVersion: config.openshift.io/v1 kind: Node metadata: annotations: include.release.openshift.io/ibm-cloud-managed: "true" include.release.openshift.io/self-managed-high-availability: "true" include.release.openshift.io/single-node-developer: "true" release.openshift.io/create-only: "true" creationTimestamp: "2022-07-08T16:02:51Z" generation: 1 name: cluster ownerReferences: - apiVersion: config.openshift.io/v1 kind: ClusterVersion name: version uid: 36282574-bf9f-409e-a6cd-3032939293eb resourceVersion: "1865" uid: 0c0f7a4c-4307-4187-b591-6155695ac85b spec: workerLatencyProfile: LowUpdateSlowReaction 1 # ... 1 Specifies use of the low worker latency policy. Scheduling on each worker node is disabled as the change is being applied. Verification When all nodes return to the Ready condition, you can use the following command to look in the Kubernetes Controller Manager to ensure it was applied: USD oc get KubeControllerManager -o yaml | grep -i workerlatency -A 5 -B 5 Example output # ... 
- lastTransitionTime: "2022-07-11T19:47:10Z" reason: ProfileUpdated status: "False" type: WorkerLatencyProfileProgressing - lastTransitionTime: "2022-07-11T19:47:10Z" 1 message: all static pod revision(s) have updated latency profile reason: ProfileUpdated status: "True" type: WorkerLatencyProfileComplete - lastTransitionTime: "2022-07-11T19:20:11Z" reason: AsExpected status: "False" type: WorkerLatencyProfileDegraded - lastTransitionTime: "2022-07-11T19:20:36Z" status: "False" # ... 1 Specifies that the profile is applied and active. To change the medium profile to default or change the default to medium, edit the node.config object and set the spec.workerLatencyProfile parameter to the appropriate value.
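If you prefer patches to interactive edits, the same one-step-at-a-time movement between profiles can be scripted. This is a minimal sketch; wait for all nodes to return to the Ready state after each step before applying the next one:
# Step 1: Default -> MediumUpdateAverageReaction
oc patch nodes.config/cluster --type merge -p '{"spec":{"workerLatencyProfile":"MediumUpdateAverageReaction"}}'
oc get nodes -w    # wait until no node shows SchedulingDisabled
# Step 2: MediumUpdateAverageReaction -> LowUpdateSlowReaction
oc patch nodes.config/cluster --type merge -p '{"spec":{"workerLatencyProfile":"LowUpdateSlowReaction"}}'
oc get KubeControllerManager -o yaml | grep -i workerlatency -A 5 -B 5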
[ "oc get events [-n <project>] 1", "oc get events -n openshift-config", "LAST SEEN TYPE REASON OBJECT MESSAGE 97m Normal Scheduled pod/dapi-env-test-pod Successfully assigned openshift-config/dapi-env-test-pod to ip-10-0-171-202.ec2.internal 97m Normal Pulling pod/dapi-env-test-pod pulling image \"gcr.io/google_containers/busybox\" 97m Normal Pulled pod/dapi-env-test-pod Successfully pulled image \"gcr.io/google_containers/busybox\" 97m Normal Created pod/dapi-env-test-pod Created container 9m5s Warning FailedCreatePodSandBox pod/dapi-volume-test-pod Failed create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dapi-volume-test-pod_openshift-config_6bc60c1f-452e-11e9-9140-0eec59c23068_0(748c7a40db3d08c07fb4f9eba774bd5effe5f0d5090a242432a73eee66ba9e22): Multus: Err adding pod to network \"openshift-sdn\": cannot set \"openshift-sdn\" ifname to \"eth0\": no netns: failed to Statfs \"/proc/33366/ns/net\": no such file or directory 8m31s Normal Scheduled pod/dapi-volume-test-pod Successfully assigned openshift-config/dapi-volume-test-pod to ip-10-0-171-202.ec2.internal", "apiVersion: v1 kind: Pod metadata: name: small-pod labels: app: guestbook tier: frontend spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: php-redis image: gcr.io/google-samples/gb-frontend:v4 imagePullPolicy: Always resources: limits: cpu: 150m memory: 100Mi requests: cpu: 150m memory: 100Mi securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL]", "oc create -f <file_name>.yaml", "oc create -f pod-spec.yaml", "podman login registry.redhat.io", "podman pull registry.redhat.io/openshift4/ose-cluster-capacity", "podman run -v USDHOME/.kube:/kube:Z -v USD(pwd):/cc:Z ose-cluster-capacity /bin/cluster-capacity --kubeconfig /kube/config --<pod_spec>.yaml /cc/<pod_spec>.yaml --verbose", "small-pod pod requirements: - CPU: 150m - Memory: 100Mi The cluster can schedule 88 instance(s) of the pod small-pod. Termination reason: Unschedulable: 0/5 nodes are available: 2 Insufficient cpu, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate. 
Pod distribution among nodes: small-pod - 192.168.124.214: 45 instance(s) - 192.168.124.120: 43 instance(s)", "kind: ClusterRole apiVersion: rbac.authorization.k8s.io/v1 metadata: name: cluster-capacity-role rules: - apiGroups: [\"\"] resources: [\"pods\", \"nodes\", \"persistentvolumeclaims\", \"persistentvolumes\", \"services\", \"replicationcontrollers\"] verbs: [\"get\", \"watch\", \"list\"] - apiGroups: [\"apps\"] resources: [\"replicasets\", \"statefulsets\"] verbs: [\"get\", \"watch\", \"list\"] - apiGroups: [\"policy\"] resources: [\"poddisruptionbudgets\"] verbs: [\"get\", \"watch\", \"list\"] - apiGroups: [\"storage.k8s.io\"] resources: [\"storageclasses\"] verbs: [\"get\", \"watch\", \"list\"]", "oc create -f <file_name>.yaml", "oc create sa cluster-capacity-sa", "oc create sa cluster-capacity-sa -n default", "oc adm policy add-cluster-role-to-user cluster-capacity-role system:serviceaccount:<namespace>:cluster-capacity-sa", "apiVersion: v1 kind: Pod metadata: name: small-pod labels: app: guestbook tier: frontend spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: php-redis image: gcr.io/google-samples/gb-frontend:v4 imagePullPolicy: Always resources: limits: cpu: 150m memory: 100Mi requests: cpu: 150m memory: 100Mi securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL]", "oc create -f <file_name>.yaml", "oc create -f pod.yaml", "oc create configmap cluster-capacity-configmap --from-file=pod.yaml=pod.yaml", "apiVersion: batch/v1 kind: Job metadata: name: cluster-capacity-job spec: parallelism: 1 completions: 1 template: metadata: name: cluster-capacity-pod spec: containers: - name: cluster-capacity image: openshift/origin-cluster-capacity imagePullPolicy: \"Always\" volumeMounts: - mountPath: /test-pod name: test-volume env: - name: CC_INCLUSTER 1 value: \"true\" command: - \"/bin/sh\" - \"-ec\" - | /bin/cluster-capacity --podspec=/test-pod/pod.yaml --verbose restartPolicy: \"Never\" serviceAccountName: cluster-capacity-sa volumes: - name: test-volume configMap: name: cluster-capacity-configmap", "oc create -f cluster-capacity-job.yaml", "oc logs jobs/cluster-capacity-job", "small-pod pod requirements: - CPU: 150m - Memory: 100Mi The cluster can schedule 52 instance(s) of the pod small-pod. Termination reason: Unschedulable: No nodes are available that match all of the following predicates:: Insufficient cpu (2). 
Pod distribution among nodes: small-pod - 192.168.124.214: 26 instance(s) - 192.168.124.120: 26 instance(s)", "apiVersion: \"v1\" kind: \"LimitRange\" metadata: name: \"resource-limits\" spec: limits: - type: \"Container\" max: cpu: \"2\" memory: \"1Gi\" min: cpu: \"100m\" memory: \"4Mi\" default: cpu: \"300m\" memory: \"200Mi\" defaultRequest: cpu: \"200m\" memory: \"100Mi\" maxLimitRequestRatio: cpu: \"10\"", "apiVersion: \"v1\" kind: \"LimitRange\" metadata: name: \"resource-limits\" 1 spec: limits: - type: \"Container\" max: cpu: \"2\" 2 memory: \"1Gi\" 3 min: cpu: \"100m\" 4 memory: \"4Mi\" 5 default: cpu: \"300m\" 6 memory: \"200Mi\" 7 defaultRequest: cpu: \"200m\" 8 memory: \"100Mi\" 9 maxLimitRequestRatio: cpu: \"10\" 10", "apiVersion: \"v1\" kind: \"LimitRange\" metadata: name: \"resource-limits\" 1 spec: limits: - type: \"Pod\" max: cpu: \"2\" 2 memory: \"1Gi\" 3 min: cpu: \"200m\" 4 memory: \"6Mi\" 5 maxLimitRequestRatio: cpu: \"10\" 6", "apiVersion: \"v1\" kind: \"LimitRange\" metadata: name: \"resource-limits\" 1 spec: limits: - type: openshift.io/Image max: storage: 1Gi 2", "apiVersion: \"v1\" kind: \"LimitRange\" metadata: name: \"resource-limits\" 1 spec: limits: - type: openshift.io/ImageStream max: openshift.io/image-tags: 20 2 openshift.io/images: 30 3", "apiVersion: \"v1\" kind: \"LimitRange\" metadata: name: \"resource-limits\" 1 spec: limits: - type: \"PersistentVolumeClaim\" min: storage: \"2Gi\" 2 max: storage: \"50Gi\" 3", "apiVersion: \"v1\" kind: \"LimitRange\" metadata: name: \"resource-limits\" 1 spec: limits: - type: \"Pod\" 2 max: cpu: \"2\" memory: \"1Gi\" min: cpu: \"200m\" memory: \"6Mi\" - type: \"Container\" 3 max: cpu: \"2\" memory: \"1Gi\" min: cpu: \"100m\" memory: \"4Mi\" default: 4 cpu: \"300m\" memory: \"200Mi\" defaultRequest: 5 cpu: \"200m\" memory: \"100Mi\" maxLimitRequestRatio: 6 cpu: \"10\" - type: openshift.io/Image 7 max: storage: 1Gi - type: openshift.io/ImageStream 8 max: openshift.io/image-tags: 20 openshift.io/images: 30 - type: \"PersistentVolumeClaim\" 9 min: storage: \"2Gi\" max: storage: \"50Gi\"", "oc create -f <limit_range_file> -n <project> 1", "oc get limits -n demoproject", "NAME CREATED AT resource-limits 2020-07-15T17:14:23Z", "oc describe limits resource-limits -n demoproject", "Name: resource-limits Namespace: demoproject Type Resource Min Max Default Request Default Limit Max Limit/Request Ratio ---- -------- --- --- --------------- ------------- ----------------------- Pod cpu 200m 2 - - - Pod memory 6Mi 1Gi - - - Container cpu 100m 2 200m 300m 10 Container memory 4Mi 1Gi 100Mi 200Mi - openshift.io/Image storage - 1Gi - - - openshift.io/ImageStream openshift.io/image - 12 - - - openshift.io/ImageStream openshift.io/image-tags - 10 - - - PersistentVolumeClaim storage - 50Gi - - -", "oc delete limits <limit_name>", "-XX:+UseParallelGC -XX:MinHeapFreeRatio=5 -XX:MaxHeapFreeRatio=10 -XX:GCTimeRatio=4 -XX:AdaptiveSizePolicyWeight=90.", "JAVA_TOOL_OPTIONS=\"-XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -Dsun.zip.disableMemoryMapping=true\"", "apiVersion: v1 kind: Pod metadata: name: test spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: test image: fedora:latest command: - sleep - \"3600\" env: - name: MEMORY_REQUEST 1 valueFrom: resourceFieldRef: containerName: test resource: requests.memory - name: MEMORY_LIMIT 2 valueFrom: resourceFieldRef: containerName: test resource: limits.memory resources: requests: memory: 384Mi limits: memory: 512Mi securityContext: 
allowPrivilegeEscalation: false capabilities: drop: [ALL]", "oc create -f <file-name>.yaml", "oc rsh test", "env | grep MEMORY | sort", "MEMORY_LIMIT=536870912 MEMORY_REQUEST=402653184", "oc rsh test", "grep '^oom_kill ' /sys/fs/cgroup/memory/memory.oom_control", "oom_kill 0", "sed -e '' </dev/zero", "Killed", "echo USD?", "137", "grep '^oom_kill ' /sys/fs/cgroup/memory/memory.oom_control", "oom_kill 1", "oc get pod test", "NAME READY STATUS RESTARTS AGE test 0/1 OOMKilled 0 1m", "oc get pod test -o yaml", "status: containerStatuses: - name: test ready: false restartCount: 0 state: terminated: exitCode: 137 reason: OOMKilled phase: Failed", "oc get pod test -o yaml", "status: containerStatuses: - name: test ready: true restartCount: 1 lastState: terminated: exitCode: 137 reason: OOMKilled state: running: phase: Running", "oc get pod test", "NAME READY STATUS RESTARTS AGE test 0/1 Evicted 0 1m", "oc get pod test -o yaml", "status: message: 'Pod The node was low on resource: [MemoryPressure].' phase: Failed reason: Evicted", "apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: name: cluster 1 spec: podResourceOverride: spec: memoryRequestToLimitPercent: 50 2 cpuRequestToLimitPercent: 25 3 limitCPUToMemoryPercent: 200 4", "apiVersion: v1 kind: Namespace metadata: labels: clusterresourceoverrides.admission.autoscaling.openshift.io/enabled: \"true\"", "apiVersion: v1 kind: Pod metadata: name: my-pod namespace: my-namespace spec: containers: - name: hello-openshift image: openshift/hello-openshift resources: limits: memory: \"512Mi\" cpu: \"2000m\"", "apiVersion: v1 kind: Pod metadata: name: my-pod namespace: my-namespace spec: containers: - image: openshift/hello-openshift name: hello-openshift resources: limits: cpu: \"1\" 1 memory: 512Mi requests: cpu: 250m 2 memory: 256Mi", "apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: name: cluster 1 spec: podResourceOverride: spec: memoryRequestToLimitPercent: 50 2 cpuRequestToLimitPercent: 25 3 limitCPUToMemoryPercent: 200 4", "apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | {\"apiVersion\":\"operator.autoscaling.openshift.io/v1\",\"kind\":\"ClusterResourceOverride\",\"metadata\":{\"annotations\":{},\"name\":\"cluster\"},\"spec\":{\"podResourceOverride\":{\"spec\":{\"cpuRequestToLimitPercent\":25,\"limitCPUToMemoryPercent\":200,\"memoryRequestToLimitPercent\":50}}}} creationTimestamp: \"2019-12-18T22:35:02Z\" generation: 1 name: cluster resourceVersion: \"127622\" selfLink: /apis/operator.autoscaling.openshift.io/v1/clusterresourceoverrides/cluster uid: 978fc959-1717-4bd1-97d0-ae00ee111e8d spec: podResourceOverride: spec: cpuRequestToLimitPercent: 25 limitCPUToMemoryPercent: 200 memoryRequestToLimitPercent: 50 status: mutatingWebhookConfigurationRef: 1 apiVersion: admissionregistration.k8s.io/v1 kind: MutatingWebhookConfiguration name: clusterresourceoverrides.admission.autoscaling.openshift.io resourceVersion: \"127621\" uid: 98b3b8ae-d5ce-462b-8ab5-a729ea8f38f3", "apiVersion: v1 kind: Namespace metadata: name: clusterresourceoverride-operator", "oc create -f <file-name>.yaml", "oc create -f cro-namespace.yaml", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: clusterresourceoverride-operator namespace: clusterresourceoverride-operator spec: targetNamespaces: - clusterresourceoverride-operator", "oc create -f <file-name>.yaml", "oc create -f 
cro-og.yaml", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: clusterresourceoverride namespace: clusterresourceoverride-operator spec: channel: \"stable\" name: clusterresourceoverride source: redhat-operators sourceNamespace: openshift-marketplace", "oc create -f <file-name>.yaml", "oc create -f cro-sub.yaml", "oc project clusterresourceoverride-operator", "apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: name: cluster 1 spec: podResourceOverride: spec: memoryRequestToLimitPercent: 50 2 cpuRequestToLimitPercent: 25 3 limitCPUToMemoryPercent: 200 4", "oc create -f <file-name>.yaml", "oc create -f cro-cr.yaml", "oc get clusterresourceoverride cluster -n clusterresourceoverride-operator -o yaml", "apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | {\"apiVersion\":\"operator.autoscaling.openshift.io/v1\",\"kind\":\"ClusterResourceOverride\",\"metadata\":{\"annotations\":{},\"name\":\"cluster\"},\"spec\":{\"podResourceOverride\":{\"spec\":{\"cpuRequestToLimitPercent\":25,\"limitCPUToMemoryPercent\":200,\"memoryRequestToLimitPercent\":50}}}} creationTimestamp: \"2019-12-18T22:35:02Z\" generation: 1 name: cluster resourceVersion: \"127622\" selfLink: /apis/operator.autoscaling.openshift.io/v1/clusterresourceoverrides/cluster uid: 978fc959-1717-4bd1-97d0-ae00ee111e8d spec: podResourceOverride: spec: cpuRequestToLimitPercent: 25 limitCPUToMemoryPercent: 200 memoryRequestToLimitPercent: 50 status: mutatingWebhookConfigurationRef: 1 apiVersion: admissionregistration.k8s.io/v1 kind: MutatingWebhookConfiguration name: clusterresourceoverrides.admission.autoscaling.openshift.io resourceVersion: \"127621\" uid: 98b3b8ae-d5ce-462b-8ab5-a729ea8f38f3", "apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: name: cluster spec: podResourceOverride: spec: memoryRequestToLimitPercent: 50 1 cpuRequestToLimitPercent: 25 2 limitCPUToMemoryPercent: 200 3", "apiVersion: v1 kind: Namespace metadata: labels: clusterresourceoverrides.admission.autoscaling.openshift.io/enabled: \"true\" 1", "sysctl -a |grep commit", "# vm.overcommit_memory = 0 #", "sysctl -a |grep panic", "# vm.panic_on_oom = 0 #", "oc edit machineconfigpool <name>", "oc edit machineconfigpool worker", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: creationTimestamp: \"2022-11-16T15:34:25Z\" generation: 4 labels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 1 name: worker", "oc label machineconfigpool worker custom-kubelet=small-pods", "apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: disable-cpu-units 1 spec: machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 2 kubeletConfig: cpuCfsQuota: false 3", "oc create -f <file_name>.yaml", "sysctl -w vm.overcommit_memory=0", "apiVersion: v1 kind: Namespace metadata: annotations: quota.openshift.io/cluster-resource-override-enabled: \"false\" 1", "oc edit nodes.config/cluster", "apiVersion: config.openshift.io/v1 kind: Node metadata: annotations: include.release.openshift.io/ibm-cloud-managed: \"true\" include.release.openshift.io/self-managed-high-availability: \"true\" include.release.openshift.io/single-node-developer: \"true\" release.openshift.io/create-only: \"true\" creationTimestamp: \"2022-07-08T16:02:51Z\" generation: 1 name: cluster ownerReferences: - 
apiVersion: config.openshift.io/v1 kind: ClusterVersion name: version uid: 36282574-bf9f-409e-a6cd-3032939293eb resourceVersion: \"1865\" uid: 0c0f7a4c-4307-4187-b591-6155695ac85b spec: cgroupMode: \"v1\" 1", "oc get mc", "NAME GENERATEDBYCONTROLLER IGNITIONVERSION AGE 00-master 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 00-worker 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 97-master-generated-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-worker-generated-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-master-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-master-ssh 3.2.0 40m 99-worker-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-worker-ssh 3.2.0 40m rendered-master-23d4317815a5f854bd3553d689cfe2e9 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 10s 1 rendered-master-23e785de7587df95a4b517e0647e5ab7 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m rendered-worker-5d596d9293ca3ea80c896a1191735bb1 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m rendered-worker-dcc7f1b92892d34db74d6832bcc9ccd4 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 10s", "oc describe mc <name>", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 05-worker-kernelarg-selinuxpermissive spec: kernelArguments: systemd_unified_cgroup_hierarchy=1 1 cgroup_no_v1=\"all\" 2 psi=0", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 05-worker-kernelarg-selinuxpermissive spec: kernelArguments: systemd.unified_cgroup_hierarchy=0 1 systemd.legacy_systemd_cgroup_controller=1 2 psi=1 3", "oc get nodes", "NAME STATUS ROLES AGE VERSION ci-ln-fm1qnwt-72292-99kt6-master-0 Ready,SchedulingDisabled master 58m v1.28.5 ci-ln-fm1qnwt-72292-99kt6-master-1 Ready master 58m v1.28.5 ci-ln-fm1qnwt-72292-99kt6-master-2 Ready master 58m v1.28.5 ci-ln-fm1qnwt-72292-99kt6-worker-a-h5gt4 Ready,SchedulingDisabled worker 48m v1.28.5 ci-ln-fm1qnwt-72292-99kt6-worker-b-7vtmd Ready worker 48m v1.28.5 ci-ln-fm1qnwt-72292-99kt6-worker-c-rhzkv Ready worker 48m v1.28.5", "oc debug node/<node_name>", "sh-4.4# chroot /host", "stat -c %T -f /sys/fs/cgroup", "cgroup2fs", "tmpfs", "compute: - hyperthreading: Enabled name: worker platform: aws: rootVolume: iops: 2000 size: 500 type: io1 metadataService: authentication: Optional type: c5.4xlarge zones: - us-west-2c replicas: 3 featureSet: TechPreviewNoUpgrade", "sh-4.2# chroot /host", "sh-4.2# cat /etc/kubernetes/kubelet.conf", "featureGates: InsightsOperatorPullingSCA: true, LegacyNodeRoleBehavior: false", "apiVersion: config.openshift.io/v1 kind: FeatureGate metadata: name: cluster 1 spec: featureSet: TechPreviewNoUpgrade 2", "sh-4.2# chroot /host", "sh-4.2# cat /etc/kubernetes/kubelet.conf", "featureGates: InsightsOperatorPullingSCA: true, LegacyNodeRoleBehavior: false", "oc edit featuregate cluster", "apiVersion: config.openshift.io/v1 kind: FeatureGate metadata: name: cluster 1 spec: featureSet: TechPreviewNoUpgrade 2", "sh-4.2# chroot /host", "sh-4.2# cat /etc/kubernetes/kubelet.conf", "featureGates: InsightsOperatorPullingSCA: true, 
LegacyNodeRoleBehavior: false", "oc edit nodes.config/cluster", "apiVersion: config.openshift.io/v1 kind: Node metadata: annotations: include.release.openshift.io/ibm-cloud-managed: \"true\" include.release.openshift.io/self-managed-high-availability: \"true\" include.release.openshift.io/single-node-developer: \"true\" release.openshift.io/create-only: \"true\" creationTimestamp: \"2022-07-08T16:02:51Z\" generation: 1 name: cluster ownerReferences: - apiVersion: config.openshift.io/v1 kind: ClusterVersion name: version uid: 36282574-bf9f-409e-a6cd-3032939293eb resourceVersion: \"1865\" uid: 0c0f7a4c-4307-4187-b591-6155695ac85b spec: workerLatencyProfile: MediumUpdateAverageReaction 1", "oc edit nodes.config/cluster", "apiVersion: config.openshift.io/v1 kind: Node metadata: annotations: include.release.openshift.io/ibm-cloud-managed: \"true\" include.release.openshift.io/self-managed-high-availability: \"true\" include.release.openshift.io/single-node-developer: \"true\" release.openshift.io/create-only: \"true\" creationTimestamp: \"2022-07-08T16:02:51Z\" generation: 1 name: cluster ownerReferences: - apiVersion: config.openshift.io/v1 kind: ClusterVersion name: version uid: 36282574-bf9f-409e-a6cd-3032939293eb resourceVersion: \"1865\" uid: 0c0f7a4c-4307-4187-b591-6155695ac85b spec: workerLatencyProfile: LowUpdateSlowReaction 1", "oc get KubeControllerManager -o yaml | grep -i workerlatency -A 5 -B 5", "- lastTransitionTime: \"2022-07-11T19:47:10Z\" reason: ProfileUpdated status: \"False\" type: WorkerLatencyProfileProgressing - lastTransitionTime: \"2022-07-11T19:47:10Z\" 1 message: all static pod revision(s) have updated latency profile reason: ProfileUpdated status: \"True\" type: WorkerLatencyProfileComplete - lastTransitionTime: \"2022-07-11T19:20:11Z\" reason: AsExpected status: \"False\" type: WorkerLatencyProfileDegraded - lastTransitionTime: \"2022-07-11T19:20:36Z\" status: \"False\"" ]
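The ClusterResourceOverride objects created by the commands above can be checked with a few read-only queries. A minimal verification sketch, assuming the names shown in the output above (the clusterresourceoverride-operator namespace, the cluster custom resource, the clusterresourceoverrides.admission.autoscaling.openshift.io webhook, and the opt-in namespace label); adjust the names if your deployment differs.

# Confirm the ClusterResourceOverride custom resource exists and reports a webhook reference in its status
oc get clusterresourceoverride cluster -n clusterresourceoverride-operator -o yaml
# Confirm the mutating admission webhook referenced in that status was created
oc get mutatingwebhookconfiguration clusterresourceoverrides.admission.autoscaling.openshift.io
# List only the namespaces that opted in to the override with the enablement label
oc get namespaces -l clusterresourceoverrides.admission.autoscaling.openshift.io/enabled=true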
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/nodes/working-with-clusters
Chapter 8. Clair security scanner
Chapter 8. Clair security scanner Clair v4 (Clair) is an open source application that leverages static code analysis for parsing image content and reporting vulnerabilities affecting the content. Clair is packaged with Quay.io, is automatically enabled, and is managed by the Red Hat Quay development team. For Quay.io users, images are automatically indexed after they are pushed to your repository. Reports are then fetched from Clair, which matches images against its CVE database to report security information. This process happens automatically on Quay.io, and manual rescans are not required. 8.1. About Clair Clair uses Common Vulnerability Scoring System (CVSS) data from the National Vulnerability Database (NVD) to enrich vulnerability data. The NVD is a United States government repository of security-related information, including known vulnerabilities and security issues in various software components and systems. Using scores from the NVD provides Clair with the following benefits: Data synchronization . Clair can periodically synchronize its vulnerability database with the NVD. This ensures that it has the latest vulnerability data. Matching and enrichment . Clair compares the metadata and identifiers of vulnerabilities it discovers in container images with the data from the NVD. This process involves matching the unique identifiers, such as Common Vulnerabilities and Exposures (CVE) IDs, to the entries in the NVD. When a match is found, Clair can enrich its vulnerability information with additional details from NVD, such as severity scores, descriptions, and references. Severity Scores . The NVD assigns severity scores to vulnerabilities, such as the Common Vulnerability Scoring System (CVSS) score, to indicate the potential impact and risk associated with each vulnerability. By incorporating NVD's severity scores, Clair can provide more context on the seriousness of the vulnerabilities it detects. If Clair finds vulnerabilities from NVD, a detailed and standardized assessment of the severity and potential impact of vulnerabilities detected within container images is reported to users on the UI. CVSS enrichment data provides Clair with the following benefits: Vulnerability prioritization . By utilizing CVSS scores, users can prioritize vulnerabilities based on their severity, helping them address the most critical issues first. Assess Risk . CVSS scores can help Clair users understand the potential risk a vulnerability poses to their containerized applications. Communicate Severity . CVSS scores provide Clair users a standardized way to communicate the severity of vulnerabilities across teams and organizations. Inform Remediation Strategies . CVSS enrichment data can guide Quay.io users in developing appropriate remediation strategies. Compliance and Reporting . Integrating CVSS data into reports generated by Clair can help organizations demonstrate their commitment to addressing security vulnerabilities and complying with industry standards and regulations. 8.1.1. Clair vulnerability databases Clair uses the following vulnerability databases to report issues in your images: Ubuntu Oval database Debian Security Tracker Red Hat Enterprise Linux (RHEL) Oval database SUSE Oval database Oracle Oval database Alpine SecDB database VMware Photon OS database Amazon Web Services (AWS) UpdateInfo Open Source Vulnerability (OSV) Database 8.1.2.
Clair supported dependencies Clair supports identifying and managing the following dependencies: Java Golang Python Ruby This means that it can analyze and report on the third-party libraries and packages that a project in these languages relies on to work correctly. When an image that contains packages from a language unsupported by Clair is pushed to your repository, a vulnerability scan cannot be performed on those packages. Users do not receive an analysis or security report for unsupported dependencies or packages. As a result, the following consequences should be considered: Security risks . Dependencies or packages that are not scanned for vulnerabilities might pose security risks to your organization. Compliance issues . If your organization has specific security or compliance requirements, unscanned, or partially scanned, container images might result in non-compliance with certain regulations. Note Scanned images are indexed, and a vulnerability report is created, but it might omit data from certain unsupported languages. For example, if your container image contains a Lua application, feedback from Clair is not provided because Clair does not detect it. It can detect other languages used in the container image, and shows detected CVEs for those languages. As a result, Clair images are fully scanned based on what is supported by Clair. 8.2. Viewing Clair security scans by using the UI You can view Clair security scans on the UI. Procedure Navigate to a repository and click Tags in the navigation pane. This page shows the results of the security scan. To reveal more information about multi-architecture images, click See Child Manifests to see the list of manifests in extended view. Click a relevant link under See Child Manifests , for example, 1 Unknown to be redirected to the Security Scanner page. The Security Scanner page provides information for the tag, such as which CVEs the image is susceptible to, and what remediation options you might have available. Note Image scanning only lists vulnerabilities found by the Clair security scanner. What users do about the vulnerabilities that are uncovered is up to the user. 8.3. Clair severity mapping Clair offers a comprehensive approach to vulnerability assessment and management. One of its essential features is the normalization of security databases' severity strings. This process streamlines the assessment of vulnerability severities by mapping them to a predefined set of values. Through this mapping, clients can efficiently react to vulnerability severities without the need to decipher the intricacies of each security database's unique severity strings. These mapped severity strings align with those found within the respective security databases, ensuring consistency and accuracy in vulnerability assessment. 8.3.1. Clair severity strings Clair alerts users with the following severity strings: Unknown Negligible Low Medium High Critical These severity strings are similar to the strings found within the relevant security database. Alpine mapping Alpine SecDB database does not provide severity information. All vulnerability severities will be Unknown. Alpine Severity Clair Severity * Unknown AWS mapping AWS UpdateInfo database provides severity information. AWS Severity Clair Severity low Low medium Medium important High critical Critical Debian mapping Debian Oval database provides severity information.
Debian Severity Clair Severity * Unknown Unimportant Low Low Low Medium Medium High High Critical Critical Oracle mapping Oracle Oval database provides severity information. Oracle Severity Clair Severity N/A Unknown LOW Low MODERATE Medium IMPORTANT High CRITICAL Critical RHEL mapping RHEL Oval database provides severity information. RHEL Severity Clair Severity None Unknown Low Low Moderate Medium Important High Critical Critical SUSE mapping SUSE Oval database provides severity information. Severity Clair Severity None Unknown Low Low Moderate Medium Important High Critical Critical Ubuntu mapping Ubuntu Oval database provides severity information. Severity Clair Severity Untriaged Unknown Negligible Negligible Low Low Medium Medium High High Critical Critical OSV mapping Table 8.1. CVSSv3 Base Score Clair Severity 0.0 Negligible 0.1-3.9 Low 4.0-6.9 Medium 7.0-8.9 High 9.0-10.0 Critical Table 8.2. CVSSv2 Base Score Clair Severity 0.0-3.9 Low 4.0-6.9 Medium 7.0-10 High
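When post-processing Clair reports outside the UI, the OSV mapping in Tables 8.1 and 8.2 can be expressed as a small helper. The following is a minimal bash sketch of the CVSSv3 mapping only; the function name is arbitrary and the thresholds simply restate Table 8.1, so this is not part of any Clair or Quay.io CLI.

#!/usr/bin/env bash
# Map a CVSSv3 base score to the Clair severity string from Table 8.1.
cvss3_to_clair_severity() {
  awk -v s="$1" 'BEGIN {
    sev = "Unknown"
    if (s <= 10.0) sev = "Critical"
    if (s <= 8.9)  sev = "High"
    if (s <= 6.9)  sev = "Medium"
    if (s <= 3.9)  sev = "Low"
    if (s == 0.0)  sev = "Negligible"
    print sev
  }'
}

cvss3_to_clair_severity 7.5   # prints High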
null
https://docs.redhat.com/en/documentation/red_hat_quay/3.12/html/about_quay_io/clair-vulnerability-scanner
28.3. Adding a New Password Policy
28.3. Adding a New Password Policy When adding a new password policy, you must specify: a user group to which the policy will apply (see Section 28.2.2, "Global and Group-specific Password Policies" ) a priority (see Section 28.2.3, "Password Policy Priorities" ) To add a new password policy using: the web UI, see the section called "Web UI: Adding a New Password Policy" the command line, see the section called "Command Line: Adding a New Password Policy" Web UI: Adding a New Password Policy Select Policy Password Policies . Click Add . Define the user group and priority. Click Add to confirm. To configure the attributes of the new password policy, see Section 28.4, "Modifying Password Policy Attributes" . Command Line: Adding a New Password Policy Use the ipa pwpolicy-add command. Specify the user group and priority: Optional. Use the ipa pwpolicy-find command to verify that the policy has been successfully added: To configure the attributes of the new password policy, see Section 28.4, "Modifying Password Policy Attributes" .
[ "ipa pwpolicy-add Group: group_name Priority: priority_level", "ipa pwpolicy-find" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/linux_domain_identity_authentication_and_policy_guide/pwd-policies-add
Chapter 4. New features
Chapter 4. New features This part describes new features and major enhancements introduced in Red Hat Enterprise Linux 8.8. 4.1. Installer and image creation A new and improved way to create blueprints and images in the image builder web console With this enhancement, you have access to a unified version of the image builder tool and a significant improvement in your user experience. Notable enhancements in the image builder dashboard GUI include: You can now customize your blueprints with all the customizations previously supported only in the CLI, such as kernel, file system, firewall, locale, and other customizations. You can import blueprints by either uploading or dragging the blueprint in the .JSON or .TOML format and create images from the imported blueprint. You can also export or save your blueprints in the .JSON or .TOML format. Access to a blueprint list that you can sort and filter, with case-sensitive search. With the image builder dashboard, you can now access your blueprints, images, and sources by navigating through the following tabs: Blueprint - Under the Blueprint tab, you can now import, export, or delete your blueprints. Images - Under the Images tab, you can: Download images. Download image logs. Delete images. Sources - Under the Sources tab, you can: Download images. Download image logs. Create sources for images. Delete images. Jira:RHELPLAN-139448 Support for 64-bit ARM for .vhd images built with image builder Previously, Microsoft Azure .vhd images created with the image builder tool were not supported on 64-bit ARM architectures. This update adds support for 64-bit ARM Microsoft Azure .vhd images, and you can now build your .vhd images using image builder and upload them to the Microsoft Azure cloud. Jira:RHELPLAN-139424 4.2. RHEL for Edge Ability to specify a user in a blueprint for simplified-installer images Previously, when creating a blueprint for a simplified-installer image, you could not specify a user in the blueprint customization, because the customization was not used and was discarded. With this update, when you create an image from the blueprint, this blueprint creates a user under the /usr/lib/passwd directory and a password under the /usr/etc/shadow directory during installation time. You can log in to the device with the username and the password you created for the blueprint. Note that after you access the system, you need to create users, for example, using the useradd command. Jira:RHELPLAN-149091 Red Hat build of MicroShift enablement for RHEL for Edge images With this enhancement, you can enable Red Hat build of MicroShift services in a RHEL for Edge system. By using the [[customizations.firewalld.zones]] blueprint customization, you can add support for firewalld sources in the blueprint customization. For that, specify a name for the zone and a list of sources in that specific zone. Sources can be of the form source[/mask]|MAC|ipset:ipset . The following is a blueprint example of how to configure and customize support for Red Hat build of MicroShift services in a RHEL for Edge system. The Red Hat build of MicroShift installation requirements, such as the firewall policies, the MicroShift RPM, and the systemd service, enable you to create a production-ready deployment that achieves workload portability to a minimal field-deployed edge device, with LVM device mapper enablement by default. Jira:RHELPLAN-136489 4.3.
Software management New yum offline-upgrade command for offline updates on RHEL With this enhancement, you can apply offline updates to RHEL by using the new yum offline-upgrade command from the YUM system-upgrade plug-in. Important The yum system-upgrade command included in the system-upgrade plug-in is not supported on RHEL. Bugzilla:2054235 Applying advisory security filters to yum offline-upgrade is now supported With this enhancement, the new functionality for advisories filtering has been added. As a result, you can now download packages and their dependencies only from the specified advisory by using the yum offline-upgrade command with advisory security filters ( --advisory , --security , --bugfix , and other filters). Bugzilla:2139324 The unload_plugins function is now available for the YUM API With this enhancement, a new unload_plugins function has been added to the YUM API to allow plug-ins unloading. Important Note that you must first run the init_plugins function, and then run the unload_plugins function. Bugzilla:2047251 New --nocompression option for rpm2archive With this enhancement, the --nocompression option has been added to the rpm2archive utility. You can use this option to avoid compression when directly unpacking an RPM package. Bugzilla:2129345 4.4. Shells and command-line tools ReaR is now fully supported also on the 64-bit IBM Z architecture Basic Relax and Recover (ReaR) functionality, previously available on the 64-bit IBM Z architecture as a Technology Preview, is fully supported with the rear package version 2.6-9.el8 or later. You can create a ReaR rescue image on the IBM Z architecture in the z/VM environment only. Backing up and recovering logical partitions (LPARs) is not supported at the moment. ReaR supports saving and restoring disk layout only on Extended Count Key Data (ECKD) direct access storage devices (DASDs). Fixed Block Access (FBA) DASDs and SCSI disks attached through Fibre Channel Protocol (FCP) are not supported for this purpose. The only output method currently available is Initial Program Load (IPL), which produces a kernel and an initial ramdisk (initrd) compatible with the zIPL bootloader. For more information see Using a ReaR rescue image on the 64-bit IBM Z architecture . Bugzilla:2130206 , Bugzilla:1868421 4.5. Infrastructure services New synce4l package for frequency synchronization is now available SyncE (Synchronous Ethernet) is a hardware feature that enables PTP clocks to achieve precise synchronization of frequency at the physical layer. SyncE is supported in certain network interface cards (NICs) and network switches. With this enhancement, the new synce4l package is now available, which provides support for SyncE. As a result, Telco Radio Access Network (RAN) applications can now achieve more efficient communication due to more accurate time synchronization. Bugzilla:2019751 powertop rebased to version 2.15 The powertop package for improving the energy efficiency has been updated to version 2.15. Notable changes and enhancements include: Several Valgrind errors and possible buffer overrun have been fixed to improve the powertop tool stability. Improved compatibility with Ryzen processors and Kaby Lake platforms. Enabled Lake Field, Alder Lake N, and Raptor Lake platforms support. Enabled Ice Lake NNPI and Meteor Lake mobile and desktop support. Bugzilla:2040070 tuned rebased to version 2.20.0 The TuneD utility for optimizing the performance of applications and workloads has been updated to version 2.20.0. 
Notable changes and enhancements over version 2.19.0 include: An extension of API enables you to move devices between plug-in instances at runtime. The plugin_cpu module, which provides fine-tuning of CPU-related performance settings, introduces the following enhancements: The pm_qos_resume_latency_us feature enables you to limit the maximum time allowed for each CPU to transition from an idle state to an active state. TuneD adds support for the intel_pstate scaling driver, which provides scaling algorithms to tune the systems' power management based on different usage scenarios. The socket API to control TuneD through a Unix domain socket is now available as a Technology Preview. See Socket API for TuneD available as a Technology Preview for more information. Bugzilla:2133814 , Bugzilla:2113925 , Bugzilla:2118786 , Bugzilla:2095829 , Bugzilla:2113900 4.6. Security FIPS mode now has more secure settings that target FIPS 140-3 The FIPS mode settings in the kernel have been adjusted to conform to the Federal Information Processing Standard (FIPS) 140-3. This change introduces stricter settings to many cryptographic algorithms, functions, and cipher suites. Most notably: The Triple Data Encryption Standard (3DES), Elliptic-curve Diffie-Hellman (ECDH), and Finite-Field Diffie-Hellman (FFDH) algorithms are now disabled. This change affects Bluetooth, DH-related operations in the kernel keyring, and Intel QuickAssist Technology (QAT) cryptographic accelerators. The hash-based message authentication code (HMAC) key now cannot be shorter than 112 bits. The minimum key length is set to 2048 bits for Rivest-Shamir-Adleman (RSA) algorithms. Drivers that used the xts_check_key() function have been updated to use the xts_verify_key() function instead. The following Deterministic Random Bit Generator (DRBG) hash functions are now disabled: SHA-224, SHA-384, SHA512-224, SHA512-256, SHA3-224, and SHA3-384. Note Even though the RHEL 8.6 (and newer) kernel in FIPS mode is designed to be compliant with FIPS 140-3, it is not yet certified by the National Institute of Standards and Technology (NIST) Cryptographic Module Validation Program (CMVP). The latest certified kernel module is the updated RHEL 8.5 kernel after the RHSA-2021:4356 advisory update. That certification applies to the FIPS 140-2 standard. You cannot choose whether a cryptographic module conforms to FIPS 140-2 or 140-3. For more information, see the Compliance Activities and Government Standards: FIPS 140-2 and FIPS 140-3 Knowledgebase article. Bugzilla:2107595, Bugzilla:2158893, Bugzilla:2175234, Bugzilla:2166715, Bugzilla:2129392, Bugzilla:2152133 Libreswan rebased to 4.9 The libreswan packages have been upgraded to version 4.9. 
Notable changes over the previous version include: Added support for {left,right}pubkey= to the addconn and whack utilities Added key derivation function (KDF) self-tests Updated list of allowed system calls for the seccomp filter Show host's authentication key ( showhostkey ): Added support for Elliptic Curve Digital Signature Algorithm (ECDSA) pubkeys Added the --pem option to print Privacy-Enhanced Mail (PEM)-encoded public key The Internet Key Exchange Protocol Version 2 (IKEv2): Extensible Authentication Protocol - Transport Layer Security (EAP-TLS) support EAP-only authentication support Labeled IPsec improvements The pluto Internet Key Exchange (IKE) daemon: Support for maxbytes and maxpacket counters Changed default value of replay-window from 32 to 128 Changed the default value of esn= to either and preferred value to yes Disabled esn when replay-window= is set to 0 Dropped obsolete debug options such as crypto-low Bugzilla:2128672 SELinux now confines udftools With this update of the selinux-policy packages, SELinux confines the udftools service. Bugzilla:1972230 New SELinux policy for systemd-socket-proxyd Because the systemd-socket-proxyd service requires particular resource usage, a new policy with the required rules was added to the selinux-policy packages. As a result, the service runs in its own SELinux domain. Bugzilla:2088441 OpenSCAP rebased to 1.3.7 The OpenSCAP packages have been rebased to upstream version 1.3.7. This version provides various bug fixes and enhancements, most notably: Fixed error when processing OVAL filters (rhbz#2126882) OpenSCAP no longer emits invalid empty xmlfilecontent items if XPath does not match (rhbz#2139060) Prevented Failed to check available memory errors (rhbz#2111040) Bugzilla:2159290 scap-security-guide rules for Rsyslog log files are compatible with RainerScript Rules in scap-security-guide for checking and remediating ownership, group ownership, and permissions of Rsyslog log files are now also compatible with log files defined by using the RainerScript syntax. Modern systems already use the RainerScript syntax in Rsyslog configuration files, but the respective rules were not able to recognize this syntax. As a result, scap-security-guide rules can now check and remediate ownership, group ownership, and permissions of Rsyslog log files in both available syntaxes. Bugzilla:2072444 STIG security profile updated to version V1R9 The DISA STIG for Red Hat Enterprise Linux 8 profile in the SCAP Security Guide has been updated to align with the latest version V1R9 . This release also includes changes published in V1R8 . Use only the current version of this profile because previous versions are no longer valid. The following STIG IDs have been updated: V1R9 RHEL-08-010359 - Selected rule aide_build_database RHEL-08-010510 - Removed rule sshd_disable_compression RHEL-08-020040 - New rule to configure tmux keybinding RHEL-08-020041 - New rule to configure starting tmux instead of exec tmux V1R8 Multiple STIG IDs - The sshd and sysctl rules can identify and remove duplicate or conflicting configurations. RHEL-08-010200 - SSHD ClientAliveCountMax is configured with value 1 . RHEL-08-020352 - Check and remediations now ignore .bash_history . RHEL-08-040137 - Check updated to examine both /etc/fapolicyd/fapolicyd.rules and /etc/fapolicyd/compiled.rules . Warning Automatic remediation might make the system non-functional. Run the remediation in a test environment first.
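The updated profile can be evaluated before any remediation is attempted. A minimal sketch, assuming the openscap-scanner and scap-security-guide packages are installed and the usual RHEL 8 data stream path; confirm the exact profile ID on your system with oscap info before relying on it.

# List the profiles in the RHEL 8 data stream and look for the STIG profile ID
oscap info /usr/share/xml/scap/ssg/content/ssg-rhel8-ds.xml
# Evaluate the system against the DISA STIG profile and write an HTML report
sudo oscap xccdf eval \
  --profile xccdf_org.ssgproject.content_profile_stig \
  --report /tmp/stig-report.html \
  /usr/share/xml/scap/ssg/content/ssg-rhel8-ds.xml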
Bugzilla:2152658 RHEL 8 STIG profiles are better aligned with the benchmark Four existing rules that satisfy RHEL 8 STIG requirements were part of the data stream but were previously not included in the STIG profiles ( stig and stig_gui ). With this update, the following rules are now included in the profiles: accounts_passwords_pam_faillock_dir accounts_passwords_pam_faillock_silent account_password_selinux_faillock_dir fapolicy_default_deny As a result, the RHEL 8 STIG profiles have a higher coverage. Bugzilla:2156192 SCAP Security Guide rebased to 0.1.66 The SCAP Security Guide (SSG) packages have been rebased to upstream version 0.1.66. This version provides various enhancements and bug fixes, most notably: Updated RHEL 8 STIG profiles Deprecated rule account_passwords_pam_faillock_audit in favor of accounts_passwords_pam_faillock_audit Bugzilla:2158404 OpenSSL driver can now use certificate chains in Rsyslog The NetstreamDriverCaExtraFiles directive allows configuring multiple additional certificate authority (CA) files. With this update, you can specify multiple CA files and the OpenSSL library can validate them, which is necessary for SSL certificate chains. As a result, you can use certificate chains in Rsyslog with the OpenSSL driver. Bugzilla:2124934 opencryptoki rebased to 3.19.0 The opencryptoki package has been rebased to version 3.19.0, which provides many enhancements and bug fixes. Most notably, opencryptoki now supports the following features: IBM-specific Dilithium keys Dual-function cryptographic functions Cancelling active session-based operations by using the new C_SessionCancel function, as described in the PKCS #11 Cryptographic Token Interface Base Specification v3.0 Schnorr signatures through the CKM_IBM_ECDSA_OTHER mechanism Bitcoin key derivation through the CKM_IBM_BTC_DERIVE mechanism EP11 tokens in IBM z16 systems Bugzilla:2110315 New SCAP rule for idle session termination New SCAP rule logind_session_timeout has been added to the scap-security-guide package in ANSSI-BP-028 profiles for Enhanced and High levels. This rule uses a new feature of the systemd service manager and terminates idle user sessions after a certain time. This rule provides automatic configuration of a robust idle session termination mechanism which is required by multiple security policies. As a result, OpenSCAP can automatically check the security requirement related to terminating idle user sessions and, if necessary, remediate it. Bugzilla:2122322 fapolicyd now provides filtering of the RPM database With the new configuration file /etc/fapolicyd/rpm-filter.conf , you can customize the list of RPM-database files that the fapolicyd software framework stores in the trust database. This way, you can block certain applications installed by RPM or allow an application denied by the default configuration filter. Bugzilla:2165645 4.7. Networking The default MPTCP subflow limit is 2 A subflow is a single TCP connection that is part of a Multipath TCP (MPTCP) connection. A subflow limit in MPTCP refers to the maximum number of additional connections that can be created between two MPTCP endpoints. You can use the limit to restrict the number of additional parallel subflows that can be created between the endpoints, to avoid overloading the network and the endpoints. For example the value of 0 allows only the initial subflow. With this enhancement, the default MPTCP subflow limit has been increased from 0 to 2. This enables you by default to create multiple additional subflows. 
If you need a different value, you can create a systemd oneshot unit. The unit should execute the ip mptcp limits set subflows <YOUR_VALUE> command after your network ( network.target ) is operational during every boot process. Bugzilla:2127136 The kernel now logs the listening address in SYN flood messages This enhancement adds the listening IP address to SYN flood messages: As a result, if many processes are bound to the same port on different IP addresses, administrators can now clearly identify the affected socket. Bugzilla:2143849 The nm-initrd-generator profiles now have lower priority than autoconnect profiles The nm-initrd-generator early boot NetworkManager configuration generator utility generates and configures connection profiles by using the NetworkManager instance running in the boot loader's initialized initrd RAM disk. The profiles generated by the nm-initrd-generator utility now have a lower autoconnect priority than the default connection autoconnect priority. This enables generated network profiles in initrd to coexist with user configuration in the default root account. Note After switching from the initrd root account to the default root, the same profile stays activated and no new autoconnect happens. Bugzilla:2089707 nispor rebased to version 1.2.10 The nispor packages have been upgraded to upstream version 1.2.10, which provides a number of enhancements and bug fixes over the previous version: Added support for NetStateFilter to use the kernel filter on network routes and interfaces. Single Root Input and Output Virtualization (SR-IOV) interfaces can query SR-IOV Virtual Function (SR-IOV VF) information per VF. Newly supported bonding options: lacp_active , arp_missed_max , and ns_ip6_target . Bugzilla:2153166 NetworkManager rebased to version 1.40.16 The NetworkManager packages have been upgraded to upstream version 1.40.16, which provides a number of bug fixes over the previous version: The nm-cloud-setup utility preserves externally added addresses. A race condition was fixed that prevented the automatic activation of MACsec connections at boot. NetworkManager now correctly calculates expiration times for items configured from IPv6 neighbor discovery messages. NetworkManager now automatically updates the /etc/resolv.conf file when the configuration changes. NetworkManager no longer sets non-existent interfaces as primary when activating a bond. Setting a primary interface in a bond now always works, even if the interface does not exist when you activate the bond. The NetworkManager --print-config command no longer prints duplicate entries. The ifcfg-rh plug-in can now read InfiniBand P-Key connection profiles without an explicit interface name. The nmcli utility can now remove a bond port connection profile from a bond. A race condition was fixed that could occur during the activation of veth profiles if the peer already existed. NetworkManager now rejects DHCPv6 leases if all addresses fail IPv6 duplicate address detection (DAD). NetworkManager now waits until interfaces are connected before trying to resolve the system hostname on these interfaces from DNS. Profiles created by the nm-initrd-generator utility now have a lower-than-default priority. For further information about notable changes, read the upstream release notes . Bugzilla:2134907 4.8. Kernel Kernel version in RHEL 8.8 Red Hat Enterprise Linux 8.8 is distributed with the kernel version 4.18.0-477.10.
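A quick check of the running kernel after updating to RHEL 8.8; the exact z-stream suffix varies over time, so treat the version string below as illustrative only.

# Show the running kernel version (based on 4.18.0-477.10 in RHEL 8.8)
uname -r
# List the kernel packages installed on the host
rpm -q kernel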
Bugzilla:2177769 Secure Execution guest dump encryption with customer keys This new feature allows Secure Execution guests to use hypervisor-initiated dumps to collect kernel crash information from KVM when the kdump utility does not work. Note that hypervisor-initiated dumps for Secure Execution are designed for the IBM Z Series z16 and LinuxONE Emperor 4 hardware. Bugzilla:2043833 The sfc driver has split into sfc and sfc_siena Following the changes in the upstream driver, the sfc NIC driver is now split into 2 different drivers: sfc and sfc_siena . sfc_siena supports the deprecated Siena family devices. Note that custom configurations of the kernel module parameters and udev rules applied to sfc do not affect sfc_siena as they are now independent drivers. To customize both drivers, replicate the configuration options for sfc_siena . Bugzilla:2136107 The stmmac driver is now fully supported Red Hat now fully supports the stmmac driver for Intel(R) Elkhart Lake systems on a chip (SoCs). Bugzilla:1905243 The rtla meta-tool adds the osnoise and timerlat tracers for improved tracer capabilities The Real-Time Linux Analysis ( rtla ) is a meta-tool that includes a set of commands that analyze the real-time properties of Linux. rtla leverages kernel tracing capabilities to provide accurate information about the properties and root causes of unexpected system results. rtla currently adds support for osnoise and timerlat tracer commands. The osnoise tracer reports a kernel thread per CPU. The timerlat tracer periodically prints the timer latency at the timer IRQ handler and the thread handler. Note that to use the timerlat feature of rtla , you must disable admission control by using the sysctl -w kernel.sched_rt_runtime_us=-1 script. Bugzilla:2075203 The output format for cgroups and irqs has been improved to provide better readability With this enhancement, the tuna show_threads command output for the cgroup utility is now structured based on the terminal size. You can also configure additional spacing to the cgroups output by adding the new -z or --spaced option to the show_threads command. As a result, you can now view the cgroups output in an improved readable format that is adaptable to your terminal size. Bugzilla:2121518 The rteval command output now includes the program loads and measurement threads information The rteval command now displays a report summary with the number of program loads, measurement threads, and the corresponding CPU that ran these threads. This information helps to evaluate the performance of a real-time kernel under load on specific hardware platforms. The rteval report is written to an XML file along with the boot log for the system and saved to the rteval-<date>-N-tar.bz2 compressed file. The date specifies the report generation date and N is the counter for the Nth run. To generate an rteval report, enter the following command: Bugzilla:2082260 The -W and --bucket-width options have been added to the oslat program to measure latency With this enhancement, you can specify a latency range for a single bucket at nanosecond accuracy. Widths that are not multiples of 1000 nanoseconds indicate nanosecond precision. By using the new options, -W or --bucket-width , you can modify the latency interval between buckets to measure latency within sub-microseconds delay time. 
For example to set a latency bucket width of 100 nanoseconds for 32 buckets over a duration of 10 seconds to run on CPU range of 1-4 and omit zero bucket size, run the following command: Note that before using the option, you must determine what level of precision is significant in relation to the error measurement. Bugzilla:2122374 The Ethernet Port Configuration Tool (EPCT) utility support enabled in E810 with Intel ice driver With this enhancement, the devlink port split command now supports the Intel ice driver. The Ethernet Port Configuration Tool (EPCT) is a command line utility that allows you to change the link type of a device. The devlink utility, which displays device information and resources of devices, is dependent on EPCT. As a result of this enhancement, the ice driver implements support for EPCT, which enables you to list and view the configurable devices using Intel ice drivers. Bugzilla:2009705 The Intel ice driver rebased to version 6.0.0 The Intel ice driver has been upgraded to upstream version 6.0.0, which provides a number of enhancements and bug fixes over versions. The notable enhancements include: Point-to-Point Protocol over Ethernet ( PPPoE ) protocol hardware offload Inter-Integrated Circuit ( I2C ) protocol write command VLAN Tag Protocol Identifier ( TPID ) filters in the Ethernet switch device driver model ( switchdev ) Double VLAN tagging in switchdev Bugzilla:2103946 Hosting Secure Boot certificates for IBM zSystems Starting with IBM z16 A02/AGZ and LinuxONE Rockhopper 4 LA2/AGL, you can manage certificates used to validate Linux kernels when starting the system with Secure Boot enabled on the Hardware Management Console (HMC). Notably: You can load certificates in a system certificate store using the HMC in DPM and classic mode from an FTP server that can be accessed by the HMC. It is also possible to load certificates from a USB device attached to the HMC. You can associate certificates stored in the certificate store with an LPAR partition. Multiple certificates can be associated with a partition and a certificate can be associated with multiple partitions. You can de-associate certificates in the certificate store from a partition by using HMC interfaces. You can remove certificates from the certificate store. You can associate up to 20 certificates with a partition. The built-in firmware certificates are still available. In particular, as soon as you use the user-managed certificate store, the built-in certificates will no longer be available. Certificate files loaded into the certificate store must meet the following requirements: They have the PEM- or DER-encoded X.509v3 format and one of the following filename extensions: .pem , .cer , .crt , or .der . They are not expired. The key usage attribute must be Digital Signature . The extended key usage attribute must contain Code Signing . A firmware interface allows a Linux kernel running in a logical partition to load the certificates associated with this partition. Linux on IBM Z stores these certificates in the .platform keyring, allowing the Linux kernel to verify kexec kernels and third party kernel modules to be verified using certificates associated with that partition. It is the responsibility of the operator to only upload verified certificates and to remove certificates that have been revoked. Note The Red Hat Secureboot 302 certificate that you need to load into the HMC is available at Product Signing Keys . 
Bugzilla:2183445 zipl support for Secure Boot IPL and dump on 64-bit IBM Z With this update, the zipl utility supports List-Directed IPL and List-Directed dump from Extended Count Key Data (ECKD) Direct Access Storage Devices (DASD) on the 64-bit IBM Z architecture. As a result, Secure Boot for RHEL on IBM Z also works with the ECKD type of DASDs. Bugzilla:2043852 4.9. High availability and clusters New enable-authfile Booth configuration option When you create a Booth configuration to use the Booth ticket manager in a cluster configuration, the pcs booth setup command now enables the new enable-authfile Booth configuration option by default. You can enable this option on an existing cluster with the pcs booth enable-authfile command. Additionally, the pcs status and pcs booth status commands now display warnings when they detect a possible enable-authfile misconfiguration. Bugzilla:2132582 pcs can now run the validate-all action of resource and stonith agents When creating or updating a resource or a STONITH device, you can now specify the --agent-validation option. With this option, pcs uses an agent's validate-all action, when it is available, in addition to the validation done by pcs based on the agent's metadata. Bugzilla:1816852 , Bugzilla:2159455 4.10. Dynamic programming languages, web and database servers Python 3.11 available in RHEL 8 RHEL 8.8 introduces Python 3.11, provided by the new package python3.11 and a suite of packages built for it, as well as the ubi8/python-311 container image. Notable enhancements compared to the previously released Python 3.9 include: Significantly improved performance. Structural Pattern Matching using the new match keyword (similar to switch in other languages). Improved error messages, for example, indicating unclosed parentheses or brackets. Exact line numbers for debugging and other use cases. Support for defining context managers across multiple lines by enclosing the definitions in parentheses. Various new features related to type hints and the typing module, such as the new X | Y type union operator, variadic generics, and the new Self type. Precise error locations in tracebacks pointing to the expression that caused the error. A new tomllib standard library module which supports parsing TOML. An ability to raise and handle multiple unrelated exceptions simultaneously using Exception Groups and the new except* syntax. Python 3.11 and packages built for it can be installed in parallel with Python 3.9, Python 3.8, and Python 3.6 on the same system. Note that, unlike the versions, Python 3.11 is distributed as standard RPM packages instead of a module. To install packages from the python3.11 stack, use, for example: To run the interpreter, use, for example: See Installing and using Python for more information. Note that Red Hat will continue to provide support for Python 3.6 until the end of life of RHEL 8. Similarly to Python 3.9, Python 3.11 will have a shorter life cycle; see Red Hat Enterprise Linux Application Streams Life Cycle . Bugzilla:2137139 nodejs:18 rebased to version 18.14 with npm rebased to version 9 Node.js 18.14 , released in RHSA-2023:1583 , includes a SemVer major upgrade of npm from version 8 to version 9. This update was necessary due to maintenance reasons and may require you to adjust your npm configuration. Notably, auth-related settings that are not scoped to a specific registry are no longer supported. This change was made for security reasons. 
If you used unscoped authentication configurations, the supplied token was sent to every registry listed in the .npmrc file. If you use unscoped authentication tokens, generate and supply registry-scoped tokens in your .npmrc file. If you have configuration lines using _auth , such as //registry.npmjs.org/:_auth in your .npmrc files, replace them with //registry.npmjs.org/:_authToken=${NPM_TOKEN} and supply the scoped token that you generated. For a complete list of changes, see the upstream changelog . Bugzilla:2178087 git rebased to version 2.39.1 The Git version control system has been updated to version 2.39.1, which provides bug fixes, enhancements, and performance improvements over the previously released version 2.31. Notable enhancements include: The git log command now supports a format placeholder for the git describe output: git log --format=%(describe) The git commit command now supports the --fixup=<commit> option which enables you to fix the content of the commit without changing the log message. With this update, you can also use: The --fixup=amend:<commit> option to change both the message and the content. The --fixup=reword:<commit> option to update only the commit message. You can use the new --reject-shallow option with the git clone command to disable cloning from a shallow repository. The git branch command now supports the --recurse-submodules option. You can now use the git merge-tree command to: Test if two branches can merge. Compute a tree that would result in the merge commit if the branches were merged. You can use the new safe.bareRepository configuration variable to filter out bare repositories. Bugzilla:2139378 git-lfs rebased to version 3.2.0 The Git Large File Storage (LFS) extension has been updated to version 3.2.0, which provides bug fixes, enhancements, and performance improvements over the previously released version 2.13. Notable changes include: Git LFS introduces a pure SSH-based transport protocol. Git LFS now provides a merge driver. The git lfs fsck utility now additionally checks that pointers are canonical and that expected LFS files have the correct format. Support for the NT LAN Manager (NTLM) authentication protocol has been removed. Use Kerberos or Basic authentication instead. Bugzilla:2139382 A new module stream: nginx:1.22 The nginx 1.22 web and proxy server is now available as the nginx:1.22 module stream. This update provides a number of bug fixes, security fixes, new features, and enhancements over the previously released version 1.20. New features: nginx now supports: OpenSSL 3.0 and the SSL_sendfile() function when using OpenSSL 3.0. The PCRE2 library. POP3 and IMAP pipelining in the mail proxy module. nginx now passes the Auth-SSL-Protocol and Auth-SSL-Cipher header lines to the mail proxy authentication server. Enhanced directives: Multiple new directives are now available, such as ssl_conf_command and ssl_reject_handshake . The proxy_cookie_flags directive now supports variables. nginx now supports variables in the following directives: proxy_ssl_certificate , proxy_ssl_certificate_key , grpc_ssl_certificate , grpc_ssl_certificate_key , uwsgi_ssl_certificate , and uwsgi_ssl_certificate_key . The listen directive in the stream module now supports a new fastopen parameter, which enables TCP Fast Open mode for listening sockets. A new max_errors directive has been added to the mail proxy module. Other changes: nginx now always returns an error if: The CONNECT method is used.
Both Content-Length and Transfer-Encoding headers are specified in the request. The request header name contains spaces or control characters. The Host request header line contains spaces or control characters. nginx now blocks all HTTP/1.0 requests that include the Transfer-Encoding header. nginx now establishes HTTP/2 connections using the Application Layer Protocol Negotiation (ALPN) and no longer supports the Protocol Negotiation (NPN) protocol. To install the nginx:1.22 stream, use: If you want to upgrade from the nginx:1.20 stream, see Switching to a later stream . For more information, see Setting up and configuring NGINX . For information about the length of support for the nginx module streams, see the Red Hat Enterprise Linux Application Streams Life Cycle . Bugzilla:2112345 mod_security rebased to version 2.9.6 The mod_security module for the Apache HTTP Server has been updated to version 2.9.6, which provides new features, bug fixes, and security fixes over the previously available version 2.9.2. Notable enhancements include: Adjusted parser activation rules in the modsecurity.conf-recommended file. Enhancements to the way mod_security parses HTTP multipart requests. Added a new MULTIPART_PART_HEADERS collection. Added microsec timestamp resolution to the formatted log timestamp. Added missing Geo Countries. Bugzilla:2143207 New packages: tomcat RHEL 8.8 introduces the Apache Tomcat server version 9. Tomcat is the servlet container that is used in the official Reference Implementation for the Java Servlet and JavaServer Pages technologies. The Java Servlet and JavaServer Pages specifications are developed by Sun under the Java Community Process. Tomcat is developed in an open and participatory environment and released under the Apache Software License version 2.0. Bugzilla:2160455 A new module stream: postgresql:15 RHEL 8.8 introduces PostgreSQL 15 , which provides a number of new features and enhancements over version 13. Notable changes include: You can now access PostgreSQL JSON data by using subscripts. Example query: PostgreSQL now supports multirange data types and extends the range_agg function to aggregate multirange data types. PostgreSQL improves monitoring and observability: You can now track progress of the COPY commands and Write-ahead-log (WAL) activity. PostgreSQL now provides statistics on replication slots. By enabling the compute_query_id parameter, you can now uniquely track a query through several PostgreSQL features, including pg_stat_activity or EXPLAIN VERBOSE . PostgreSQL improves support for query parallelism by the following: Improved performance of parallel sequential scans. The ability of SQL Procedural Language ( PL/pgSQL ) to execute parallel queries when using the RETURN QUERY command. Enabled parallelism in the REFRESH MATERIALIZED VIEW command. PostgreSQL now includes the SQL standard MERGE command. You can use MERGE to write conditional SQL statements that can include the INSERT , UPDATE , and DELETE actions in a single statement. PostgreSQL provides the following new functions for using regular expressions to inspect strings: regexp_count() , regexp_instr() , regexp_like() , and regexp_substr() . PostgreSQL adds the security_invoker parameter, which you can use to query data with the permissions of the view caller, not the view creator. This helps you ensure that view callers have the correct permissions for working with the underlying data. PostgreSQL improves performance, namely in its archiving and backup facilities. 
PostgreSQL adds support for the LZ4 and Zstandard ( zstd ) lossless compression algorithms. PostgreSQL improves its in-memory and on-disk sorting algorithms. The updated postgresql.service systemd unit file now ensures that the postgresql service is started after the network is up. The following changes are backwards incompatible: The default permissions of the public schema have been modified. Newly created users need to grant permission explicitly by using the GRANT ALL ON SCHEMA public TO myuser; command. For example: The libpq PQsendQuery() function is no longer supported in pipeline mode. Modify affected applications to use the PQsendQueryParams() function instead. See also Using PostgreSQL . To install the postgresql:15 stream, use: If you want to upgrade from an earlier postgresql stream within RHEL 8, follow the procedure described in Switching to a later stream and then migrate your PostgreSQL data as described in Migrating to a RHEL 8 version of PostgreSQL . For information about the length of support for the postgresql module streams, see the Red Hat Enterprise Linux Application Streams Life Cycle . Bugzilla:2128241 4.11. Compilers and development tools A new module stream: swig:4.1 RHEL 8.8 introduces the Simplified Wrapper and Interface Generator (SWIG) version 4.1, available as a new module stream, swig:4.1 . Compared to SWIG 4.0 released in RHEL 8.4, SWIG 4.1 : Adds support for Node.js versions 12 to 18 and removes support for Node.js versions earlier than 6. Adds support for PHP 8 . Handles PHP wrapping entirely through PHP C API and no longer generates a .php wrapper by default. Supports only Perl 5.8.0 and later versions. Adds support for Python versions 3.9 to 3.11. Supports only Python 3.3 and later Python 3 versions, and Python 2.7 . Provides fixes for various memory leaks in Python -generated code. Improves support for the C99, C++11, C++14, and C++17 standards and starts implementing the C++20 standard. Adds support for the C++ std::unique_ptr pointer class. Includes several minor improvements in C++ template handling. Fixes C++ declaration usage in various cases. To install the swig:4.1 module stream, use: If you want to upgrade from an earlier swig module stream, see Switching to a later stream . For information about the length of support for the swig module streams, see the Red Hat Enterprise Linux Application Streams Life Cycle . Bugzilla:2139076 A new module stream: jaxb:4 RHEL 8.8 introduces Jakarta XML Binding (JAXB) 4 as the new jaxb:4 module stream. JAXB is a framework that enables developers to map Java classes to and from XML representations. To install the jaxb:4 module stream, use: Bugzilla:2055539 Updated GCC Toolset 12 GCC Toolset 12 is a compiler toolset that provides recent versions of development tools. It is available as an Application Stream in the form of a Software Collection in the AppStream repository. Notable changes introduced in RHEL 8.8 include: The GCC compiler has been updated to version 12.2.1, which provides many bug fixes and enhancements that are available in upstream GCC. annobin has been updated to version 11.08. The following tools and versions are provided by GCC Toolset 12: Tool Version GCC 12.2.1 GDB 11.2 binutils 2.38 dwz 0.14 annobin 11.08 To install GCC Toolset 12, run the following command as root: To run a tool from GCC Toolset 12: To run a shell session where tool versions from GCC Toolset 12 override system versions of these tools: For more information, see GCC Toolset 12 . 
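The three GCC Toolset 12 procedures referenced above correspond to the commonly documented commands below; this is a sketch of the usual workflow, worth confirming against the GCC Toolset 12 documentation for your release.

# Install GCC Toolset 12 (run as root)
yum install gcc-toolset-12
# Run a single tool from GCC Toolset 12
scl enable gcc-toolset-12 'gcc --version'
# Start a shell session in which the GCC Toolset 12 tools override the system versions
scl enable gcc-toolset-12 bash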
Bugzilla:2110582 Security improvements added for glibc The SafeLinking feature has been added to glibc . As a result, it improves protection for the malloc family of functions against certain singly-linked list corruption including the allocator's thread-local cache. Bugzilla:1871383 Improved glibc dynamic loader algorithm The glibc dynamic loader's O(n^3) algorithm for processing shared objects could result in slower application startup and shutdown times when shared object dependencies are deeply nested. With this update, the dynamic loader's algorithm has been improved to use a depth-first search (DFS). As a result, application startup and shutdown times are greatly improved in cases where shared object dependencies are deeply nested. You can select the dynamic loader's O(n^3) algorithm by using the glibc runtime tunable glibc.rtld.dynamic_sort . The default value of the tunable is 2, representing the new DFS algorithm. To select the O(n^3) algorithm for compatibility, set the tunable to 1: Bugzilla:1159809 LLVM Toolset rebased to version 15.0.7 LLVM Toolset has been updated to version 15.0.7. Notable changes include: The -Wimplicit-function-declaration and -Wimplicit-int warnings are enabled by default in C99 and newer. These warnings will become errors by default in Clang 16 and beyond. Bugzilla:2118568 Rust Toolset rebased to version 1.66.1 Rust Toolset has been updated to version 1.66.1. Notable changes include: The thread::scope API creates a lexical scope in which local variables can be safely borrowed by newly spawned threads, and those threads are all guaranteed to exit before the scope ends. The hint::black_box API adds a barrier to compiler optimization, which is useful for preserving behavior in benchmarks that might otherwise be optimized away. The .await keyword now makes conversions with the IntoFuture trait, similar to the relationship between for and IntoIterator . Generic associated types (GATs) allow traits to include type aliases with generic parameters, enabling new abstractions over both types and lifetimes. A new let-else statement allows binding local variables with conditional pattern matching, executing a divergent else block when the pattern does not match. Labeled blocks allow break statements to jump to the end of the block, optionally including an expression value. rust-analyzer is a new implementation of the Language Server Protocol, enabling Rust support in many editors. This replaces the former rls package, but you might need to adjust your editor configuration to migrate to rust-analyzer . Cargo has a new cargo remove subcommand for removing dependencies from Cargo.toml . Bugzilla:2123899 Go Toolset rebased to version 1.19.4 Go Toolset has been updated to version 1.19.4. Notable changes include: Security fixes to the following packages: crypto/tls mime/multipart net/http path/filepath Bug fixes to: The go command The linker The runtime The crypto/x509 package The net/http package The time package Bugzilla:2174430 The tzdata package now includes the /usr/share/zoneinfo/leap-seconds.list file Previously, the tzdata package only shipped the /usr/share/zoneinfo/leapseconds file. Some applications rely on the alternate format provided by the /usr/share/zoneinfo/leap-seconds.list file and, as a consequence, would experience errors. With this update, the tzdata package now includes both files, supporting applications that rely on either format. Bugzilla:2154109 4.12.
Identity Management SSSD support for converting home directories to lowercase With this enhancement, you can now configure SSSD to convert user home directories to lowercase. This helps to integrate better with the case-sensitive nature of the RHEL environment. The override_homedir option in the [nss] section of the /etc/sssd/sssd.conf file now recognizes the %h template value. If you use %h as part of the override_homedir definition, SSSD replaces %h with the user's home directory in lowercase. Jira:RHELPLAN-139430 The ipapwpolicy ansible-freeipa module now supports new password policy options With this update, the ipapwpolicy module included in the ansible-freeipa package supports additional libpwquality library options: maxrepeat Specifies the maximum number of the same character in sequence. maxsequence Specifies the maximum length of monotonic character sequences ( abcd ). dictcheck Checks if the password is a dictionary word. usercheck Checks if the password contains the username. If any of the new password policy options are set, the minimum length of passwords is 6 characters. The new password policy settings are applied only to new passwords. In a mixed environment with RHEL 7 and RHEL 8 servers, the new password policy settings are enforced only on servers running on RHEL 8.4 and later. If a user is logged in to an IdM client and the IdM client is communicating with an IdM server running on RHEL 8.3 or earlier, then the new password policy requirements set by the system administrator do not apply. To ensure consistent behavior, upgrade all servers to RHEL 8.4 and later. Jira:RHELPLAN-137416 IdM now supports the ipanetgroup Ansible management module As an Identity Management (IdM) system administrator, you can integrate IdM with NIS domains and netgroups. Using the ipanetgroup ansible-freeipa module, you can achieve the following: You can ensure that an existing IdM netgroup contains specific IdM users, groups, hosts and host groups and nested IdM netgroups. You can ensure that specific IdM users, groups, hosts and host groups and nested IdM netgroups are absent from an existing IdM netgroup. You can ensure that a specific netgroup is present or absent in IdM. Jira:RHELPLAN-137411 New ipaclient_configure_dns_resolver and ipaclient_dns_servers Ansible ipaclient role variables specifying the client's DNS resolver Previously, when using the ansible-freeipa ipaclient role to install an Identity Management (IdM) client, it was not possible to specify the DNS resolver during the installation process. You had to configure the DNS resolver before the installation. With this enhancement, you can specify the DNS resolver when using the ipaclient role to install an IdM client with the ipaclient_configure_dns_resolver and ipaclient_dns_servers variables. Consequently, the ipaclient role modifies the resolv.conf file and the NetworkManager and systemd-resolved utilities to configure the DNS resolver on the client in a similar way that the ansible-freeipa ipaserver role does on the IdM server. As a result, configuring DNS when using the ipaclient role to install an IdM client is now more efficient. Note Using the ipa-client-install command-line installer to install an IdM client still requires configuring the DNS resolver before the installation. 
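One way to use the new variables is to pass them when running an ipaclient deployment playbook. A heavily hedged sketch: the inventory path, playbook name, and DNS server address are hypothetical placeholders, and only the ipaclient_configure_dns_resolver and ipaclient_dns_servers variable names come from the note above.

# Deploy an IdM client and let the ipaclient role configure the client's DNS resolver
ansible-playbook -i inventory/hosts install-client.yml \
  -e '{"ipaclient_configure_dns_resolver": true, "ipaclient_dns_servers": ["192.0.2.53"]}'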
Jira:RHELPLAN-137406 Using the ipaclient role to install an IdM client with an OTP requires no prior modification of the Ansible controller Previously, the kinit command on the Ansible controller was a prerequisite for obtaining a one-time-password (OTP) for Identity Management (IdM) client deployment. The need to obtain the OTP on the controller was a problem for Red Hat Ansible Automation Platform (AAP), where the krb5-workstation package was not installed by default. With this update, the request for the administrator's TGT is now delegated to the first specified or discovered IdM server. As a result, you can now use an OTP to authorize the installation of an IdM client with no additional modification of the Ansible controller. This simplifies using the ipaclient role with AAP. Jira:RHELPLAN-137403 SSSD now supports changing LDAP user passwords with the shadow password policy With this enhancement, if you set ldap_pwd_policy to shadow in the /etc/sssd/sssd.conf file, LDAP users can now change their password stored in LDAP. Previously, password changes were rejected if ldap_pwd_policy was set to shadow as it was not clear if the corresponding shadow LDAP attributes were being updated. Additionally, if the LDAP server cannot update the shadow attributes automatically, set the ldap_chpass_update_last_change option to True in the /etc/sssd/sssd.conf file to indicate to SSSD to update the attribute. Bugzilla:2144519 Configure pam_pwhistory using a configuration file With this update, you can configure the pam_pwhistory module in the /etc/security/pwhistory.conf configuration file. The pam_pwhistory module saves the last password for each user in order to manage password change history. Support has also been added in authselect which allows you to add the pam_pwhistory module to the PAM stack. Bugzilla:2068461 , Bugzilla:2063379 getcert add-scep-ca now checks if user-provided SCEP CA certificates are in a valid PEM format To add a SCEP CA to certmonger using the getcert add-scep-ca command, the provided certificate must be in a valid PEM format. Previously, the command did not check the user-provided certificate and did not return an error in case of an incorrect format. With this update, getcert add-scep-ca now checks the user-provided certificate and returns an error if the certificate is not in the valid PEM format. Bugzilla:2150025 IdM now supports new Active Directory certificate mapping templates Active Directory (AD) domain administrators can manually map certificates to a user in AD using the altSecurityIdentities attribute. There are six supported values for this attribute, though three mappings are now considered insecure. As part of May 10,2022 security update , once this update is installed on a domain controller, all devices are in compatibility mode. If a certificate is weakly mapped to a user, authentication occurs as expected but a warning message is logged identifying the certificates that are not compatible with full enforcement mode. As of November 14, 2023 or later, all devices will be updated to full enforcement mode and if a certificate fails the strong mapping criteria, authentication will be denied. IdM now supports the new mapping templates, making it easier for an AD administrator to use the new rules and not maintain both. 
IdM now supports the following new mapping templates : Serial Number: LDAPU1:(altSecurityIdentities=X509:<I>{issuer_dn!ad_x500}<SR>{serial_number!hex_ur}) Subject Key Id: LDAPU1:(altSecurityIdentities=X509:<SKI>{subject_key_id!hex_u}) User SID: LDAPU1:(objectsid={sid}) If you do not want to reissue certificates with the new SID extension, you can create a manual mapping by adding the appropriate mapping string to a user's altSecurityIdentities attribute in AD. Bugzilla:2087247 samba rebased to version 4.17.5 The samba packages have been upgraded to upstream version 4.17.5, which provides bug fixes and enhancements over the previous version. The most notable changes: Security improvements in previous releases impacted the performance of the Server Message Block (SMB) server for high metadata workloads. This update improves the performance in this scenario. The --json option was added to the smbstatus utility to display detailed status information in JSON format. The samba.smb.conf and samba.samba3.smb.conf modules have been added to the smbconf Python API. You can use them in Python programs to read and, optionally, write the Samba configuration natively. Note that the server message block version 1 (SMB1) protocol is deprecated since Samba 4.11 and will be removed in a future release. Back up the database files before starting Samba. When the smbd , nmbd , or winbind services start, Samba automatically updates its tdb database files. Red Hat does not support downgrading tdb database files. After updating Samba, use the testparm utility to verify the /etc/samba/smb.conf file. For further information about notable changes, read the upstream release notes before updating. Bugzilla:2132051 ipa-client-install now supports authentication with PKINIT Previously, the ipa-client-install utility supported only password-based authentication. This update provides support to ipa-client-install for authentication with PKINIT. For example: To use the PKINIT authentication, you must establish trust between IdM and the CA chain of the PKINIT certificate. For more information, see the ipa-cacert-manage(1) man page. Also, the certificate identity mapping rules must map the PKINIT certificate of the host to a principal that has permission to add or modify a host record. For more information, see the ipa certmaprule-add man page. Bugzilla:2075452 Directory Server now supports ECDSA private keys for TLS Previously, you could not use cryptographic algorithms that are stronger than RSA to secure Directory Server connections. With this enhancement, Directory Server now supports both ECDSA and RSA keys. Bugzilla:2096795 New pamModuleIsThreadSafe configuration option is now available When a PAM module is thread-safe, you can improve the PAM authentication throughput and response time of that specific module by setting the new pamModuleIsThreadSafe configuration option to yes : This configuration applies on the PAM module configuration entry (child of cn=PAM Pass Through Auth,cn=plugins,cn=config ). Use the pamModuleIsThreadSafe option in the dse.ldif configuration file or the ldapmodify command. Note that the ldapmodify command requires you to restart the server. Bugzilla:2142639
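As a sketch of the pamModuleIsThreadSafe change described in the entry above, the following ldapmodify call sets the attribute on a PAM pass-through module entry and then restarts the instance. The bind details, the child entry name cn=mypammodule, and the instance name are assumptions for illustration; only the attribute, its value, the parent entry cn=PAM Pass Through Auth,cn=plugins,cn=config, and the need for a restart come from the entry.

    ldapmodify -x -D "cn=Directory Manager" -W -H ldap://localhost <<'EOF'
    dn: cn=mypammodule,cn=PAM Pass Through Auth,cn=plugins,cn=config
    changetype: modify
    replace: pamModuleIsThreadSafe
    pamModuleIsThreadSafe: yes
    EOF

    # Changes made with ldapmodify take effect only after a restart:
    systemctl restart dirsrv@<instance>.service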
New nsslapd-auditlog-display-attrs configuration parameter for the Directory Server audit log Previously, the distinguished name (DN) was the only way to identify the target entry in the audit log event. With the new nsslapd-auditlog-display-attrs parameter, you can configure Directory Server to display additional attributes in the audit log, providing more details about the modified entry. For example, if you set the nsslapd-auditlog-display-attrs parameter to cn , the audit log displays the entry cn attribute in the output. To include all attributes of a modified entry, use an asterisk ( * ) as the parameter value. For more information, see nsslapd-auditlog-display-attrs . Bugzilla:2136610 4.13. Desktop The inkscape1 package replaces inkscape With this release, the new, non-modular inkscape1 package replaces the legacy, modular inkscape package. This also upgrades the Inkscape application from version 0.92 to version 1.0. Inkscape 1.0 no longer depends on the Python 2 runtime and instead uses Python 3. For the complete list of changes in Inkscape 1.0, see the upstream release notes: https://inkscape.org/release/inkscape-1.0/ . Jira:RHELPLAN-121672 Kiosk mode supports an on-screen keyboard You can now use the GNOME on-screen keyboard (OSK) in the kiosk mode session. To enable the OSK, select the Kiosk (with on-screen keyboard) option from the gear menu at the login screen. Note that kiosk mode in RHEL 8 is based on the X11 protocol, which causes certain known issues with the OSK. Notably, you cannot type accented characters, such as e or u , on the OSK. See BZ#1916470 for details. Bugzilla:2070976 Support for NTLMv2 in libsoup and Evolution The libsoup library can now authenticate with the Microsoft Exchange Server using the NT LAN Manager version 2 (NTLMv2) protocol. Previously, libsoup supported only the NTLMv1 protocol, which might be disabled in certain configurations due to security issues. As a result, Evolution and other applications that internally use libsoup can also authenticate with the Microsoft Exchange Server using NTLMv2. Bugzilla:1938011 Custom right-click menu on the desktop You can now customize the menu that opens when you right-click the desktop background. You can create custom entries in the menu that run arbitrary commands. To customize the menu, see Customizing the right-click menu on the desktop . Bugzilla:2033572 Disable swipe to switch workspaces Previously, swiping up or down with three fingers always switched the workspace on a touch screen. With this release, you can disable the workspace switching. For details, see Disabling swipe to switch workspaces . Bugzilla:2138109 4.14. The web console The web console now performs additional steps for binding LUKS-encrypted root volumes to NBDE With this update, the RHEL web console performs additional steps required for binding LUKS-encrypted root volumes to Network-Bound Disk Encryption (NBDE) deployments. After you select an encrypted root file system and a Tang server, you can skip adding the rd.neednet=1 parameter to the kernel command line, installing the clevis-dracut package, and regenerating an initial ramdisk ( initrd ). For non-root file systems, the web console now enables the remote-cryptsetup.target and clevis-luks-askpass.path systemd units, installs the clevis-systemd package, and adds the _netdev parameter to the fstab and crypttab configuration files. As a result, you can now use the graphical interface for all Clevis-client configuration steps when creating NBDE deployments for automated unlocking of LUKS-encrypted root volumes.
Jira:RHELPLAN-139125 Certain cryptographic subpolicies are now available in the web console This update of the RHEL web console extends the options in the Change crypto policy dialog. Besides the four system-wide cryptographic policies, you can also apply the following subpolicies through the graphical interface now: DEFAULT:SHA1 is the DEFAULT policy with the SHA-1 algorithm enabled. LEGACY:AD-SUPPORT is the LEGACY policy with less secure settings that improve interoperability for Active Directory services. FIPS:OSPP is the FIPS policy with further restrictions inspired by the Common Criteria for Information Technology Security Evaluation standard. Jira:RHELPLAN-137505 4.15. Red Hat Enterprise Linux system roles New IPsec customization parameters for the vpn RHEL system role Because certain network devices require IPsec customization to work correctly, the following parameters have been added to the vpn RHEL system role: Important Do not change the following parameters without advanced knowledge. Most scenarios do not require their customization. Furthermore, for security reasons, encrypt a value of the shared_key_content parameter by using Ansible Vault. Tunnel parameters: shared_key_content ike esp ikelifetime salifetime retransmit_timeout dpddelay dpdtimeout dpdaction leftupdown Per-host parameters: leftid rightid As a result, you can use the vpn role to configure IPsec connectivity to a wide range of network devices. Bugzilla:2119600 The ha_cluster system role now supports automated execution of the firewall , selinux , and certificate system roles The ha_cluster RHEL system role now supports the following features: Using the firewall and selinux system roles to manage port access To configure the ports of a cluster to run the firewalld and selinux services, you can set the new role variables ha_cluster_manage_firewall and ha_cluster_manage_selinux to true . This configures the cluster to use the firewall and selinux system roles, automating and performing these operations within the ha_cluster system role. If these variables are set to their default value of false , the roles are not performed. With this release, the firewall is no longer configured by default, because it is configured only when ha_cluster_manage_firewall is set to true . Using the certificate system role to create a pcsd private key and certificate pair The ha_cluster system role now supports the ha_cluster_pcsd_certificates role variable. Setting this variable passes on its value to the certificate_requests variable of the certificate system role. This provides an alternative method for creating the private key and certificate pair for pcsd . Bugzilla:2130019 The ha_cluster system role now supports quorum device configuration A quorum device acts as a third-party arbitration device for a cluster. A quorum device is recommended for clusters with an even number of nodes. With two-node clusters, the use of a quorum device can better determine which node survives in a split-brain situation. You can now configure a quorum device with the ha_cluster system role, both qdevice for a cluster and qnetd for an arbitration node. Bugzilla:2143814 The metrics system role does not work with disabled fact gathering Ansible fact gathering might be disabled in your environment for performance or other reasons. In such configurations, it is not currently possible to use the metrics system role. To work around this problem, enable fact caching, or do not use the metrics system role if it is not possible to use fact gathering. 
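Bugzilla:2079009
A minimal, hedged sketch of a playbook that turns on the new ha_cluster integration variables described above: only ha_cluster_manage_firewall, ha_cluster_manage_selinux, and ha_cluster_pcsd_certificates come from the text, while the inventory group, the cluster name and password variables, and the fully qualified role name are assumptions that may need adjusting to your installation (the rhel-system-roles RPM versus the redhat.rhel_system_roles collection).

    cat > ha_cluster.yml <<'EOF'
    ---
    - name: Create a cluster and let the role manage firewall, SELinux, and pcsd certificates
      hosts: cluster_nodes
      become: true
      vars:
        ha_cluster_cluster_name: my-cluster                              # assumed
        ha_cluster_hacluster_password: "{{ vault_hacluster_password }}"  # assumed, keep in Vault
        ha_cluster_manage_firewall: true    # run the firewall role for the cluster ports
        ha_cluster_manage_selinux: true     # run the selinux role for the cluster ports
        ha_cluster_pcsd_certificates:       # passed on to the certificate role's certificate_requests
          - name: pcsd_cert
            dns: ['localhost']
            ca: self-sign
      roles:
        - redhat.rhel_system_roles.ha_cluster
    EOF

    ansible-playbook -i inventory ha_cluster.yml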
The postfix RHEL system role can now use the firewall and selinux RHEL system roles to manage port access With this enhancement, you can automate managing port access by using the new role variables postfix_manage_firewall and postfix_manage_selinux : If they are set to true , each role is used to manage the port access. If they are set to false , which is the default, the roles do not engage. Bugzilla:2130332 The vpn RHEL system role can now use the firewall and selinux roles to manage port access With this enhancement, you can automate managing port access in the vpn RHEL system role through the firewall and selinux roles. If you set the new role variables vpn_manage_firewall and vpn_manage_selinux to true , the roles manage port access. Bugzilla:2130345 The metrics RHEL system role can now use the firewall role and the selinux role to manage port access With this enhancement, you can control access to ports. If you set the new role variables metrics_manage_firewall and metrics_manage_selinux to true , the roles will manage port access. You can now automate and perform these operations directly by using the metrics role. Bugzilla:2133532 The nbde_server RHEL system role can now use the firewall and selinux roles to manage port access With this enhancement, you can use the firewall and selinux roles to manage port access. If you set the new role variables nbde_server_manage_firewall and nbde_server_manage_selinux to true , the roles manage port access. You can now automate these operations directly by using the nbde_server role. Bugzilla:2133931 The initscripts network provider supports route metric configuration of the default gateway With this update, you can use the initscripts network provider in the rhel-system-roles.network RHEL system role to configure the route metric of the default gateway. The reasons for such a configuration could be: Distributing the traffic load across the different paths Specifying primary routes and backup routes Leveraging routing policies to send traffic to specific destinations through specific paths Bugzilla:2134201 The network system role supports setting a DNS priority value This enhancement adds the dns_priority parameter to the RHEL network system role. You can set this parameter to a value from -2147483648 to 2147483647 . The default value is 0 . Lower values have a higher priority. Note that negative values cause the system role to exclude other configurations with a greater numeric priority value. Consequently, in the presence of at least one negative priority value, the system role uses only DNS servers from connection profiles with the lowest priority value. As a result, you can use the network system role to define the order of DNS servers in different connection profiles. Bugzilla:2133856 Added support for the cloned MAC address A cloned MAC address is the MAC address of the device WAN port that is set to the same value as the MAC address of the machine. With this update, users can specify the bonding or bridge interface with the MAC address, or with a strategy such as random or preserve to get the default MAC address for the bonding or bridge interface. Bugzilla:2143458 The cockpit RHEL system role integration with the firewall , selinux , and certificate roles This enhancement enables you to integrate the cockpit role with the firewall role and the selinux role to manage port access and the certificate role to generate certificates. To control the port access, use the new cockpit_manage_firewall and cockpit_manage_selinux variables.
Both variables are set to false by default and are not executed. Set them to true to allow the firewall and selinux roles to manage the RHEL web console service port access. The operations will then be executed within the cockpit role. Note that you are responsible for managing port access for firewall and SELinux. To generate certificates, use the new cockpit_certificates variable. The variable is set to false by default and is not executed. You can use this variable the same way you would use the certificate_request variable in the certificate role. The cockpit role will then use the certificate role to manage the RHEL web console certificates. Bugzilla:2137667 The selinux RHEL system role now supports the local parameter This update of the selinux RHEL system role introduces support for the local parameter. By using this parameter, you can remove only your local policy modifications and preserve the built-in SELinux policy. Bugzilla:2143385 New RHEL system role for direct integration with Active Directory The new rhel-system-roles.ad_integration RHEL system role was added to the rhel-system-roles package. As a result, administrators can now automate direct integration of a RHEL system with an Active Directory domain. Bugzilla:2144876 New Ansible Role for Red Hat Insights and subscription management The rhel-system-roles package now includes the remote host configuration ( rhc ) system role. This role enables administrators to easily register RHEL systems to Red Hat Subscription Management (RHSM) and Satellite servers. By default, when you register a system by using the rhc system role, the system connects to Red Hat Insights. With the new rhc system role, administrators can now automate the following tasks on the managed nodes: Configure the connection to Red Hat Insights, including automatic update, remediations, and tags for the system. Enable and disable repositories. Configure the proxy to use for the connection. Set the release of the system. For more information about how to automate these tasks, see Using the RHC system role to register the system . Bugzilla:2144877 Microsoft SQL Server Ansible role supports asynchronous high availability replicas Previously, Microsoft SQL Server Ansible role supported only primary, synchronous, and witness high availability replicas. Now, you can set the mssql_ha_replica_type variable to asynchronous to configure it with asynchronous replica type for a new or existing replica. Bugzilla:2144820 Microsoft SQL Server Ansible role supports the read-scale cluster type Previously, Microsoft SQL Ansible role supported only the external cluster type. Now, you can configure the role with a new variable mssql_ha_ag_cluster_type . The default value is external , use it to configure the cluster with Pacemaker. To configure the cluster without Pacemaker, use the value none for that variable. Bugzilla:2144821 Microsoft SQL Server Ansible role can generate TLS certificates Previously, you needed to generate a TLS certificate and a private key on the nodes manually before configuring the Microsoft SQL Ansible role. With this update, the Microsoft SQL Server Ansible role can use the redhat.rhel_system_roles.certificate role for that purpose. Now, you can set the mssql_tls_certificates variable in the format of the certificate_requests variable of the certificate role to generate a TLS certificate and a private key on the node. 
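As a hedged sketch of the cockpit role integration described earlier in this section: the host group, the certificate request values, and the fully qualified role name are assumptions, while the cockpit_manage_firewall, cockpit_manage_selinux, and cockpit_certificates variables come from the text.

    cat > cockpit.yml <<'EOF'
    ---
    - name: Install the web console, open its port, and issue a self-signed certificate
      hosts: webconsole_hosts
      become: true
      vars:
        cockpit_manage_firewall: true   # let the firewall role open the web console port
        cockpit_manage_selinux: true    # let the selinux role set the matching port policy
        cockpit_certificates:           # same format as the certificate role's requests
          - name: cockpit-self-signed   # assumed request name
            dns: ['localhost']
            ca: self-sign
      roles:
        - redhat.rhel_system_roles.cockpit
    EOF

    ansible-playbook -i inventory cockpit.yml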
Bugzilla:2144852 Microsoft SQL Server Ansible role supports configuring SQL Server version 2022 Previously, Microsoft SQL Ansible role supported only configuring SQL Server version 2017 and version 2019. This update provides you with the support for SQL Server version 2022 for Microsoft SQL Ansible role. Now, you can set mssql_version value to 2022 for configuring a new SQL Server 2022 or upgrading SQL Server from version 2019 to version 2022. Note that upgrade of an SQL Server from version 2017 to version 2022 is unavailable. Bugzilla:2153428 Microsoft SQL Server Ansible role supports configuration of the Active Directory authentication With this update, the Microsoft SQL Ansible role supports configuration of the Active Directory authentication for an SQL Server. Now, you can configure the Active Directory authentication by setting variables with the mssql_ad_ prefix. Bugzilla:2163696 The logging RHEL system role integration with the firewall , selinux , and certificate roles This enhancement enables you to integrate the logging role with the firewall role and the selinux role to manage port access and the certificate role to generate certificates. To control the port access, use the new logging_manage_firewall and logging_manage_selinux variables. Both variables are set to false by default and are not executed. Set them to true to execute the roles within the logging role. Note that you are responsible for managing port access for firewall and SELinux. To generate certificates, use the new logging_certificates variable. The variable is set to false by default and the certificate role is not executed. You can use this variable the same way you would use the certificate_request variable in the certificate role. The logging role will then use the certificate role to manage the certificates. Bugzilla:2130362 Routing rule is able to look up a route table by its name With this update, the rhel-system-roles.network RHEL system role supports looking up a route table by its name when you define a routing rule. This feature provides quick navigation for complex network configurations where you need to have different routing rules for different network segments. Bugzilla:2129620 Microsoft SQL Server Ansible role supports configuring SQL Server version 2022 Previously, Microsoft SQL Ansible role supported only configuring SQL Server version 2017 and version 2019. This update provides you with the support for SQL Server version 2022 for Microsoft SQL Ansible role. Now, you can set mssql_version value to 2022 for configuring a new SQL Server 2022 or upgrading SQL Server from version 2019 to version 2022. Note that upgrade of an SQL Server from version 2017 to version 2022 is unavailable. Bugzilla:2153427 The journald RHEL system role is now available The journald service collects and stores log data in a centralized database. With this enhancement, you can use the journald system role variables to automate the configuration of the systemd journal, and configure persistent logging by using the Red Hat Ansible Automation Platform. Bugzilla:2165176 The sshd RHEL system role can now use the firewall and selinux RHEL system roles to manage port access With this enhancement, you can automate managing port access by using the new role variables sshd_manage_firewall and sshd_manage_selinux . If they are set to true , each role is used to manage the port access. If they are set to false , which is default, the roles do not engage. Bugzilla:2149683 4.16. 
Virtualization Hardware cryptographic devices can now be automatically hot-plugged Previously, it was only possible to define cryptographic devices for passthrough if they were present on the host before the mediated device was started. Now, you can define a mediated device matrix that lists all the cryptographic devices that you want to pass through to your virtual machine (VM). As a result, the specified cryptographic devices are automatically passed through to the running VM if they become available later. Also, if the devices become unavailable, they are removed from the VM, but the guest operating system keeps running normally. Bugzilla:1660908 Improved performance for PCI passthrough devices on IBM Z With this update, the PCI passthrough implementation on IBM Z hardware has been enhanced through multiple improvements to I/O handling. As a result, PCI devices passed through to KVM virtual machines (VMs) on IBM Z hosts now have significantly better performance. In addition, ISM devices can now be assigned to VMs on IBM Z hosts. Bugzilla:1664379 RHEL 8 guests now support SEV-SNP On virtual machines (VMs) that use RHEL 8 as a guest operating system, you can now use AMD Secure Encrypted Virtualization (SEV) with the Secure Nested Paging (SNP) feature. Among other benefits, SNP enhances SEV by improving its memory integrity protection, which helps prevent hypervisor-based attacks such as data replay or memory re-mapping. Note that for SEV-SNP to work on a RHEL 8 VM, the host running the VM must support SEV-SNP as well. Bugzilla:2087262 zPCI device assignment It is now possible to attach zPCI devices as pass-through devices to virtual machines (VMs) hosted on RHEL running on IBM Z hardware. For example, this enables the use of NVMe flash drives in VMs. Jira:RHELPLAN-59528 4.17. Supportability The sos utility is moving to a 4-week update cadence Instead of releasing sos updates with RHEL minor releases, the sos utility release cadence is changing from 6 months to 4 weeks. You can find details about the updates for the sos package in the RPM changelog every 4 weeks or you can read a summary of sos updates in the RHEL Release Notes every 6 months. Bugzilla:2164987 The sos clean command now obfuscates IPv6 addresses Previously, the sos clean command did not obfuscate IPv6 addresses, leaving some customer-sensitive data in the collected sos report. With this update, sos clean detects and obfuscates IPv6 addresses as expected. Bugzilla:2134906 4.18. Containers New podman RHEL System Role is now available Beginning with Podman 4.2, you can use the podman System Role to manage Podman configuration, containers, and systemd services that run Podman containers. Jira:RHELPLAN-118698 Podman now supports events for auditing Beginning with Podman v4.4, you can gather all relevant information about a container directly from a single event and journald entry. To enable Podman auditing, modify the container.conf configuration file and add the events_container_create_inspect_data=true option to the [engine] section. The data is in JSON format, the same as from the podman container inspect command. For more information, see How to use new container events and auditing features in Podman 4.4 . Jira:RHELPLAN-136601 The Container Tools packages have been updated The updated Container Tools packages, which contain the Podman, Buildah, Skopeo, crun, and runc tools, are now available. This update applies a series of bug fixes and enhancements over the version. 
Notable changes in Podman v4.4 include: Introduce Quadlet, a new systemd-generator that easily creates and maintains systemd services using Podman. A new command, podman network update , has been added, which updates networks for containers and pods. A new command, podman buildx version , has been added, which shows the buildah version. Containers can now have startup healthchecks, allowing a command to be run to ensure the container is fully started before the regular healthcheck is activated. Support a custom DNS server selection using the podman --dns command. Creating and verifying sigstore signatures using Fulcio and Rekor is now available. Improved compatibility with Docker (new options and aliases). Improved Podman's Kubernetes integration - the commands podman kube generate and podman kube play are now available and replace the podman generate kube and podman play kube commands. The podman generate kube and podman play kube commands are still available but it is recommended to use the new podman kube commands. Systemd-managed pods created by the podman kube play command now integrate with sd-notify, using the io.containers.sdnotify annotation (or io.containers.sdnotify/USDname for specific containers). Systemd-managed pods created by podman kube play can now be auto-updated, using the io.containers.auto-update annotation (or io.containers.auto-update/USDname for specific containers). Podman has been upgraded to version 4.4, for further information about notable changes, see upstream release notes . Jira:RHELPLAN-136608 Aardvark and Netavark now support custom DNS server selection The Aardvark and Netavark network stack now support custom DNS server selection for containers instead of the default DNS servers on the host. You have two options for specifying the custom DNS server: Add the dns_servers field in the containers.conf configuration file. Use the new --dns Podman option to specify an IP address of the DNS server. The --dns option overrides the values in the container.conf file. Jira:RHELPLAN-138025 Skopeo now supports generating sigstore key pairs You can use the skopeo generate-sigstore-key command to generate a sigstore public/private key pair. For more information, see skopeo-generate-sigstore-key man page. Jira:RHELPLAN-151481 Toolbox is now available With the toolbox utility, you can use the containerized command-line environment without installing troubleshooting tools directly on your system. Toolbox is built on top of Podman and other standard container technologies from OCI. For more information, see toolbx . Jira:RHELPLAN-150266 The capability for multiple trusted GPG keys for signing images is available The /etc/containers/policy.json file supports a new keyPaths field which accepts a list of files containing the trusted keys. Because of this, the container images signed with Red Hat's General Availability and Beta GPG keys are now accepted in the default configuration. For example: Jira:RHELPLAN-118470 RHEL 8 Extended Update Support The RHEL Container Tools are now supported in RHEL 8 Extended Update Support (EUS) releases. More information on Red Hat Enterprise Linux EUS is available in Container Tools AppStream - Content Availability , Red Hat Enterprise Linux (RHEL) Extended Update Support (EUS) Overview . Jira:RHELPLAN-151121 The sigstore signatures are now available Beginning with Podman 4.2, you can use the sigstore format of container image signatures. 
The sigstore signatures are stored in the container registry together with the container image without the need to have a separate signature server to store image signatures. Jira:RHELPLAN-75165 Podman now supports the pre-execution hooks The root-owned plugin scripts located in the /usr/libexec/podman/pre-exec-hooks and /etc/containers/pre-exec-hooks directories define a fine-control over container operations, especially blocking unauthorized actions. The /etc/containers/podman_preexec_hooks.txt file must be created by an administrator and can be empty. If /etc/containers/podman_preexec_hooks.txt does not exist, the plugin scripts will not be executed. If all plugin scripts return zero value, then the podman command is executed, otherwise, the podman command exits with the inherited exit code. Red Hat recommends using the following naming convention to execute the scripts in the correct order: DDD- plugin_name . lang , for example 010-check-group.py . Note that the plugin scripts are valid at the time of creation. Containers created before plugin scripts are not affected. Bugzilla:2119200
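The following sketch shows one way the pre-execution hook mechanism described above could be wired up. The hook body, its name 010-deny-privileged.sh, and the assumption that each script receives the podman command line as its arguments are illustrative; only the directory paths, the activation file, and the zero/non-zero return convention come from the text.

    # The hooks only run when this file exists (it may be empty):
    touch /etc/containers/podman_preexec_hooks.txt

    # Install a root-owned hook script; the DDD- prefix controls execution order:
    mkdir -p /etc/containers/pre-exec-hooks
    cat > /etc/containers/pre-exec-hooks/010-deny-privileged.sh <<'EOF'
    #!/bin/bash
    # Assumed interface: the podman command line is passed in as arguments.
    # Return non-zero to block the command, zero to allow it.
    for arg in "$@"; do
        if [ "$arg" = "--privileged" ]; then
            echo "privileged containers are not allowed" >&2
            exit 1
        fi
    done
    exit 0
    EOF
    chmod 700 /etc/containers/pre-exec-hooks/010-deny-privileged.sh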
[ "[[packages]] name = \"microshift\" version = \"*\" [customizations.services] enabled = [\"microshift\"] [[customizations.firewall.zones]] name = \"trusted\" sources = [\"10.42.0.0/16\", \"169.254.169.1\"]", "Possible SYN flooding on port <ip_address>:<port>.", "rteval --summarize rteval-<date>-N.tar.bz2", "oslat -b 32 -D 10s -W 100 -z -c 1-4", "yum install python3.11 yum install python3.11-pip", "python3.11 python3.11 -m pip --help", "yum module install nginx:1.22", "SELECT ('{ \"postgres\": { \"release\": 15 }}'::jsonb)['postgres']['release'];", "postgres=# CREATE USER mydbuser; postgres=# GRANT ALL ON SCHEMA public TO mydbuser; postgres=# \\c postgres mydbuser postgres=USD CREATE TABLE mytable (id int);", "yum module install postgresql:15", "yum module install swig:4.1", "yum module install jaxb:4", "yum install gcc-toolset-12", "scl enable gcc-toolset-12 tool", "scl enable gcc-toolset-12 bash", "GLIBC_TUNABLES=glibc.rtld.dynamic_sort=1 export GLIBC_TUNABLES", "ipa-client-install --pkinit-identity=FILE:/path/to/cert.pem,/path/to/key.pem --pkinit-anchor=FILE:/path/to/cacerts.pem", "`pamModuleIsThreadSafe: yes`", "\"registry.redhat.io\": [ { \"type\": \"signedBy\", \"keyType\": \"GPGKeys\", \"keyPaths\": [\"/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release\", \"/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-beta\"] } ]" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/8.8_release_notes/new-features
Chapter 4. Examples
Chapter 4. Examples This chapter demonstrates the use of Red Hat build of Apache Qpid Proton DotNet through example programs. The examples are available from the source package or upstream, for more details and examples, see the Qpid Proton DotNet examples . 4.1. Sending messages This client program connects to a server using <serverHost> and <serverPort> , creates a sender for target <address> , sends 100 messages containing String and exits. Example: Sending messages using System; using Apache.Qpid.Proton.Client; 1 namespace Apache.Qpid.Proton.Examples.HelloWorld { class Program { private static readonly int MessageCount = 100; 2 static void Main(string[] args) { string serverHost = Environment.GetEnvironmentVariable("HOST") ?? "localhost"; 3 int serverPort = Convert.ToInt32(Environment.GetEnvironmentVariable("PORT") ?? "5672"); 4 string address = Environment.GetEnvironmentVariable("ADDRESS") ?? "send-receive-example"; IClient client = IClient.Create(); 5 ConnectionOptions options = new ConnectionOptions(); 6 options.User = Environment.GetEnvironmentVariable("USER"); options.Password = Environment.GetEnvironmentVariable("PASSWORD"); using IConnection connection = client.Connect(serverHost, serverPort, options); 7 using ISender sender = connection.OpenSender(address); 8 for (int i = 0; i < MessageCount; ++i) { IMessage<string> message = IMessage<string>.Create(string.Format("Hello World! [{0}]", i)); 9 ITracker tracker = sender.Send(message); 10 tracker.AwaitSettlement(); 11 Console.WriteLine(string.Format("Sent message to {0}: {1}", sender.Address, message.Body)); } } } } 1 using Apache.Qpid.Proton.Client; Imports types defined in the Proton namespace. Proton is defined by a project reference to library file Proton.Net.dll and provides all the classes, interfaces, and value types associated with Red Hat build of Apache Qpid Proton DotNet. 2 The number of messages to send. 3 serverHost is the network address of the host or virtual host for the AMQP connection and can be configured by setting the Environment variable 'HOST'. 4 serverPort is the port on the host that the broker is accepting connections and can be configured by setting the environment variable PORT . 5 Client is the container that can create multiple Connections to a broker. 6 options is used for various setting, including 'User' and 'Password'. See Section 5.1, "Connection Options" for more information. 7 connection is the AMQP Connection to a broker. 8 Create a sender for transferring messages to the broker. 9 In the message send loop a new message is created. 10 The message is sent to the broker. 11 Wait for the broker to settle the message. Running the example To run the example program, compile it and execute it from the command line. For more information, see Chapter 3, Getting started . <source-dir> \bin\Debug>Example.Send 4.2. Receiving messages This client program connects to a server using <connection-url> , creates a receiver for source <address> , and receives messages until it is terminated or it reaches <count> messages. Example: Receiving messages using System; using Apache.Qpid.Proton.Client; 1 namespace Apache.Qpid.Proton.Examples.HelloWorld { class Program { private static readonly int MessageCount = 100; 2 static void Main(string[] args) { string serverHost = Environment.GetEnvironmentVariable("HOST") ?? "localhost"; 3 int serverPort = Convert.ToInt32(Environment.GetEnvironmentVariable("PORT") ?? "5672"); 4 string address = Environment.GetEnvironmentVariable("ADDRESS") ?? 
"send-receive-example"; IClient client = IClient.Create(); 5 ConnectionOptions options = new ConnectionOptions(); 6 options.User = Environment.GetEnvironmentVariable("USER"); options.Password = Environment.GetEnvironmentVariable("PASSWORD"); using IConnection connection = client.Connect(serverHost, serverPort, options); 7 using IReceiver receiver = connection.OpenReceiver(address); 8 for (int i = 0; i < MessageCount; ++i) { IDelivery delivery = receiver.Receive(); 9 IMessage<object> received = delivery.Message(); 10 Console.WriteLine("Received message with body: " + received.Body); } } } } 1 using Apache.Qpid.Proton.Client; Imports types defined in the Proton namespace. Proton is defined by a project reference to library file Proton.Net.dll and provides all the classes, interfaces, and value types associated with Red Hat build of Apache Qpid Proton DotNet. 2 The number of messages to receive. 3 serverHost is the network address of the host or virtual host for the AMQP connection and can be configured by setting the Environment variable HOST . 4 serverPort is the port on the host that the broker is accepting connections and can be configured by setting the environment variable PORT . 5 Client is the container that can create multiple Connections to a broker. 6 options is used for various setting, including 'User' and 'Password'. See Section 5.1, "Connection Options" for more information. 7 connection is the AMQP Connection to a broker. 8 Create a receiver for receiving messages from the broker. 9 In the message receive loop a new delivery is received. 10 The message is obtained from the delivery . Running the example To run the example program, compile it and execute it from the command line. For more information, see Chapter 3, Getting started . <source-dir> \bin\Debug>Example.Receive
[ "using System; using Apache.Qpid.Proton.Client; 1 namespace Apache.Qpid.Proton.Examples.HelloWorld { class Program { private static readonly int MessageCount = 100; 2 static void Main(string[] args) { string serverHost = Environment.GetEnvironmentVariable(\"HOST\") ?? \"localhost\"; 3 int serverPort = Convert.ToInt32(Environment.GetEnvironmentVariable(\"PORT\") ?? \"5672\"); 4 string address = Environment.GetEnvironmentVariable(\"ADDRESS\") ?? \"send-receive-example\"; IClient client = IClient.Create(); 5 ConnectionOptions options = new ConnectionOptions(); 6 options.User = Environment.GetEnvironmentVariable(\"USER\"); options.Password = Environment.GetEnvironmentVariable(\"PASSWORD\"); using IConnection connection = client.Connect(serverHost, serverPort, options); 7 using ISender sender = connection.OpenSender(address); 8 for (int i = 0; i < MessageCount; ++i) { IMessage<string> message = IMessage<string>.Create(string.Format(\"Hello World! [{0}]\", i)); 9 ITracker tracker = sender.Send(message); 10 tracker.AwaitSettlement(); 11 Console.WriteLine(string.Format(\"Sent message to {0}: {1}\", sender.Address, message.Body)); } } } }", "<source-dir> \\bin\\Debug>Example.Send", "using System; using Apache.Qpid.Proton.Client; 1 namespace Apache.Qpid.Proton.Examples.HelloWorld { class Program { private static readonly int MessageCount = 100; 2 static void Main(string[] args) { string serverHost = Environment.GetEnvironmentVariable(\"HOST\") ?? \"localhost\"; 3 int serverPort = Convert.ToInt32(Environment.GetEnvironmentVariable(\"PORT\") ?? \"5672\"); 4 string address = Environment.GetEnvironmentVariable(\"ADDRESS\") ?? \"send-receive-example\"; IClient client = IClient.Create(); 5 ConnectionOptions options = new ConnectionOptions(); 6 options.User = Environment.GetEnvironmentVariable(\"USER\"); options.Password = Environment.GetEnvironmentVariable(\"PASSWORD\"); using IConnection connection = client.Connect(serverHost, serverPort, options); 7 using IReceiver receiver = connection.OpenReceiver(address); 8 for (int i = 0; i < MessageCount; ++i) { IDelivery delivery = receiver.Receive(); 9 IMessage<object> received = delivery.Message(); 10 Console.WriteLine(\"Received message with body: \" + received.Body); } } } }", "<source-dir> \\bin\\Debug>Example.Receive" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_qpid_proton_dotnet/1.0/html/using_qpid_proton_dotnet/examples
Automation content navigator creator guide
Automation content navigator creator guide Red Hat Ansible Automation Platform 2.4 Develop content that is compatible with Ansible Automation Platform Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html/automation_content_navigator_creator_guide/index
Preface
Preface As a developer of business decisions , you must deploy a developed Red Hat Decision Manager project to a KIE Server in order to begin using the services you have created in Red Hat Decision Manager. You can deploy and manage your Red Hat Decision Manager projects and assets using the Business Central interface or using KIE APIs.
null
https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/deploying_and_managing_red_hat_decision_manager_services/pr01
Chapter 29. flavor
Chapter 29. flavor This chapter describes the commands under the flavor command. 29.1. flavor create Create new flavor Usage: Table 29.1. Positional arguments Value Summary <flavor-name> New flavor name Table 29.2. Command arguments Value Summary -h, --help Show this help message and exit --id <id> Unique flavor id --ram <size-mb> Memory size in mb (default 256m) --disk <size-gb> Disk size in gb (default 0g) --ephemeral <size-gb> Ephemeral disk size in gb (default 0g) --swap <size-mb> Additional swap space size in mb (default 0m) --vcpus <vcpus> Number of vcpus (default 1) --rxtx-factor <factor> Rx/tx factor (default 1.0) --public Flavor is available to other projects (default) --private Flavor is not available to other projects --property <key=value> Property to add for this flavor (repeat option to set multiple properties) --project <project> Allow <project> to access private flavor (name or id) (Must be used with --private option) --description <description> Description for the flavor.(supported by api versions 2.55 - 2.latest --project-domain <project-domain> Domain the project belongs to (name or id). this can be used in case collisions between project names exist. Table 29.3. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 29.4. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 29.5. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 29.6. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 29.2. flavor delete Delete flavor(s) Usage: Table 29.7. Positional arguments Value Summary <flavor> Flavor(s) to delete (name or id) Table 29.8. Command arguments Value Summary -h, --help Show this help message and exit 29.3. flavor list List flavors Usage: Table 29.9. Command arguments Value Summary -h, --help Show this help message and exit --public List only public flavors (default) --private List only private flavors --all List all flavors, whether public or private --min-disk <min-disk> Filters the flavors by a minimum disk space, in gib. --min-ram <min-ram> Filters the flavors by a minimum ram, in mib. --long List additional fields in output --marker <flavor-id> The last flavor id of the page --limit <num-flavors> Maximum number of flavors to display. this is also configurable on the server. The actual limit used will be the lower of the user-supplied value and the server configuration-derived value Table 29.10. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 29.11. 
CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 29.12. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 29.13. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 29.4. flavor set Set flavor properties Usage: Table 29.14. Positional arguments Value Summary <flavor> Flavor to modify (name or id) Table 29.15. Command arguments Value Summary -h, --help Show this help message and exit --no-property Remove all properties from this flavor (specify both --no-property and --property to remove the current properties before setting new properties.) --property <key=value> Property to add or modify for this flavor (repeat option to set multiple properties) --project <project> Set flavor access to project (name or id) (admin only) --project-domain <project-domain> Domain the project belongs to (name or id). this can be used in case collisions between project names exist. --description <description> Set description for the flavor.(supported by api versions 2.55 - 2.latest 29.5. flavor show Display flavor details Usage: Table 29.16. Positional arguments Value Summary <flavor> Flavor to display (name or id) Table 29.17. Command arguments Value Summary -h, --help Show this help message and exit Table 29.18. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 29.19. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 29.20. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 29.21. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 29.6. flavor unset Unset flavor properties Usage: Table 29.22. Positional arguments Value Summary <flavor> Flavor to modify (name or id) Table 29.23. Command arguments Value Summary -h, --help Show this help message and exit --property <key> Property to remove from flavor (repeat option to unset multiple properties) --project <project> Remove flavor access from project (name or id) (admin only) --project-domain <project-domain> Domain the project belongs to (name or id). this can be used in case collisions between project names exist.
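A short worked example may help tie the subcommands in this chapter together; the flavor name, sizes, project, and property key below are arbitrary illustrations rather than recommendations.

    # Create a private flavor and allow one project to use it:
    openstack flavor create --vcpus 2 --ram 2048 --disk 20 \
        --private --project demo m1.demo

    # Attach a property, then inspect the result:
    openstack flavor set --property hw:cpu_policy=dedicated m1.demo
    openstack flavor show m1.demo

    # Remove the property and, finally, the flavor itself:
    openstack flavor unset --property hw:cpu_policy m1.demo
    openstack flavor delete m1.demo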
[ "openstack flavor create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--id <id>] [--ram <size-mb>] [--disk <size-gb>] [--ephemeral <size-gb>] [--swap <size-mb>] [--vcpus <vcpus>] [--rxtx-factor <factor>] [--public | --private] [--property <key=value>] [--project <project>] [--description <description>] [--project-domain <project-domain>] <flavor-name>", "openstack flavor delete [-h] <flavor> [<flavor> ...]", "openstack flavor list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--public | --private | --all] [--min-disk <min-disk>] [--min-ram <min-ram>] [--long] [--marker <flavor-id>] [--limit <num-flavors>]", "openstack flavor set [-h] [--no-property] [--property <key=value>] [--project <project>] [--project-domain <project-domain>] [--description <description>] <flavor>", "openstack flavor show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <flavor>", "openstack flavor unset [-h] [--property <key>] [--project <project>] [--project-domain <project-domain>] <flavor>" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_services_on_openshift/18.0/html/command_line_interface_reference/flavor
Chapter 9. Guest virtual machine device configuration
Chapter 9. Guest virtual machine device configuration Red Hat Enterprise Linux 6 supports three classes of devices for guest virtual machines: Emulated devices are purely virtual devices that mimic real hardware, allowing unmodified guest operating systems to work with them using their standard in-box drivers. Red Hat Enterprise Linux 6 supports up to 216 virtio devices. Virtio devices are purely virtual devices designed to work optimally in a virtual machine. Virtio devices are similar to emulated devices; however, non-Linux virtual machines do not include the drivers they require by default. Virtualization management software like the Virtual Machine Manager ( virt-manager ) and the Red Hat Virtualization Hypervisor (RHV-H) install these drivers automatically for supported non-Linux guest operating systems. Red Hat Enterprise Linux 6 supports up to 700 SCSI disks. Assigned devices are physical devices that are exposed to the virtual machine. This method is also known as 'passthrough'. Device assignment allows virtual machines exclusive access to PCI devices for a range of tasks, and allows PCI devices to appear and behave as if they were physically attached to the guest operating system. Red Hat Enterprise Linux 6 supports up to 32 assigned devices per virtual machine. Device assignment is supported on PCIe devices, including select graphics devices. Nvidia K-series Quadro, GRID, and Tesla graphics card GPU functions are now supported with device assignment in Red Hat Enterprise Linux 6. Parallel PCI devices may be supported as assigned devices, but they have severe limitations due to security and system configuration conflicts. Note The number of devices that can be attached to a virtual machine depends on several factors. One factor is the number of files open by the QEMU process (configured in /etc/security/limits.conf , which can be overridden by /etc/libvirt/qemu.conf ). Other limitation factors include the number of slots available on the virtual bus, as well as the system-wide limit on open files set by sysctl. For more information on specific devices and for limitations, refer to Section 20.16, "Devices" . Red Hat Enterprise Linux 6 supports PCI hot plug of devices exposed as single function slots to the virtual machine. Single function host devices and individual functions of multi-function host devices may be configured to enable this. Configurations exposing devices as multi-function PCI slots to the virtual machine are recommended only for non-hotplug applications. Note Platform support for interrupt remapping is required to fully isolate a guest with assigned devices from the host. Without such support, the host may be vulnerable to interrupt injection attacks from a malicious guest. In an environment where guests are trusted, the administrator may opt in to still allow PCI device assignment using the allow_unsafe_interrupts option to the vfio_iommu_type1 module. This may either be done persistently by adding a .conf file (for example local.conf ) to /etc/modprobe.d containing the following: or dynamically using the sysfs entry to do the same: 9.1. PCI Devices PCI device assignment is only available on hardware platforms supporting either Intel VT-d or AMD IOMMU. These Intel VT-d or AMD IOMMU specifications must be enabled in BIOS for PCI device assignment to function.
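The interrupt remapping note above refers to a modprobe .conf file and a sysfs entry without reproducing them; the following is a sketch of what they would typically contain for the allow_unsafe_interrupts option named in the note, to be used only after weighing the security trade-off the note describes.

    # Persistent opt-in: add the module option to /etc/modprobe.d/local.conf:
    cat > /etc/modprobe.d/local.conf <<'EOF'
    options vfio_iommu_type1 allow_unsafe_interrupts=1
    EOF

    # Dynamic opt-in on the running kernel, without a reboot:
    echo 1 > /sys/module/vfio_iommu_type1/parameters/allow_unsafe_interrupts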
Procedure 9.1. Preparing an Intel system for PCI device assignment Enable the Intel VT-d specifications The Intel VT-d specifications provide hardware support for directly assigning a physical device to a virtual machine. These specifications are required to use PCI device assignment with Red Hat Enterprise Linux. The Intel VT-d specifications must be enabled in the BIOS. Some system manufacturers disable these specifications by default. The terms used to refer to these specifications can differ between manufacturers; consult your system manufacturer's documentation for the appropriate terms. Activate Intel VT-d in the kernel Activate Intel VT-d in the kernel by adding the intel_iommu=on parameter to the end of the GRUB_CMDLINE_LINUX line, within the quotes, in the /etc/sysconfig/grub file. The example below is a modified grub file with Intel VT-d activated. Regenerate config file Regenerate /etc/grub2.cfg by running: Note that if you are using a UEFI-based host, the target file should be /etc/grub2-efi.cfg . Ready to use Reboot the system to enable the changes. Your system is now capable of PCI device assignment. Procedure 9.2. Preparing an AMD system for PCI device assignment Enable the AMD IOMMU specifications The AMD IOMMU specifications are required to use PCI device assignment in Red Hat Enterprise Linux. These specifications must be enabled in the BIOS. Some system manufacturers disable these specifications by default. Enable IOMMU kernel support Append amd_iommu=on to the end of the GRUB_CMDLINE_LINUX line, within the quotes, in /etc/sysconfig/grub so that AMD IOMMU specifications are enabled at boot. Regenerate config file Regenerate /etc/grub2.cfg by running: Note that if you are using a UEFI-based host, the target file should be /etc/grub2-efi.cfg . Ready to use Reboot the system to enable the changes. Your system is now capable of PCI device assignment. 9.1.1. Assigning a PCI Device with virsh These steps cover assigning a PCI device to a virtual machine on a KVM hypervisor. This example uses a PCIe network controller with the PCI identifier code, pci_0000_01_00_0 , and a fully virtualized guest machine named guest1-rhel6-64 . Procedure 9.3. Assigning a PCI device to a guest virtual machine with virsh Identify the device First, identify the PCI device designated for device assignment to the virtual machine. Use the lspci command to list the available PCI devices. You can refine the output of lspci with grep . This example uses the Ethernet controller highlighted in the following output: This Ethernet controller is shown with the short identifier 00:19.0 . We need to find out the full identifier used by virsh in order to assign this PCI device to a virtual machine. To do so, use the virsh nodedev-list command to list all devices of a particular type ( pci ) that are attached to the host machine. Then look at the output for the string that maps to the short identifier of the device you wish to use. This example highlights the string that maps to the Ethernet controller with the short identifier 00:19.0 . In this example, the : and . characters are replaced with underscores in the full identifier. Record the PCI device number that maps to the device you want to use; this is required in other steps. Review device information Information on the domain, bus, and function is available from the output of the virsh nodedev-dumpxml command: Note An IOMMU group is determined based on the visibility and isolation of devices from the perspective of the IOMMU. Each IOMMU group may contain one or more devices.
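Because the procedure above refers to command output that is not reproduced here, the following condensed sketch strings the steps together in one place. The device and guest names (00:19.0, pci_0000_00_19_0, guest1-rhel6-64) come from the procedure; the hostdev XML shown is the standard libvirt form of the decimal values bus=0, slot=25, function=0 mentioned in the next step and may need adjusting for your device.

    # 1. Identify the device and its virsh name:
    lspci | grep -i ethernet
    virsh nodedev-list --cap pci | grep 0000_00_19

    # 2. Review the domain, bus, slot, and function, and the IOMMU group members:
    virsh nodedev-dumpxml pci_0000_00_19_0

    # 3. Describe the device for the guest (managed='yes' lets libvirt rebind drivers):
    cat > /tmp/new-dev.xml <<'EOF'
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <source>
        <address domain='0x0000' bus='0x00' slot='0x19' function='0x0'/>
      </source>
    </hostdev>
    EOF

    # 4. Attach it to the guest's persistent configuration and start the guest:
    virsh attach-device guest1-rhel6-64 /tmp/new-dev.xml --config
    virsh start guest1-rhel6-64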
When multiple devices are present, all endpoints within the IOMMU group must be claimed for any device within the group to be assigned to a guest. This can be accomplished either by also assigning the extra endpoints to the guest or by detaching them from the host driver using virsh nodedev-detach . Devices contained within a single group may not be split between multiple guests or split between host and guest. Non-endpoint devices such as PCIe root ports, switch ports, and bridges should not be detached from the host drivers and will not interfere with assignment of endpoints. Devices within an IOMMU group can be determined using the iommuGroup section of the virsh nodedev-dumpxml output. Each member of the group is provided via a separate "address" field. This information may also be found in sysfs using the following: An example of the output from this would be: To assign only 0000:01:00.0 to the guest, the unused endpoint should be detached from the host before starting the guest: Determine required configuration details Refer to the output from the virsh nodedev-dumpxml pci_0000_00_19_0 command for the values required for the configuration file. The example device has the following values: bus = 0, slot = 25, and function = 0. The decimal configuration uses those three values: Add configuration details Run virsh edit , specifying the virtual machine name, and add a device entry in the <devices> section to assign the PCI device to the guest virtual machine. Alternatively, run virsh attach-device , specifying the virtual machine name and the guest's XML file: Start the virtual machine The PCI device should now be successfully assigned to the virtual machine, and accessible to the guest operating system. 9.1.2. Assigning a PCI Device with virt-manager PCI devices can be added to guest virtual machines using the graphical virt-manager tool. The following procedure adds a Gigabit Ethernet controller to a guest virtual machine. Procedure 9.4. Assigning a PCI device to a guest virtual machine using virt-manager Open the hardware settings Open the guest virtual machine and click the Add Hardware button to add a new device to the virtual machine. Figure 9.1. The virtual machine hardware information window Select a PCI device Select PCI Host Device from the Hardware list on the left. Select an unused PCI device. If you select a PCI device that is in use by another guest, an error may result. In this example, a spare 82576 network device is used. Click Finish to complete setup. Figure 9.2. The Add new virtual hardware wizard Add the new device The setup is complete and the guest virtual machine now has direct access to the PCI device. Figure 9.3. The virtual machine hardware information window Note If device assignment fails, there may be other endpoints in the same IOMMU group that are still attached to the host. There is no way to retrieve group information using virt-manager, but virsh commands can be used to analyze the bounds of the IOMMU group and, if necessary, sequester devices. Refer to the Note in Section 9.1.1, "Assigning a PCI Device with virsh" for more information on IOMMU groups and how to detach endpoint devices using virsh. 9.1.3. PCI Device Assignment with virt-install To use virt-install to assign a PCI device, use the --host-device parameter. Procedure 9.5. Assigning a PCI device to a virtual machine with virt-install Identify the device Identify the PCI device designated for device assignment to the guest virtual machine.
The virsh nodedev-list command lists all devices attached to the system, and identifies each PCI device with a string. To limit output to only PCI devices, run the following command: Record the PCI device number; the number is needed in other steps. Information on the domain, bus, and function is available from the output of the virsh nodedev-dumpxml command: Note If there are multiple endpoints in the IOMMU group and not all of them are assigned to the guest, you will need to manually detach the other endpoint(s) from the host by running the following command before you start the guest: Refer to the Note in Section 9.1.1, "Assigning a PCI Device with virsh" for more information on IOMMU groups. Add the device Use the PCI identifier output from the virsh nodedev-list command as the value for the --host-device parameter. Complete the installation Complete the guest installation. The PCI device should be attached to the guest. 9.1.4. Detaching an Assigned PCI Device When a host PCI device has been assigned to a guest machine, the host can no longer use the device. Read this section to learn how to detach the device from the guest with virsh or virt-manager so it is available for host use. Procedure 9.6. Detaching a PCI device from a guest with virsh Detach the device Use the following command to detach the PCI device from the guest by removing it from the guest's XML file: Re-attach the device to the host (optional) If the device is in managed mode, skip this step. The device will be returned to the host automatically. If the device is not using managed mode, use the following command to re-attach the PCI device to the host machine: For example, to re-attach the pci_0000_01_00_0 device to the host: The device is now available for host use. Procedure 9.7. Detaching a PCI Device from a guest with virt-manager Open the virtual hardware details screen In virt-manager , double-click on the virtual machine that contains the device. Select the Show virtual hardware details button to display a list of virtual hardware. Figure 9.4. The virtual hardware details button Select and remove the device Select the PCI device to be detached from the list of virtual devices in the left panel. Figure 9.5. Selecting the PCI device to be detached Click the Remove button to confirm. The device is now available for host use. 9.1.5. Creating PCI Bridges Peripheral Component Interconnect (PCI) bridges are used to attach to devices such as network cards, modems, and sound cards. Just like their physical counterparts, virtual devices can also be attached to a PCI bridge. In the past, only 31 PCI devices could be added to any guest virtual machine. Now, when a 31st PCI device is added, a PCI bridge is automatically placed in the 31st slot, moving the additional PCI device to the PCI bridge. Each PCI bridge has 31 slots for 31 additional devices, all of which can be bridges. In this manner, over 900 devices can be available for guest virtual machines. Note This action cannot be performed when the guest virtual machine is running. You must add the PCI device on a guest virtual machine that is shut down. 9.1.6. PCI Passthrough A PCI network device (specified by the <source> element) is directly assigned to the guest using generic device passthrough , after first optionally setting the device's MAC address to the configured value, and associating the device with an 802.1Qbh capable switch using an optionally specified <virtualport> element (see the examples of virtualport given above for type='direct' network devices).
Due to limitations in standard single-port PCI Ethernet card driver design, only SR-IOV (Single Root I/O Virtualization) virtual function (VF) devices can be assigned in this manner; to assign a standard single-port PCI or PCIe Ethernet card to a guest, use the traditional <hostdev> device definition. To use VFIO device assignment rather than traditional/legacy KVM device assignment (VFIO is a new method of device assignment that is compatible with UEFI Secure Boot), an <interface type='hostdev'> interface can have an optional <driver> sub-element with a name attribute set to "vfio". To use legacy KVM device assignment, you can set name to "kvm" (or simply omit the <driver> element, since <driver name='kvm'> is currently the default). Note Intelligent passthrough of network devices is very similar to the functionality of a standard <hostdev> device, the difference being that this method allows specifying a MAC address and <virtualport> for the passed-through device. If these capabilities are not required, if you have a standard single-port PCI, PCIe, or USB network card that does not support SR-IOV (and hence would anyway lose the configured MAC address during reset after being assigned to the guest domain), or if you are using a version of libvirt older than 0.9.11, you should use standard <hostdev> to assign the device to the guest instead of <interface type='hostdev'/> . <devices> <interface type='hostdev'> <driver name='vfio'/> <source> <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/> </source> <mac address='52:54:00:6d:90:02'/> <virtualport type='802.1Qbh'> <parameters profileid='finance'/> </virtualport> </interface> </devices> Figure 9.6. XML example for PCI device assignment 9.1.7. Configuring PCI Assignment (Passthrough) with SR-IOV Devices This section is for SR-IOV devices only. SR-IOV network cards provide multiple Virtual Functions (VFs) that can each be individually assigned to guest virtual machines using PCI device assignment. Once assigned, each will behave as a full physical network device. This permits many guest virtual machines to gain the performance advantage of direct PCI device assignment, while only using a single slot on the host physical machine. These VFs can be assigned to guest virtual machines in the traditional manner using the element <hostdev> , but as SR-IOV VF network devices do not have permanent unique MAC addresses, it causes issues where the guest virtual machine's network settings would have to be re-configured each time the host physical machine is rebooted. To remedy this, you would need to set the MAC address prior to assigning the VF to the guest virtual machine, and you would need to set this each and every time the guest virtual machine boots. In order to assign this MAC address as well as other options, refer to the procedure described in Procedure 9.8, "Configuring MAC addresses, vLAN, and virtual ports for assigning PCI devices on SR-IOV" . Procedure 9.8. Configuring MAC addresses, vLAN, and virtual ports for assigning PCI devices on SR-IOV It is important to note that the <hostdev> element cannot be used for function-specific items like MAC address assignment, vLAN tag ID assignment, or virtual port assignment because the <mac> , <vlan> , and <virtualport> elements are not valid children for <hostdev> . As they are valid for <interface> , support for a new interface type was added ( <interface type='hostdev'> ). This new interface device type behaves as a hybrid of an <interface> and <hostdev> .
Thus, before assigning the PCI device to the guest virtual machine, libvirt initializes the network-specific hardware/switch that is indicated (such as setting the MAC address, setting a vLAN tag, or associating with an 802.1Qbh switch) in the guest virtual machine's XML configuration file. For information on setting the vLAN tag, refer to Section 18.14, "Setting vLAN Tags" . Shutdown the guest virtual machine Using the virsh shutdown command (refer to Section 14.9.1, "Shutting Down a Guest Virtual Machine" ), shut down the guest virtual machine named guestVM . # virsh shutdown guestVM Gather information In order to use <interface type='hostdev'> , you must have an SR-IOV-capable network card, host physical machine hardware that supports either the Intel VT-d or AMD IOMMU extensions, and you must know the PCI address of the VF that you wish to assign. Open the XML file for editing Run the virsh save-image-edit command to open the XML file for editing (refer to Section 14.8.10, "Edit Domain XML Configuration Files" for more information). Because you want to restore the guest virtual machine to its former running state, the --running option is used in this case. The name of the configuration file in this example is guestVM.xml , as the name of the guest virtual machine is guestVM . # virsh save-image-edit guestVM.xml --running The guestVM.xml opens in your default editor. Edit the XML file Update the configuration file ( guestVM.xml ) to have a <devices> entry similar to the following: <devices> ... <interface type='hostdev' managed='yes'> <source> <address type='pci' domain='0x0' bus='0x00' slot='0x07' function='0x0'/> <!--these values can be decimal as well--> </source> <mac address='52:54:00:6d:90:02'/> <!--sets the mac address--> <virtualport type='802.1Qbh'> <!--sets the virtual port for the 802.1Qbh switch--> <parameters profileid='finance'/> </virtualport> <vlan> <!--sets the vlan tag--> <tag id='42'/> </vlan> </interface> ... </devices> Figure 9.7. Sample domain XML for hostdev interface type Note that if you do not provide a MAC address, one will be automatically generated, just as with any other type of interface device. Also, the <virtualport> element is only used if you are connecting to an 802.1Qbh hardware switch; 802.1Qbg (also known as "VEPA") switches are currently not supported. Re-start the guest virtual machine Run the virsh start command to restart the guest virtual machine you shut down in the first step (example uses guestVM as the guest virtual machine's domain name). Refer to Section 14.8.1, "Starting a Defined Domain" for more information. # virsh start guestVM When the guest virtual machine starts, it sees the network device provided to it by the physical host machine's adapter, with the configured MAC address. This MAC address will remain unchanged across guest virtual machine and host physical machine reboots. 9.1.8. Setting PCI Device Assignment from a Pool of SR-IOV Virtual Functions Hard coding the PCI addresses of particular Virtual Functions (VFs) into a guest's configuration has two serious limitations: The specified VF must be available any time the guest virtual machine is started, implying that the administrator must permanently assign each VF to a single guest virtual machine (or modify the configuration file for every guest virtual machine to specify a currently unused VF's PCI address each time every guest virtual machine is started).
If the guest virtual machine is moved to another host physical machine, that host physical machine must have exactly the same hardware in the same location on the PCI bus (or, again, the guest virtual machine configuration must be modified prior to start). It is possible to avoid both of these problems by creating a libvirt network with a device pool containing all the VFs of an SR-IOV device. Once that is done, you configure the guest virtual machine to reference this network. Each time the guest is started, a single VF will be allocated from the pool and assigned to the guest virtual machine. When the guest virtual machine is stopped, the VF will be returned to the pool for use by another guest virtual machine. Procedure 9.9. Creating a device pool Shutdown the guest virtual machine Using the virsh shutdown command (refer to Section 14.9, "Shutting Down, Rebooting, and Forcing Shutdown of a Guest Virtual Machine" ), shut down the guest virtual machine named guestVM . # virsh shutdown guestVM Create a configuration file Using your editor of choice, create an XML file (named passthrough.xml , for example) in the /tmp directory. Make sure to replace pf dev='eth3' with the netdev name of your own SR-IOV device's PF. The following is an example network definition that will make available a pool of all VFs for the SR-IOV adapter with its physical function (PF) at eth3 on the host physical machine: <network> <name>passthrough</name> <!--This is the name of the file you created--> <forward mode='hostdev' managed='yes'> <pf dev='myNetDevName'/> <!--Use the netdev name of your SR-IOV devices PF here--> </forward> </network> Figure 9.8. Sample network definition domain XML Load the new XML file Run the following command, replacing /tmp/passthrough.xml with the name and location of the XML file you created in the previous step: # virsh net-define /tmp/passthrough.xml Start the network and set it to autostart Run the following commands, replacing passthrough with the name of the network you defined in the previous step: # virsh net-autostart passthrough # virsh net-start passthrough Re-start the guest virtual machine Run the virsh start command to restart the guest virtual machine you shut down in the first step (example uses guestVM as the guest virtual machine's domain name). Refer to Section 14.8.1, "Starting a Defined Domain" for more information. # virsh start guestVM Initiating passthrough for devices Although only a single device is shown, libvirt will automatically derive the list of all VFs associated with that PF the first time a guest virtual machine is started with an interface definition in its domain XML like the following: <interface type='network'> <source network='passthrough'/> </interface> Figure 9.9.
Sample domain XML for interface network definition Verification You can verify this by running the virsh net-dumpxml passthrough command after starting the first guest that uses the network; you will get output similar to the following: <network connections='1'> <name>passthrough</name> <uuid>a6b49429-d353-d7ad-3185-4451cc786437</uuid> <forward mode='hostdev' managed='yes'> <pf dev='eth3'/> <address type='pci' domain='0x0000' bus='0x02' slot='0x10' function='0x1'/> <address type='pci' domain='0x0000' bus='0x02' slot='0x10' function='0x3'/> <address type='pci' domain='0x0000' bus='0x02' slot='0x10' function='0x5'/> <address type='pci' domain='0x0000' bus='0x02' slot='0x10' function='0x7'/> <address type='pci' domain='0x0000' bus='0x02' slot='0x11' function='0x1'/> <address type='pci' domain='0x0000' bus='0x02' slot='0x11' function='0x3'/> <address type='pci' domain='0x0000' bus='0x02' slot='0x11' function='0x5'/> </forward> </network> Figure 9.10. XML dump file passthrough contents
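If you want to confirm the result, or later remove the pool when it is no longer needed, the standard libvirt network commands apply; for example, using the passthrough network name from this example (stop or reconfigure any guests that still reference the network before removing it):
# virsh net-list --all
# virsh net-destroy passthrough
# virsh net-undefine passthrough
net-list --all confirms that the network is defined, active, and marked for autostart; net-destroy stops it and net-undefine deletes its persistent definition.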
[ "options vfio_iommu_type1 allow_unsafe_interrupts=1", "echo 1 > /sys/module/vfio_iommu_type1/parameters/allow_unsafe_interrupts", "GRUB_CMDLINE_LINUX=\"rd.lvm.lv=vg_VolGroup00/LogVol01 vconsole.font=latarcyrheb-sun16 rd.lvm.lv=vg_VolGroup_1/root vconsole.keymap=us USD([ -x /usr/sbin/rhcrashkernel-param ] && /usr/sbin/ rhcrashkernel-param || :) rhgb quiet intel_iommu=on \"", "grub2-mkconfig -o /etc/grub2.cfg", "grub2-mkconfig -o /etc/grub2.cfg", "lspci | grep Ethernet 00:19.0 Ethernet controller: Intel Corporation 82567LM-2 Gigabit Network Connection 01:00.0 Ethernet controller: Intel Corporation 82576 Gigabit Network Connection (rev 01) 01:00.1 Ethernet controller: Intel Corporation 82576 Gigabit Network Connection (rev 01)", "virsh nodedev-list --cap pci pci_0000_00_00_0 pci_0000_00_01_0 pci_0000_00_03_0 pci_0000_00_07_0 pci_0000_00_10_0 pci_0000_00_10_1 pci_0000_00_14_0 pci_0000_00_14_1 pci_0000_00_14_2 pci_0000_00_14_3 pci_0000_ 00_19_0 pci_0000_00_1a_0 pci_0000_00_1a_1 pci_0000_00_1a_2 pci_0000_00_1a_7 pci_0000_00_1b_0 pci_0000_00_1c_0 pci_0000_00_1c_1 pci_0000_00_1c_4 pci_0000_00_1d_0 pci_0000_00_1d_1 pci_0000_00_1d_2 pci_0000_00_1d_7 pci_0000_00_1e_0 pci_0000_00_1f_0 pci_0000_00_1f_2 pci_0000_00_1f_3 pci_0000_01_00_0 pci_0000_01_00_1 pci_0000_02_00_0 pci_0000_02_00_1 pci_0000_06_00_0 pci_0000_07_02_0 pci_0000_07_03_0", "virsh nodedev-dumpxml pci_0000_00_19_0 <device> <name>pci_0000_00_19_0</name> <parent>computer</parent> <driver> <name>e1000e</name> </driver> <capability type='pci'> <domain>0</domain> <bus>0</bus> <slot>25</slot> <function>0</function> <product id='0x1502'>82579LM Gigabit Network Connection</product> <vendor id='0x8086'>Intel Corporation</vendor> <iommuGroup number='7'> <address domain='0x0000' bus='0x00' slot='0x19' function='0x0'/> </iommuGroup> </capability> </device>", "ls /sys/bus/pci/devices/ 0000:01:00.0 /iommu_group/devices/", "0000:01:00.0 0000:01:00.1", "virsh nodedev-detach pci_0000_01_00_1", "bus='0' slot='25' function='0'", "virsh edit guest1-rhel6-64 <hostdev mode='subsystem' type='pci' managed='yes'> <source> <address domain='0' bus='0' slot='25' function='0'/> </source> </hostdev>", "virsh attach-device guest1-rhel6-64 file.xml", "virsh start guest1-rhel6-64", "lspci | grep Ethernet 00:19.0 Ethernet controller: Intel Corporation 82567LM-2 Gigabit Network Connection 01:00.0 Ethernet controller: Intel Corporation 82576 Gigabit Network Connection (rev 01) 01:00.1 Ethernet controller: Intel Corporation 82576 Gigabit Network Connection (rev 01)", "virsh nodedev-list --cap pci pci_0000_00_00_0 pci_0000_00_01_0 pci_0000_00_03_0 pci_0000_00_07_0 pci_0000_00_10_0 pci_0000_00_10_1 pci_0000_00_14_0 pci_0000_00_14_1 pci_0000_00_14_2 pci_0000_00_14_3 pci_0000_00_19_0 pci_0000_00_1a_0 pci_0000_00_1a_1 pci_0000_00_1a_2 pci_0000_00_1a_7 pci_0000_00_1b_0 pci_0000_00_1c_0 pci_0000_00_1c_1 pci_0000_00_1c_4 pci_0000_00_1d_0 pci_0000_00_1d_1 pci_0000_00_1d_2 pci_0000_00_1d_7 pci_0000_00_1e_0 pci_0000_00_1f_0 pci_0000_00_1f_2 pci_0000_00_1f_3 pci_0000_01_00_0 pci_0000_01_00_1 pci_0000_02_00_0 pci_0000_02_00_1 pci_0000_06_00_0 pci_0000_07_02_0 pci_0000_07_03_0", "virsh nodedev-dumpxml pci_0000_01_00_0 <device> <name>pci_0000_01_00_0</name> <parent>pci_0000_00_01_0</parent> <driver> <name>igb</name> </driver> <capability type='pci'> <domain>0</domain> <bus>1</bus> <slot>0</slot> <function>0</function> <product id='0x10c9'>82576 Gigabit Network Connection</product> <vendor id='0x8086'>Intel Corporation</vendor> <iommuGroup number='7'> <address domain='0x0000' bus='0x00' 
slot='0x19' function='0x0'/> </iommuGroup> </capability> </device>", "virsh nodedev-detach pci_0000_00_19_1", "virt-install --name=guest1-rhel6-64 --disk path=/var/lib/libvirt/images/guest1-rhel6-64.img,size=8 --nonsparse --graphics spice --vcpus=2 --ram=2048 --location=http://example1.com/installation_tree/RHEL6.0-Server-x86_64/os --nonetworks --os-type=linux --os-variant=rhel6 --host-device= pci_0000_01_00_0", "virsh detach-device name_of_guest file.xml", "virsh nodedev-reattach device", "virsh nodedev-reattach pci_0000_01_00_0", "<devices> <interface type='hostdev'> <driver name='vfio'/> <source> <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/> </source> <mac address='52:54:00:6d:90:02'> <virtualport type='802.1Qbh'> <parameters profileid='finance'/> </virtualport> </interface> </devices>", "<devices> <interface type='hostdev' managed='yes'> <source> <address type='pci' domain='0x0' bus='0x00' slot='0x07' function='0x0'/> <!--these values can be decimal as well--> </source> <mac address='52:54:00:6d:90:02'/> <!--sets the mac address--> <virtualport type='802.1Qbh'> <!--sets the virtual port for the 802.1Qbh switch--> <parameters profileid='finance'/> </virtualport> <vlan> <!--sets the vlan tag--> <tag id='42'/> </vlan> </interface> </devices>", "<network> <name>passthrough</name> <!--This is the name of the file you created--> <forward mode='hostdev' managed='yes'> <pf dev='myNetDevName'/> <!--Use the netdev name of your SR-IOV devices PF here--> </forward> </network>", "<interface type='network'> <source network='passthrough'> </interface>", "<network connections='1'> <name>passthrough</name> <uuid>a6b49429-d353-d7ad-3185-4451cc786437</uuid> <forward mode='hostdev' managed='yes'> <pf dev='eth3'/> <address type='pci' domain='0x0000' bus='0x02' slot='0x10' function='0x1'/> <address type='pci' domain='0x0000' bus='0x02' slot='0x10' function='0x3'/> <address type='pci' domain='0x0000' bus='0x02' slot='0x10' function='0x5'/> <address type='pci' domain='0x0000' bus='0x02' slot='0x10' function='0x7'/> <address type='pci' domain='0x0000' bus='0x02' slot='0x11' function='0x1'/> <address type='pci' domain='0x0000' bus='0x02' slot='0x11' function='0x3'/> <address type='pci' domain='0x0000' bus='0x02' slot='0x11' function='0x5'/> </forward> </network>" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/chap-Guest_virtual_machine_device_configuration
Chapter 5. Installing RHEL AI on Google Cloud Platform (GCP) (Technology preview)
Chapter 5. Installing RHEL AI on Google Cloud Platform (GCP) (Technology preview) To install and deploy Red Hat Enterprise Linux AI on Google Cloud Platform, you must first convert the RHEL AI image into a GCP image. You can then launch an instance using the GCP image and deploy RHEL AI on a Google Cloud Platform machine. Important Installing Red Hat Enterprise Linux AI on GCP is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 5.1. Converting the RHEL AI image into a Google Cloud Platform image. To create a bootable image in Google Cloud Platform, you must configure your Google Cloud Platform account, create a Google Cloud Storage bucket, and create a Google Cloud Platform image using the RHEL AI raw image. Prerequisites You installed the Google Cloud Platform CLI on your specific machine. For more information about installing the GCP CLI, see Install the Google Cloud Platform CLI on Linux . You must be on a Red Hat Enterprise Linux version 9.2 - 9.4 system. Your machine must have an additional 100 GB of disk space. Procedure Log in to Google Cloud Platform with the following command: USD gcloud auth login Example output of the login. USD gcloud auth login Your browser has been opened to visit: https://accounts.google.com/o/oauth2/auth?XXXXXXXXXXXXXXXXXXXX You are now logged in as [[email protected]]. Your current project is [your-project]. You can change this setting by running: USD gcloud config set project PROJECT_ID You need to set up some Google Cloud Platform configurations and create your GCP Storage Container before creating the GCP image. Configure the Google Cloud Platform CLI to use your project. USD gcloud_project=your-gcloud-project USD gcloud config set project USDgcloud_project Create an environment variable defining the region where you want to operate. USD gcloud_region=us-central1 Create a Google Cloud Platform Storage Container. USD gcloud_bucket=name-for-your-bucket USD gsutil mb -l USDgcloud_region gs://USDgcloud_bucket Red Hat currently does not provide RHEL AI Google Cloud Platform images. You need to create a GCP disk image using the RHEL AI bootc image as a base. Create this Containerfile file, using the appropriate version of RHEL AI in the FROM line.
FROM registry.redhat.io/rhelai1/bootc-nvidia-rhel9:1.2 RUN eval USD(grep VERSION_ID /etc/os-release) \ && echo -e "[google-compute-engine]\nname=Google Compute Engine\nbaseurl=https://packages.cloud.google.com/yum/repos/google-compute-engine-elUSD{VERSION_ID/.*}-x86_64-stable\nenabled=1\ngpgcheck=1\nrepo_gpgcheck=0\ngpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg\n https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg" > /etc/yum.repos.d/google-cloud.repo \ && dnf install -y --nobest \ acpid \ cloud-init \ google-compute-engine \ google-osconfig-agent \ langpacks-en \ rng-tools \ timedatex \ tuned \ vim \ && curl -sSo /tmp/add-google-cloud-ops-agent-repo.sh https://dl.google.com/cloudagents/add-google-cloud-ops-agent-repo.sh \ && bash /tmp/add-google-cloud-ops-agent-repo.sh --also-install --remove-repo \ && rm /tmp/add-google-cloud-ops-agent-repo.sh \ && mkdir -p /var/lib/rpm-state \ && dnf remove -y irqbalance microcode_ctl \ && rmdir /var/lib/rpm-state \ && rm -f /etc/yum.repos.d/google-cloud.repo \ && sed -i -e '/^pool /c\server metadata.google.internal iburst' /etc/chrony.conf \ && echo -e 'PermitRootLogin no\nPasswordAuthentication no\nClientAliveInterval 420' >> /etc/ssh/sshd_config \ && echo -e '[InstanceSetup]\nset_boto_config = false' > /etc/default/instance_configs.cfg \ && echo 'blacklist floppy' > /etc/modprobe.d/blacklist_floppy.conf \ && echo -e '[install]\nkargs = ["net.ifnames=0", "biosdevname=0", "scsi_mod.use_blk_mq=Y", "console=ttyS0,38400n8d", "cloud-init=disabled"]' > /usr/lib/bootc/install/05-cloud-kargs.toml Build the bootc image, in the same directory that holds the Containerfile , by running the following commands: USD GCP_BOOTC_IMAGE=quay.io/yourquayusername/bootc-nvidia-rhel9-gcp USD podman build --file Containerfile --tag USD{GCP_BOOTC_IMAGE} . Note Ensure you are running the podman build command from a RHEL enabled system. If you are not on a RHEL system, building the Containerfile file will fail. Create a config.toml file that will be used in disk image generation. [customizations.kernel] name = "gcp" append = "net.ifnames=0 biosdevname=0 scsi_mod.use_blk_mq=Y console=ttyS0,38400n8d cloud-init=disabled" Build the disk image using bootc-image-builder by running the following commands: USD mkdir -p build/store build/output USD podman run --rm -ti --privileged --pull newer \ -v /var/lib/containers/storage:/var/lib/containers/storage \ -v ./build/store:/store -v ./build/output:/output \ -v ./config.toml:/config.toml \ quay.io/centos-bootc/bootc-image-builder \ --config /config.toml \ --chown 0:0 \ --local \ --type raw \ --target-arch x86_64 \ USD{GCP_BOOTC_IMAGE} Set the name you want to use as the RHEL AI Google Cloud Platform image. USD image_name=rhel-ai-1-2 Create a tar.gz file containing the RAW file you created. USD raw_file=<path-to-raw-file> USD tar cf rhelai_gcp.tar.gz --transform "s|USDraw_file|disk.raw|" --use-compress-program=pigz "USDraw_file" Note You can use gzip instead of pigz . Upload the tar.gz file to the Google Cloud Platform Storage Container by running the following command: USD gsutil cp rhelai_gcp.tar.gz "gs://USD{gcloud_bucket}/USDimage_name.tar.gz" Create an Google Cloud Platform image from the tar.gz file you just uploaded with the following command: USD gcloud compute images create \ "USDimage_name" \ --source-uri="gs://USD{gcloud_bucket}/USDimage_name.tar.gz" \ --family "rhel-ai" \ --guest-os-features=GVNIC 5.2. 
Deploying your instance on Google Cloud Platform using the CLI You can launch an instance with your new RHEL AI Google Cloud Platform image from the Google Cloud Platform web console or the CLI. You can use whichever method of deployment you want to launch your instance. The following procedure shows how you can use the CLI to launch a Google Cloud Platform instance with the custom Google Cloud Platform image. If you choose to use the CLI as a deployment option, there are several configurations you have to create, as shown in "Prerequisites". Prerequisites You created your RHEL AI Google Cloud Platform image. For more information, see "Converting the RHEL AI image to a Google Cloud Platform image". You installed the Google Cloud Platform CLI on your specific machine. For more information, see Install the Google Cloud Platform CLI on Linux . Procedure Log in to your Google Cloud Platform account by running the following command: USD gcloud auth login Before launching your Google Cloud Platform instance on the CLI, you need to create several configuration variables for your instance. You need to select the machine type that you want to use for the deployment. List all the machine types in the desired zone by running the following command: USD gcloud compute machine-types list --zones=<zone> Make a note of your preferred machine type; you will need it for your instance deployment. You can now start creating your Google Cloud Platform instance. Populate environment variables for when you create the instance. name=my-rhelai-instance zone=us-central1-a machine_type=a3-highgpu-8g accelerator="type=nvidia-h100-80gb,count=8" image=my-custom-rhelai-image disk_size=1024 subnet=default Configure the zone to be used. USD gcloud config set compute/zone USDzone You can now launch your instance by running the following command: USD gcloud compute instances create \ USD{name} \ --machine-type USD{machine_type} \ --image USDimage \ --zone USDzone \ --subnet USDsubnet \ --boot-disk-size USD{disk_size} \ --boot-disk-device-name USD{name} \ --accelerator=USDaccelerator Verification To verify that your Red Hat Enterprise Linux AI tools are installed correctly, run the ilab command: USD ilab Example output USD ilab Usage: ilab [OPTIONS] COMMAND [ARGS]... CLI for interacting with InstructLab. If this is your first time running ilab, it's best to start with `ilab config init` to create the environment. Options: --config PATH Path to a configuration file. [default: /home/auser/.config/instructlab/config.yaml] -v, --verbose Enable debug logging (repeat for even more verbosity) --version Show the version and exit. --help Show this message and exit. Commands: config Command Group for Interacting with the Config of InstructLab. data Command Group for Interacting with the Data generated by... model Command Group for Interacting with the Models in InstructLab. system Command group for all system-related command calls taxonomy Command Group for Interacting with the Taxonomy of InstructLab. Aliases: chat model chat convert model convert diff taxonomy diff download model download evaluate model evaluate generate data generate init config init list model list serve model serve sysinfo system info test model test train model train
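The verification command assumes a shell on the new instance; if you have not connected yet, the gcloud CLI can report the instance state and open an SSH session for you. For example, replacing <instance-name> and <zone> with the values used above ( my-rhelai-instance and us-central1-a in this example), and assuming your SSH key or OS Login access is already configured:
gcloud compute instances describe <instance-name> --zone <zone> --format="value(status)"
gcloud compute ssh <instance-name> --zone <zone>
The first command should report RUNNING before you attempt to connect.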
[ "gcloud auth login", "gcloud auth login Your browser has been opened to visit: https://accounts.google.com/o/oauth2/auth?XXXXXXXXXXXXXXXXXXXX You are now logged in as [[email protected]]. Your current project is [your-project]. You can change this setting by running: USD gcloud config set project PROJECT_ID", "gcloud_project=your-gcloud-project gcloud config set project USDgcloud_project", "gcloud_region=us-central1", "gcloud_bucket=name-for-your-bucket gsutil mb -l USDgcloud_region gs://USDgcloud_bucket", "FROM registry.redhat.io/rhelai1/bootc-nvidia-rhel9:1.2 RUN eval USD(grep VERSION_ID /etc/os-release) && echo -e \"[google-compute-engine]\\nname=Google Compute Engine\\nbaseurl=https://packages.cloud.google.com/yum/repos/google-compute-engine-elUSD{VERSION_ID/.*}-x86_64-stable\\nenabled=1\\ngpgcheck=1\\nrepo_gpgcheck=0\\ngpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg\\n https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg\" > /etc/yum.repos.d/google-cloud.repo && dnf install -y --nobest acpid cloud-init google-compute-engine google-osconfig-agent langpacks-en rng-tools timedatex tuned vim && curl -sSo /tmp/add-google-cloud-ops-agent-repo.sh https://dl.google.com/cloudagents/add-google-cloud-ops-agent-repo.sh && bash /tmp/add-google-cloud-ops-agent-repo.sh --also-install --remove-repo && rm /tmp/add-google-cloud-ops-agent-repo.sh && mkdir -p /var/lib/rpm-state && dnf remove -y irqbalance microcode_ctl && rmdir /var/lib/rpm-state && rm -f /etc/yum.repos.d/google-cloud.repo && sed -i -e '/^pool /c\\server metadata.google.internal iburst' /etc/chrony.conf && echo -e 'PermitRootLogin no\\nPasswordAuthentication no\\nClientAliveInterval 420' >> /etc/ssh/sshd_config && echo -e '[InstanceSetup]\\nset_boto_config = false' > /etc/default/instance_configs.cfg && echo 'blacklist floppy' > /etc/modprobe.d/blacklist_floppy.conf && echo -e '[install]\\nkargs = [\"net.ifnames=0\", \"biosdevname=0\", \"scsi_mod.use_blk_mq=Y\", \"console=ttyS0,38400n8d\", \"cloud-init=disabled\"]' > /usr/lib/bootc/install/05-cloud-kargs.toml", "GCP_BOOTC_IMAGE=quay.io/yourquayusername/bootc-nvidia-rhel9-gcp podman build --file Containerfile --tag USD{GCP_BOOTC_IMAGE} .", "[customizations.kernel] name = \"gcp\" append = \"net.ifnames=0 biosdevname=0 scsi_mod.use_blk_mq=Y console=ttyS0,38400n8d cloud-init=disabled\"", "mkdir -p build/store build/output podman run --rm -ti --privileged --pull newer -v /var/lib/containers/storage:/var/lib/containers/storage -v ./build/store:/store -v ./build/output:/output -v ./config.toml:/config.toml quay.io/centos-bootc/bootc-image-builder --config /config.toml --chown 0:0 --local --type raw --target-arch x86_64 USD{GCP_BOOTC_IMAGE}", "image_name=rhel-ai-1-2", "raw_file=<path-to-raw-file> tar cf rhelai_gcp.tar.gz --transform \"s|USDraw_file|disk.raw|\" --use-compress-program=pigz \"USDraw_file\"", "gsutil cp rhelai_gcp.tar.gz \"gs://USD{gcloud_bucket}/USDimage_name.tar.gz\"", "gcloud compute images create \"USDimage_name\" --source-uri=\"gs://USD{gcloud_bucket}/USDimage_name.tar.gz\" --family \"rhel-ai\" --guest-os-features=GVNIC", "gcloud auth login", "gcloud compute machine-types list --zones=<zone>", "name=my-rhelai-instance zone=us-central1-a machine_type=a3-highgpu-8g accelerator=\"type=nvidia-h100-80gb,count=8\" image=my-custom-rhelai-image disk_size=1024 subnet=default", "gcloud config set compute/zone USDzone", "gcloud compute instances create USD{name} --machine-type USD{machine_type} --image USDimage --zone USDzone --subnet USDsubnet --boot-disk-size 
USD{disk_size} --boot-disk-device-name USD{name} --accelerator=USDaccelerator", "ilab", "ilab Usage: ilab [OPTIONS] COMMAND [ARGS] CLI for interacting with InstructLab. If this is your first time running ilab, it's best to start with `ilab config init` to create the environment. Options: --config PATH Path to a configuration file. [default: /home/auser/.config/instructlab/config.yaml] -v, --verbose Enable debug logging (repeat for even more verbosity) --version Show the version and exit. --help Show this message and exit. Commands: config Command Group for Interacting with the Config of InstructLab. data Command Group for Interacting with the Data generated by model Command Group for Interacting with the Models in InstructLab. system Command group for all system-related command calls taxonomy Command Group for Interacting with the Taxonomy of InstructLab. Aliases: chat model chat convert model convert diff taxonomy diff download model download evaluate model evaluate generate data generate init config init list model list serve model serve sysinfo system info test model test train model train" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_ai/1.2/html/installing/installing_gcp
Providing feedback on JBoss EAP documentation
Providing feedback on JBoss EAP documentation To report an error or to improve our documentation, log in to your Red Hat Jira account and submit an issue. If you do not have a Red Hat Jira account, then you will be prompted to create an account. Procedure Click the following link to create a ticket . Enter a brief description of the issue in the Summary . Provide a detailed description of the issue or enhancement in the Description . Include a URL to where the issue occurs in the documentation. Clicking Submit creates and routes the issue to the appropriate documentation team.
null
https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/8.0/html/performance_tuning_for_red_hat_jboss_enterprise_application_platform/proc_providing-feedback-on-red-hat-documentation_performance-tuning-guide
Chapter 10. User-provisioned infrastructure
Chapter 10. User-provisioned infrastructure 10.1. Adding compute machines to clusters with user-provisioned infrastructure You can add compute machines to a cluster on user-provisioned infrastructure either as part of the installation process or after installation. The post-installation process requires some of the same configuration files and parameters that were used during installation. 10.1.1. Adding compute machines to Amazon Web Services To add more compute machines to your OpenShift Container Platform cluster on Amazon Web Services (AWS), see Adding compute machines to AWS by using CloudFormation templates . 10.1.2. Adding compute machines to Microsoft Azure To add more compute machines to your OpenShift Container Platform cluster on Microsoft Azure, see Creating additional worker machines in Azure . 10.1.3. Adding compute machines to Google Cloud Platform To add more compute machines to your OpenShift Container Platform cluster on Google Cloud Platform (GCP), see Creating additional worker machines in GCP . 10.1.4. Adding compute machines to vSphere To add more compute machines to your OpenShift Container Platform cluster on vSphere, see Adding compute machines to vSphere . 10.1.5. Adding compute machines to bare metal To add more compute machines to your OpenShift Container Platform cluster on bare metal, see Adding compute machines to bare metal . 10.2. Adding compute machines to AWS by using CloudFormation templates You can add more compute machines to your OpenShift Container Platform cluster on Amazon Web Services (AWS) that you created by using the sample CloudFormation templates. 10.2.1. Prerequisites You installed your cluster on AWS by using the provided AWS CloudFormation templates . You have the JSON file and CloudFormation template that you used to create the compute machines during cluster installation. If you do not have these files, you must recreate them by following the instructions in the installation procedure . 10.2.2. Adding more compute machines to your AWS cluster by using CloudFormation templates You can add more compute machines to your OpenShift Container Platform cluster on Amazon Web Services (AWS) that you created by using the sample CloudFormation templates. Important The CloudFormation template creates a stack that represents one compute machine. You must create a stack for each compute machine. Note If you do not use the provided CloudFormation template to create your compute nodes, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites You installed an OpenShift Container Platform cluster by using CloudFormation templates and have access to the JSON file and CloudFormation template that you used to create the compute machines during cluster installation. You installed the AWS CLI. Procedure Create another compute stack. Launch the template: USD aws cloudformation create-stack --stack-name <name> \ 1 --template-body file://<template>.yaml \ 2 --parameters file://<parameters>.json 3 1 <name> is the name for the CloudFormation stack, such as cluster-workers . You must provide the name of this stack if you remove the cluster. 2 <template> is the relative path to and name of the CloudFormation template YAML file that you saved. 3 <parameters> is the relative path to and name of the CloudFormation parameters JSON file. 
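Because each stack corresponds to exactly one compute machine, adding several machines means repeating the same call with a unique stack name each time. For example (the stack names here are only illustrative, and <template>.yaml and <parameters>.json are the same placeholder files described above):
aws cloudformation create-stack --stack-name cluster-worker-3 --template-body file://<template>.yaml --parameters file://<parameters>.json
aws cloudformation create-stack --stack-name cluster-worker-4 --template-body file://<template>.yaml --parameters file://<parameters>.json
Keep a record of each stack name, because the stacks must be deleted individually if you later remove the cluster.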
Confirm that the template components exist: USD aws cloudformation describe-stacks --stack-name <name> Continue to create compute stacks until you have created enough compute machines for your cluster. 10.2.3. Approving the certificate signing requests for your machines When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests. Prerequisites You added machines to your cluster. Procedure Confirm that the cluster recognizes the machines: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.20.0 master-1 Ready master 63m v1.20.0 master-2 Ready master 64m v1.20.0 The output lists all of the machines that you created. Note The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved. Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending ... In this example, two machines are joining the cluster. You might see more approved CSRs in the list. If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. Once the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters. Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec , oc rsh , and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note Some Operators might not become available until some CSRs are approved. 
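After the client CSRs are approved, the kubelet on each new node requests a serving certificate, and it can take a few moments for those requests to appear. Rather than rerunning oc get csr by hand, you can optionally poll for new pending requests, for example:
watch -n 30 "oc get csr | grep -w Pending"
Press Ctrl+C to stop watching once the serving CSRs show up, then continue with the approval steps that follow.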
Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ... If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines: To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.20.0 master-1 Ready master 73m v1.20.0 master-2 Ready master 74m v1.20.0 worker-0 Ready worker 11m v1.20.0 worker-1 Ready worker 11m v1.20.0 Note It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status. Additional information For more information on CSRs, see Certificate Signing Requests . 10.3. Adding compute machines to vSphere You can add more compute machines to your OpenShift Container Platform cluster on VMware vSphere. 10.3.1. Prerequisites You installed a cluster on vSphere . You have installation media and Red Hat Enterprise Linux CoreOS (RHCOS) images that you used to create your cluster. If you do not have these files, you must obtain them by following the instructions in the installation procedure . Important If you do not have access to the Red Hat Enterprise Linux CoreOS (RHCOS) images that were used to create your cluster, you can add more compute machines to your OpenShift Container Platform cluster with newer versions of Red Hat Enterprise Linux CoreOS (RHCOS) images. For instructions, see Adding new nodes to UPI cluster fails after upgrading to OpenShift 4.6+ . 10.3.2. Creating more Red Hat Enterprise Linux CoreOS (RHCOS) machines in vSphere You can create more compute machines for your cluster that uses user-provisioned infrastructure on VMware vSphere. Prerequisites Obtain the base64-encoded Ignition file for your compute machines. You have access to the vSphere template that you created for your cluster. Procedure After the template deploys, deploy a VM for a machine in the cluster. Right-click the template's name and click Clone Clone to Virtual Machine . On the Select a name and folder tab, specify a name for the VM. You might include the machine type in the name, such as compute-1 . On the Select a name and folder tab, select the name of the folder that you created for the cluster. On the Select a compute resource tab, select the name of a host in your datacenter. Optional: On the Select storage tab, customize the storage options. On the Select clone options , select Customize this virtual machine's hardware . On the Customize hardware tab, click VM Options Advanced . From the Latency Sensitivity list, select High . Click Edit Configuration , and on the Configuration Parameters window, click Add Configuration Params . 
Define the following parameter names and values: guestinfo.ignition.config.data : Paste the contents of the base64-encoded compute Ignition config file for this machine type. guestinfo.ignition.config.data.encoding : Specify base64 . disk.EnableUUID : Specify TRUE . In the Virtual Hardware panel of the Customize hardware tab, modify the specified values as required. Ensure that the amount of RAM, CPU, and disk storage meets the minimum requirements for the machine type. Also, make sure to select the correct network under Add network adapter if there are multiple networks available. Complete the configuration and power on the VM. Continue to create more compute machines for your cluster. 10.3.3. Approving the certificate signing requests for your machines When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests. Prerequisites You added machines to your cluster. Procedure Confirm that the cluster recognizes the machines: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.20.0 master-1 Ready master 63m v1.20.0 master-2 Ready master 64m v1.20.0 The output lists all of the machines that you created. Note The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved. Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending ... In this example, two machines are joining the cluster. You might see more approved CSRs in the list. If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. Once the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters. Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec , oc rsh , and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. 
To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note Some Operators might not become available until some CSRs are approved. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ... If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines: To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.20.0 master-1 Ready master 73m v1.20.0 master-2 Ready master 74m v1.20.0 worker-0 Ready worker 11m v1.20.0 worker-1 Ready worker 11m v1.20.0 Note It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status. Additional information For more information on CSRs, see Certificate Signing Requests . 10.4. Adding compute machines to bare metal You can add more compute machines to your OpenShift Container Platform cluster on bare metal. 10.4.1. Prerequisites You installed a cluster on bare metal . You have installation media and Red Hat Enterprise Linux CoreOS (RHCOS) images that you used to create your cluster. If you do not have these files, you must obtain them by following the instructions in the installation procedure . Important If you do not have access to the Red Hat Enterprise Linux CoreOS (RHCOS) images that were used to create your cluster, you can add more compute machines to your OpenShift Container Platform cluster with newer versions of Red Hat Enterprise Linux CoreOS (RHCOS) images. For instructions, see Adding new nodes to UPI cluster fails after upgrading to OpenShift 4.6+ . 10.4.2. Creating Red Hat Enterprise Linux CoreOS (RHCOS) machines Before you add more compute machines to a cluster that you installed on bare metal infrastructure, you must create RHCOS machines for it to use. You can either use an ISO image or network PXE booting to create the machines. Note You must use the same ISO image that you used to install a cluster to deploy all new nodes in a cluster. It is recommended to use the same Ignition config file. The nodes automatically upgrade themselves on the first boot before running the workloads. You can add the nodes before or after the upgrade. 10.4.2.1. Creating more RHCOS machines using an ISO image You can create more Red Hat Enterprise Linux CoreOS (RHCOS) compute machines for your bare metal cluster by using an ISO image to create the machines. 
Prerequisites Obtain the URL of the Ignition config file for the compute machines for your cluster. You uploaded this file to your HTTP server during installation. Procedure Use the ISO file to install RHCOS on more compute machines. Use the same method that you used when you created machines before you installed the cluster: Burn the ISO image to a disk and boot it directly. Use ISO redirection with a LOM interface. After the instance boots, press the TAB or E key to edit the kernel command line. Add the parameters to the kernel command line: coreos.inst.install_dev=sda 1 coreos.inst.ignition_url=http://example.com/worker.ign 2 1 Specify the block device of the system to install to. 2 Specify the URL of the compute Ignition config file. Only HTTP and HTTPS protocols are supported. Press Enter to complete the installation. After RHCOS installs, the system reboots. After the system reboots, it applies the Ignition config file that you specified. Continue to create more compute machines for your cluster. 10.4.2.2. Creating more RHCOS machines by PXE or iPXE booting You can create more Red Hat Enterprise Linux CoreOS (RHCOS) compute machines for your bare metal cluster by using PXE or iPXE booting. Prerequisites Obtain the URL of the Ignition config file for the compute machines for your cluster. You uploaded this file to your HTTP server during installation. Obtain the URLs of the RHCOS ISO image, compressed metal BIOS, kernel , and initramfs files that you uploaded to your HTTP server during cluster installation. You have access to the PXE booting infrastructure that you used to create the machines for your OpenShift Container Platform cluster during installation. The machines must boot from their local disks after RHCOS is installed on them. If you use UEFI, you have access to the grub.conf file that you modified during OpenShift Container Platform installation. Procedure Confirm that your PXE or iPXE installation for the RHCOS images is correct. For PXE: 1 Specify the location of the live kernel file that you uploaded to your HTTP server. 2 Specify locations of the RHCOS files that you uploaded to your HTTP server. The initrd parameter value is the location of the live initramfs file, the coreos.inst.ignition_url parameter value is the location of the worker Ignition config file, and the coreos.live.rootfs_url parameter value is the location of the live rootfs file. The coreos.inst.ignition_url and coreos.live.rootfs_url parameters only support HTTP and HTTPS. This configuration does not enable serial console access on machines with a graphical console. To configure a different console, add one or more console= arguments to the APPEND line. For example, add console=tty0 console=ttyS0 to set the first PC serial port as the primary console and the graphical console as a secondary console. For more information, see How does one set up a serial terminal and/or console in Red Hat Enterprise Linux? . For iPXE: 1 Specify locations of the RHCOS files that you uploaded to your HTTP server. The kernel parameter value is the location of the kernel file, the initrd=main argument is needed for booting on UEFI systems, the coreos.inst.ignition_url parameter value is the location of the worker Ignition config file, and the coreos.live.rootfs_url parameter value is the location of the live rootfs file. The coreos.inst.ignition_url and coreos.live.rootfs_url parameters only support HTTP and HTTPS. 2 Specify the location of the initramfs file that you uploaded to your HTTP server. 
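For reference, an iPXE entry of the general shape that these options describe looks like the following; every URL and file name here is a placeholder for the artifacts you uploaded to your HTTP server, and /dev/sda stands in for the target installation disk:
kernel http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> initrd=main coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/worker.ign coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img
initrd --name main http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img
boot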
This configuration does not enable serial console access on machines with a graphical console. To configure a different console, add one or more console= arguments to the kernel line. For example, add console=tty0 console=ttyS0 to set the first PC serial port as the primary console and the graphical console as a secondary console. For more information, see How does one set up a serial terminal and/or console in Red Hat Enterprise Linux? . Use the PXE or iPXE infrastructure to create the required compute machines for your cluster. 10.4.3. Approving the certificate signing requests for your machines When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests. Prerequisites You added machines to your cluster. Procedure Confirm that the cluster recognizes the machines: $ oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.20.0 master-1 Ready master 63m v1.20.0 master-2 Ready master 64m v1.20.0 The output lists all of the machines that you created. Note The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved. Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster: $ oc get csr Example output NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending ... In this example, two machines are joining the cluster. You might see more approved CSRs in the list. If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. Once the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters. Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec , oc rsh , and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. A minimal sketch of such an approval loop is shown at the end of this section. To approve them individually, run the following command for each valid CSR: $ oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs.
To approve all pending CSRs, run the following command: $ oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note Some Operators might not become available until some CSRs are approved. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster: $ oc get csr Example output NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ... If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines: To approve them individually, run the following command for each valid CSR: $ oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: $ oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command: $ oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.20.0 master-1 Ready master 73m v1.20.0 master-2 Ready master 74m v1.20.0 worker-0 Ready worker 11m v1.20.0 worker-1 Ready worker 11m v1.20.0 Note It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status. Additional information For more information on CSRs, see Certificate Signing Requests .
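For user-provisioned infrastructure such as bare metal, the requirement above to automatically approve kubelet serving certificate requests can be met in many ways. The following is only a minimal sketch that reuses the oc commands from this section in a loop; it assumes that oc is logged in with sufficient privileges, it approves every pending CSR without verifying the requestor or the node identity, and the 60-second interval is an arbitrary choice, so add those checks and tune the interval before any real use:
#!/usr/bin/env bash
# Minimal sketch: approve every pending CSR on a fixed interval.
# WARNING: this does not confirm that a CSR was submitted by the expected
# requestor or node; add those checks outside of a lab environment.
while true; do
  oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' \
    | xargs --no-run-if-empty oc adm certificate approve
  sleep 60
done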
[ "aws cloudformation create-stack --stack-name <name> \\ 1 --template-body file://<template>.yaml \\ 2 --parameters file://<parameters>.json 3", "aws cloudformation describe-stacks --stack-name <name>", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.20.0 master-1 Ready master 63m v1.20.0 master-2 Ready master 64m v1.20.0", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.20.0 master-1 Ready master 73m v1.20.0 master-2 Ready master 74m v1.20.0 worker-0 Ready worker 11m v1.20.0 worker-1 Ready worker 11m v1.20.0", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.20.0 master-1 Ready master 63m v1.20.0 master-2 Ready master 64m v1.20.0", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.20.0 master-1 Ready master 73m v1.20.0 master-2 Ready master 74m v1.20.0 worker-0 Ready worker 11m v1.20.0 worker-1 Ready worker 11m v1.20.0", "coreos.inst.install_dev=sda 1 coreos.inst.ignition_url=http://example.com/worker.ign 2", "DEFAULT pxeboot TIMEOUT 20 PROMPT 0 LABEL pxeboot KERNEL http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> 1 APPEND initrd=http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/worker.ign coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img 2", "kernel http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> initrd=main coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/worker.ign coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img 1 initrd --name main http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img 2", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.20.0 master-1 Ready 
master 63m v1.20.0 master-2 Ready master 64m v1.20.0", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.20.0 master-1 Ready master 73m v1.20.0 master-2 Ready master 74m v1.20.0 worker-0 Ready worker 11m v1.20.0 worker-1 Ready worker 11m v1.20.0" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/machine_management/user-provisioned-infrastructure-2
Chapter 52. RoleService
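The sections below are generated API reference material for the RoleService endpoints. As a quick orientation, a request to the ComputeEffectiveAccessScope endpoint described in 52.1 might look like the following sketch; the Central address, API token, and cluster and namespace names are placeholders, and token-based authentication over HTTPS is assumed:
# Hypothetical values: replace <central_address> and <api_token>, and the cluster/namespace names.
curl -k -X POST \
  -H "Authorization: Bearer <api_token>" \
  -H "Content-Type: application/json" \
  -d '{"simpleRules": {"includedClusters": ["production"], "includedNamespaces": [{"clusterName": "production", "namespaceName": "payments"}]}}' \
  "https://<central_address>/v1/computeeffectiveaccessscope?detail=STANDARD"
The request body follows the ComputeEffectiveAccessScopeRequestPayload and SimpleAccessScopeRules definitions in 52.1.7, and detail=STANDARD matches the documented default for the detail query parameter.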
Chapter 52. RoleService 52.1. ComputeEffectiveAccessScope POST /v1/computeeffectiveaccessscope ComputeEffectiveAccessScope 52.1.1. Description Returns the effective access scope based on the rules in the request. Does not persist anything; not idempotent due to possible changes to clusters and namespaces. POST is chosen due to the potentially large payload. There are advantages both to keeping the response slim and to making it detailed. If only the IDs of selected clusters and namespaces are included, response latency and processing time are lower, but the caller must overlay the response with its own view of the world, which is susceptible to consistency issues. Listing all clusters and namespaces with related metadata is convenient for the caller but bloats the message with secondary data. We let the caller decide what level of detail they would like to have: - Minimal, when only the roots of included subtrees are listed by their IDs. Clusters can be either INCLUDED (their namespaces are included but not listed) or PARTIAL (at least one namespace is explicitly included). Namespaces can only be INCLUDED. - Standard [default], when all known clusters and namespaces are listed with their IDs and names. Clusters can be INCLUDED (all of their namespaces are explicitly listed as INCLUDED), PARTIAL (all of their namespaces are explicitly listed, some as INCLUDED and some as EXCLUDED), or EXCLUDED (all of their namespaces are explicitly listed as EXCLUDED). Namespaces can be either INCLUDED or EXCLUDED. - High, when every cluster and namespace is augmented with metadata. 52.1.2. Parameters 52.1.2.1. Body Parameter Name Description Required Default Pattern body ComputeEffectiveAccessScopeRequestPayload X 52.1.2.2. Query Parameters Name Description Required Default Pattern detail - STANDARD 52.1.3. Return Type StorageEffectiveAccessScope 52.1.4. Content Type application/json 52.1.5. Responses Table 52.1. HTTP Response Codes Code Message Datatype 200 A successful response. StorageEffectiveAccessScope 0 An unexpected error response. RuntimeError 52.1.6. Samples 52.1.7. Common object reference 52.1.7.1. ComputeEffectiveAccessScopeRequestPayload Field Name Required Nullable Type Description Format simpleRules SimpleAccessScopeRules 52.1.7.2. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 52.1.7.2.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format typeUrl String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message.
This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. value byte[] Must be a valid serialized protocol buffer of the above specified type. byte 52.1.7.3. RuntimeError Field Name Required Nullable Type Description Format error String code Integer int32 message String details List of ProtobufAny 52.1.7.4. SimpleAccessScopeRules Each element of any repeated field is an individual rule. Rules are joined by logical OR: if there exists a rule allowing resource x , x is in the access scope. Field Name Required Nullable Type Description Format includedClusters List of string includedNamespaces List of SimpleAccessScopeRulesNamespace clusterLabelSelectors List of StorageSetBasedLabelSelector namespaceLabelSelectors List of StorageSetBasedLabelSelector 52.1.7.5. SimpleAccessScopeRulesNamespace Field Name Required Nullable Type Description Format clusterName String Both fields must be set. namespaceName String 52.1.7.6. StorageEffectiveAccessScope EffectiveAccessScope describes which clusters and namespaces are "in scope" given current state. Basically, if AccessScope is applied to the currently known clusters and namespaces, the result is EffectiveAccessScope. EffectiveAccessScope represents a tree with nodes marked as included and excluded. If a node is included, all its child nodes are included. Field Name Required Nullable Type Description Format clusters List of StorageEffectiveAccessScopeCluster 52.1.7.7. StorageEffectiveAccessScopeCluster Field Name Required Nullable Type Description Format id String name String state StorageEffectiveAccessScopeState UNKNOWN, INCLUDED, EXCLUDED, PARTIAL, labels Map of string namespaces List of StorageEffectiveAccessScopeNamespace 52.1.7.8. StorageEffectiveAccessScopeNamespace Field Name Required Nullable Type Description Format id String name String state StorageEffectiveAccessScopeState UNKNOWN, INCLUDED, EXCLUDED, PARTIAL, labels Map of string 52.1.7.9. StorageEffectiveAccessScopeState Enum Values UNKNOWN INCLUDED EXCLUDED PARTIAL 52.1.7.10. StorageSetBasedLabelSelector SetBasedLabelSelector only allows set-based label requirements. available tag: 3 Field Name Required Nullable Type Description Format requirements List of StorageSetBasedLabelSelectorRequirement 52.1.7.11. 
StorageSetBasedLabelSelectorOperator Enum Values UNKNOWN IN NOT_IN EXISTS NOT_EXISTS 52.1.7.12. StorageSetBasedLabelSelectorRequirement Field Name Required Nullable Type Description Format key String op StorageSetBasedLabelSelectorOperator UNKNOWN, IN, NOT_IN, EXISTS, NOT_EXISTS, values List of string 52.2. GetMyPermissions GET /v1/mypermissions 52.2.1. Description 52.2.2. Parameters 52.2.3. Return Type V1GetPermissionsResponse 52.2.4. Content Type application/json 52.2.5. Responses Table 52.2. HTTP Response Codes Code Message Datatype 200 A successful response. V1GetPermissionsResponse 0 An unexpected error response. RuntimeError 52.2.6. Samples 52.2.7. Common object reference 52.2.7.1. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 52.2.7.1.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format typeUrl String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. value byte[] Must be a valid serialized protocol buffer of the above specified type. byte 52.2.7.2. RuntimeError Field Name Required Nullable Type Description Format error String code Integer int32 message String details List of ProtobufAny 52.2.7.3. 
StorageAccess Enum Values NO_ACCESS READ_ACCESS READ_WRITE_ACCESS 52.2.7.4. V1GetPermissionsResponse GetPermissionsResponse is wire-compatible with the old format of the Role message and represents a collection of aggregated permissions. Field Name Required Nullable Type Description Format resourceToAccess Map of StorageAccess 52.3. ListPermissionSets GET /v1/permissionsets 52.3.1. Description 52.3.2. Parameters 52.3.3. Return Type V1ListPermissionSetsResponse 52.3.4. Content Type application/json 52.3.5. Responses Table 52.3. HTTP Response Codes Code Message Datatype 200 A successful response. V1ListPermissionSetsResponse 0 An unexpected error response. RuntimeError 52.3.6. Samples 52.3.7. Common object reference 52.3.7.1. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 52.3.7.1.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format typeUrl String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. value byte[] Must be a valid serialized protocol buffer of the above specified type. byte 52.3.7.2. 
RuntimeError Field Name Required Nullable Type Description Format error String code Integer int32 message String details List of ProtobufAny 52.3.7.3. StorageAccess Enum Values NO_ACCESS READ_ACCESS READ_WRITE_ACCESS 52.3.7.4. StoragePermissionSet This encodes a set of permissions for StackRox resources. Field Name Required Nullable Type Description Format id String id is generated and cannot be changed. name String name and description are provided by the user and can be changed. description String resourceToAccess Map of StorageAccess traits StorageTraits 52.3.7.5. StorageTraits Field Name Required Nullable Type Description Format mutabilityMode TraitsMutabilityMode ALLOW_MUTATE, ALLOW_MUTATE_FORCED, visibility TraitsVisibility VISIBLE, HIDDEN, origin TraitsOrigin IMPERATIVE, DEFAULT, DECLARATIVE, DECLARATIVE_ORPHANED, 52.3.7.6. TraitsMutabilityMode EXPERIMENTAL. NOTE: Please refer from using MutabilityMode for the time being. It will be replaced in the future (ROX-14276). MutabilityMode specifies whether and how an object can be modified. Default is ALLOW_MUTATE and means there are no modification restrictions; this is equivalent to the absence of MutabilityMode specification. ALLOW_MUTATE_FORCED forbids all modifying operations except object removal with force bit on. Be careful when changing the state of this field. For example, modifying an object from ALLOW_MUTATE to ALLOW_MUTATE_FORCED is allowed but will prohibit any further changes to it, including modifying it back to ALLOW_MUTATE. Enum Values ALLOW_MUTATE ALLOW_MUTATE_FORCED 52.3.7.7. TraitsOrigin Origin specifies the origin of an object. Objects can have four different origins: - IMPERATIVE: the object was created via the API. This is assumed by default. - DEFAULT: the object is a default object, such as default roles, access scopes etc. - DECLARATIVE: the object is created via declarative configuration. - DECLARATIVE_ORPHANED: the object is created via declarative configuration and then unsuccessfully deleted(for example, because it is referenced by another object) Based on the origin, different rules apply to the objects. Objects with the DECLARATIVE origin are not allowed to be modified via API, only via declarative configuration. Additionally, they may not reference objects with the IMPERATIVE origin. Objects with the DEFAULT origin are not allowed to be modified via either API or declarative configuration. They may be referenced by all other objects. Objects with the IMPERATIVE origin are allowed to be modified via API, not via declarative configuration. They may reference all other objects. Objects with the DECLARATIVE_ORPHANED origin are not allowed to be modified via either API or declarative configuration. DECLARATIVE_ORPHANED resource can become DECLARATIVE again if it is redefined in declarative configuration. Objects with this origin will be cleaned up from the system immediately after they are not referenced by other resources anymore. They may be referenced by all other objects. Enum Values IMPERATIVE DEFAULT DECLARATIVE DECLARATIVE_ORPHANED 52.3.7.8. TraitsVisibility EXPERIMENTAL. visibility allows to specify whether the object should be visible for certain APIs. Enum Values VISIBLE HIDDEN 52.3.7.9. V1ListPermissionSetsResponse Field Name Required Nullable Type Description Format permissionSets List of StoragePermissionSet 52.4. DeletePermissionSet DELETE /v1/permissionsets/{id} 52.4.1. Description 52.4.2. Parameters 52.4.2.1. Path Parameters Name Description Required Default Pattern id X null 52.4.3. 
Return Type Object 52.4.4. Content Type application/json 52.4.5. Responses Table 52.4. HTTP Response Codes Code Message Datatype 200 A successful response. Object 0 An unexpected error response. RuntimeError 52.4.6. Samples 52.4.7. Common object reference 52.4.7.1. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 52.4.7.1.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format typeUrl String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. value byte[] Must be a valid serialized protocol buffer of the above specified type. byte 52.4.7.2. RuntimeError Field Name Required Nullable Type Description Format error String code Integer int32 message String details List of ProtobufAny 52.5. GetPermissionSet GET /v1/permissionsets/{id} 52.5.1. Description 52.5.2. Parameters 52.5.2.1. Path Parameters Name Description Required Default Pattern id X null 52.5.3. Return Type StoragePermissionSet 52.5.4. Content Type application/json 52.5.5. Responses Table 52.5. HTTP Response Codes Code Message Datatype 200 A successful response. StoragePermissionSet 0 An unexpected error response. RuntimeError 52.5.6. Samples 52.5.7. 
Common object reference 52.5.7.1. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 52.5.7.1.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format typeUrl String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. value byte[] Must be a valid serialized protocol buffer of the above specified type. byte 52.5.7.2. RuntimeError Field Name Required Nullable Type Description Format error String code Integer int32 message String details List of ProtobufAny 52.5.7.3. StorageAccess Enum Values NO_ACCESS READ_ACCESS READ_WRITE_ACCESS 52.5.7.4. StoragePermissionSet This encodes a set of permissions for StackRox resources. Field Name Required Nullable Type Description Format id String id is generated and cannot be changed. name String name and description are provided by the user and can be changed. description String resourceToAccess Map of StorageAccess traits StorageTraits 52.5.7.5. 
StorageTraits Field Name Required Nullable Type Description Format mutabilityMode TraitsMutabilityMode ALLOW_MUTATE, ALLOW_MUTATE_FORCED, visibility TraitsVisibility VISIBLE, HIDDEN, origin TraitsOrigin IMPERATIVE, DEFAULT, DECLARATIVE, DECLARATIVE_ORPHANED, 52.5.7.6. TraitsMutabilityMode EXPERIMENTAL. NOTE: Please refer from using MutabilityMode for the time being. It will be replaced in the future (ROX-14276). MutabilityMode specifies whether and how an object can be modified. Default is ALLOW_MUTATE and means there are no modification restrictions; this is equivalent to the absence of MutabilityMode specification. ALLOW_MUTATE_FORCED forbids all modifying operations except object removal with force bit on. Be careful when changing the state of this field. For example, modifying an object from ALLOW_MUTATE to ALLOW_MUTATE_FORCED is allowed but will prohibit any further changes to it, including modifying it back to ALLOW_MUTATE. Enum Values ALLOW_MUTATE ALLOW_MUTATE_FORCED 52.5.7.7. TraitsOrigin Origin specifies the origin of an object. Objects can have four different origins: - IMPERATIVE: the object was created via the API. This is assumed by default. - DEFAULT: the object is a default object, such as default roles, access scopes etc. - DECLARATIVE: the object is created via declarative configuration. - DECLARATIVE_ORPHANED: the object is created via declarative configuration and then unsuccessfully deleted(for example, because it is referenced by another object) Based on the origin, different rules apply to the objects. Objects with the DECLARATIVE origin are not allowed to be modified via API, only via declarative configuration. Additionally, they may not reference objects with the IMPERATIVE origin. Objects with the DEFAULT origin are not allowed to be modified via either API or declarative configuration. They may be referenced by all other objects. Objects with the IMPERATIVE origin are allowed to be modified via API, not via declarative configuration. They may reference all other objects. Objects with the DECLARATIVE_ORPHANED origin are not allowed to be modified via either API or declarative configuration. DECLARATIVE_ORPHANED resource can become DECLARATIVE again if it is redefined in declarative configuration. Objects with this origin will be cleaned up from the system immediately after they are not referenced by other resources anymore. They may be referenced by all other objects. Enum Values IMPERATIVE DEFAULT DECLARATIVE DECLARATIVE_ORPHANED 52.5.7.8. TraitsVisibility EXPERIMENTAL. visibility allows to specify whether the object should be visible for certain APIs. Enum Values VISIBLE HIDDEN 52.6. PutPermissionSet PUT /v1/permissionsets/{id} 52.6.1. Description 52.6.2. Parameters 52.6.2.1. Path Parameters Name Description Required Default Pattern id id is generated and cannot be changed. X null 52.6.2.2. Body Parameter Name Description Required Default Pattern body StoragePermissionSet X 52.6.3. Return Type Object 52.6.4. Content Type application/json 52.6.5. Responses Table 52.6. HTTP Response Codes Code Message Datatype 200 A successful response. Object 0 An unexpected error response. RuntimeError 52.6.6. Samples 52.6.7. Common object reference 52.6.7.1. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. 
Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 52.6.7.1.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format typeUrl String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. value byte[] Must be a valid serialized protocol buffer of the above specified type. byte 52.6.7.2. RuntimeError Field Name Required Nullable Type Description Format error String code Integer int32 message String details List of ProtobufAny 52.6.7.3. StorageAccess Enum Values NO_ACCESS READ_ACCESS READ_WRITE_ACCESS 52.6.7.4. StoragePermissionSet This encodes a set of permissions for StackRox resources. Field Name Required Nullable Type Description Format id String id is generated and cannot be changed. name String name and description are provided by the user and can be changed. description String resourceToAccess Map of StorageAccess traits StorageTraits 52.6.7.5. StorageTraits Field Name Required Nullable Type Description Format mutabilityMode TraitsMutabilityMode ALLOW_MUTATE, ALLOW_MUTATE_FORCED, visibility TraitsVisibility VISIBLE, HIDDEN, origin TraitsOrigin IMPERATIVE, DEFAULT, DECLARATIVE, DECLARATIVE_ORPHANED, 52.6.7.6. TraitsMutabilityMode EXPERIMENTAL. NOTE: Please refer from using MutabilityMode for the time being. It will be replaced in the future (ROX-14276). MutabilityMode specifies whether and how an object can be modified. 
Default is ALLOW_MUTATE and means there are no modification restrictions; this is equivalent to the absence of MutabilityMode specification. ALLOW_MUTATE_FORCED forbids all modifying operations except object removal with force bit on. Be careful when changing the state of this field. For example, modifying an object from ALLOW_MUTATE to ALLOW_MUTATE_FORCED is allowed but will prohibit any further changes to it, including modifying it back to ALLOW_MUTATE. Enum Values ALLOW_MUTATE ALLOW_MUTATE_FORCED 52.6.7.7. TraitsOrigin Origin specifies the origin of an object. Objects can have four different origins: - IMPERATIVE: the object was created via the API. This is assumed by default. - DEFAULT: the object is a default object, such as default roles, access scopes etc. - DECLARATIVE: the object is created via declarative configuration. - DECLARATIVE_ORPHANED: the object is created via declarative configuration and then unsuccessfully deleted(for example, because it is referenced by another object) Based on the origin, different rules apply to the objects. Objects with the DECLARATIVE origin are not allowed to be modified via API, only via declarative configuration. Additionally, they may not reference objects with the IMPERATIVE origin. Objects with the DEFAULT origin are not allowed to be modified via either API or declarative configuration. They may be referenced by all other objects. Objects with the IMPERATIVE origin are allowed to be modified via API, not via declarative configuration. They may reference all other objects. Objects with the DECLARATIVE_ORPHANED origin are not allowed to be modified via either API or declarative configuration. DECLARATIVE_ORPHANED resource can become DECLARATIVE again if it is redefined in declarative configuration. Objects with this origin will be cleaned up from the system immediately after they are not referenced by other resources anymore. They may be referenced by all other objects. Enum Values IMPERATIVE DEFAULT DECLARATIVE DECLARATIVE_ORPHANED 52.6.7.8. TraitsVisibility EXPERIMENTAL. visibility allows to specify whether the object should be visible for certain APIs. Enum Values VISIBLE HIDDEN 52.7. PostPermissionSet POST /v1/permissionsets PostPermissionSet 52.7.1. Description PermissionSet.id is disallowed in request and set in response. 52.7.2. Parameters 52.7.2.1. Body Parameter Name Description Required Default Pattern body StoragePermissionSet X 52.7.3. Return Type StoragePermissionSet 52.7.4. Content Type application/json 52.7.5. Responses Table 52.7. HTTP Response Codes Code Message Datatype 200 A successful response. StoragePermissionSet 0 An unexpected error response. RuntimeError 52.7.6. Samples 52.7.7. Common object reference 52.7.7.1. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 52.7.7.1.1. 
JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format typeUrl String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. value byte[] Must be a valid serialized protocol buffer of the above specified type. byte 52.7.7.2. RuntimeError Field Name Required Nullable Type Description Format error String code Integer int32 message String details List of ProtobufAny 52.7.7.3. StorageAccess Enum Values NO_ACCESS READ_ACCESS READ_WRITE_ACCESS 52.7.7.4. StoragePermissionSet This encodes a set of permissions for StackRox resources. Field Name Required Nullable Type Description Format id String id is generated and cannot be changed. name String name and description are provided by the user and can be changed. description String resourceToAccess Map of StorageAccess traits StorageTraits 52.7.7.5. StorageTraits Field Name Required Nullable Type Description Format mutabilityMode TraitsMutabilityMode ALLOW_MUTATE, ALLOW_MUTATE_FORCED, visibility TraitsVisibility VISIBLE, HIDDEN, origin TraitsOrigin IMPERATIVE, DEFAULT, DECLARATIVE, DECLARATIVE_ORPHANED, 52.7.7.6. TraitsMutabilityMode EXPERIMENTAL. NOTE: Please refer from using MutabilityMode for the time being. It will be replaced in the future (ROX-14276). MutabilityMode specifies whether and how an object can be modified. Default is ALLOW_MUTATE and means there are no modification restrictions; this is equivalent to the absence of MutabilityMode specification. ALLOW_MUTATE_FORCED forbids all modifying operations except object removal with force bit on. Be careful when changing the state of this field. For example, modifying an object from ALLOW_MUTATE to ALLOW_MUTATE_FORCED is allowed but will prohibit any further changes to it, including modifying it back to ALLOW_MUTATE. 
Enum Values ALLOW_MUTATE ALLOW_MUTATE_FORCED 52.7.7.7. TraitsOrigin Origin specifies the origin of an object. Objects can have four different origins: - IMPERATIVE: the object was created via the API. This is assumed by default. - DEFAULT: the object is a default object, such as default roles, access scopes etc. - DECLARATIVE: the object is created via declarative configuration. - DECLARATIVE_ORPHANED: the object is created via declarative configuration and then unsuccessfully deleted(for example, because it is referenced by another object) Based on the origin, different rules apply to the objects. Objects with the DECLARATIVE origin are not allowed to be modified via API, only via declarative configuration. Additionally, they may not reference objects with the IMPERATIVE origin. Objects with the DEFAULT origin are not allowed to be modified via either API or declarative configuration. They may be referenced by all other objects. Objects with the IMPERATIVE origin are allowed to be modified via API, not via declarative configuration. They may reference all other objects. Objects with the DECLARATIVE_ORPHANED origin are not allowed to be modified via either API or declarative configuration. DECLARATIVE_ORPHANED resource can become DECLARATIVE again if it is redefined in declarative configuration. Objects with this origin will be cleaned up from the system immediately after they are not referenced by other resources anymore. They may be referenced by all other objects. Enum Values IMPERATIVE DEFAULT DECLARATIVE DECLARATIVE_ORPHANED 52.7.7.8. TraitsVisibility EXPERIMENTAL. visibility allows to specify whether the object should be visible for certain APIs. Enum Values VISIBLE HIDDEN 52.8. GetResources GET /v1/resources 52.8.1. Description 52.8.2. Parameters 52.8.3. Return Type V1GetResourcesResponse 52.8.4. Content Type application/json 52.8.5. Responses Table 52.8. HTTP Response Codes Code Message Datatype 200 A successful response. V1GetResourcesResponse 0 An unexpected error response. RuntimeError 52.8.6. Samples 52.8.7. Common object reference 52.8.7.1. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 52.8.7.1.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format typeUrl String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). 
The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. value byte[] Must be a valid serialized protocol buffer of the above specified type. byte 52.8.7.2. RuntimeError Field Name Required Nullable Type Description Format error String code Integer int32 message String details List of ProtobufAny 52.8.7.3. V1GetResourcesResponse Field Name Required Nullable Type Description Format resources List of string 52.9. GetRoles GET /v1/roles 52.9.1. Description 52.9.2. Parameters 52.9.3. Return Type V1GetRolesResponse 52.9.4. Content Type application/json 52.9.5. Responses Table 52.9. HTTP Response Codes Code Message Datatype 200 A successful response. V1GetRolesResponse 0 An unexpected error response. RuntimeError 52.9.6. Samples 52.9.7. Common object reference 52.9.7.1. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 52.9.7.1.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format typeUrl String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. 
However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. value byte[] Must be a valid serialized protocol buffer of the above specified type. byte 52.9.7.2. RuntimeError Field Name Required Nullable Type Description Format error String code Integer int32 message String details List of ProtobufAny 52.9.7.3. StorageAccess Enum Values NO_ACCESS READ_ACCESS READ_WRITE_ACCESS 52.9.7.4. StorageRole A role specifies which actions are allowed for which subset of cluster objects. Permissions can either be specified directly via setting resource_to_access together with global_access or by referencing a permission set by its id in permission_set_name. Field Name Required Nullable Type Description Format name String name and description are provided by the user and can be changed. description String permissionSetId String The associated PermissionSet and AccessScope for this Role. accessScopeId String globalAccess StorageAccess NO_ACCESS, READ_ACCESS, READ_WRITE_ACCESS, resourceToAccess Map of StorageAccess Deprecated 2021-04-20 in favor of permission_set_id . traits StorageTraits 52.9.7.5. StorageTraits Field Name Required Nullable Type Description Format mutabilityMode TraitsMutabilityMode ALLOW_MUTATE, ALLOW_MUTATE_FORCED, visibility TraitsVisibility VISIBLE, HIDDEN, origin TraitsOrigin IMPERATIVE, DEFAULT, DECLARATIVE, DECLARATIVE_ORPHANED, 52.9.7.6. TraitsMutabilityMode EXPERIMENTAL. NOTE: Please refrain from using MutabilityMode for the time being. It will be replaced in the future (ROX-14276). MutabilityMode specifies whether and how an object can be modified. Default is ALLOW_MUTATE and means there are no modification restrictions; this is equivalent to the absence of MutabilityMode specification. ALLOW_MUTATE_FORCED forbids all modifying operations except object removal with force bit on. Be careful when changing the state of this field. For example, modifying an object from ALLOW_MUTATE to ALLOW_MUTATE_FORCED is allowed but will prohibit any further changes to it, including modifying it back to ALLOW_MUTATE. Enum Values ALLOW_MUTATE ALLOW_MUTATE_FORCED 52.9.7.7. TraitsOrigin Origin specifies the origin of an object. Objects can have four different origins: - IMPERATIVE: the object was created via the API. This is assumed by default. - DEFAULT: the object is a default object, such as default roles, access scopes etc. - DECLARATIVE: the object is created via declarative configuration. - DECLARATIVE_ORPHANED: the object is created via declarative configuration and then unsuccessfully deleted (for example, because it is referenced by another object). Based on the origin, different rules apply to the objects.
Objects with the DECLARATIVE origin are not allowed to be modified via API, only via declarative configuration. Additionally, they may not reference objects with the IMPERATIVE origin. Objects with the DEFAULT origin are not allowed to be modified via either API or declarative configuration. They may be referenced by all other objects. Objects with the IMPERATIVE origin are allowed to be modified via API, not via declarative configuration. They may reference all other objects. Objects with the DECLARATIVE_ORPHANED origin are not allowed to be modified via either API or declarative configuration. DECLARATIVE_ORPHANED resource can become DECLARATIVE again if it is redefined in declarative configuration. Objects with this origin will be cleaned up from the system immediately after they are not referenced by other resources anymore. They may be referenced by all other objects. Enum Values IMPERATIVE DEFAULT DECLARATIVE DECLARATIVE_ORPHANED 52.9.7.8. TraitsVisibility EXPERIMENTAL. visibility allows to specify whether the object should be visible for certain APIs. Enum Values VISIBLE HIDDEN 52.9.7.9. V1GetRolesResponse Field Name Required Nullable Type Description Format roles List of StorageRole 52.10. DeleteRole DELETE /v1/roles/{id} 52.10.1. Description 52.10.2. Parameters 52.10.2.1. Path Parameters Name Description Required Default Pattern id X null 52.10.3. Return Type Object 52.10.4. Content Type application/json 52.10.5. Responses Table 52.10. HTTP Response Codes Code Message Datatype 200 A successful response. Object 0 An unexpected error response. RuntimeError 52.10.6. Samples 52.10.7. Common object reference 52.10.7.1. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 52.10.7.1.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format typeUrl String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. 
* An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. value byte[] Must be a valid serialized protocol buffer of the above specified type. byte 52.10.7.2. RuntimeError Field Name Required Nullable Type Description Format error String code Integer int32 message String details List of ProtobufAny 52.11. GetRole GET /v1/roles/{id} 52.11.1. Description 52.11.2. Parameters 52.11.2.1. Path Parameters Name Description Required Default Pattern id X null 52.11.3. Return Type StorageRole 52.11.4. Content Type application/json 52.11.5. Responses Table 52.11. HTTP Response Codes Code Message Datatype 200 A successful response. StorageRole 0 An unexpected error response. RuntimeError 52.11.6. Samples 52.11.7. Common object reference 52.11.7.1. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 52.11.7.1.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format typeUrl String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. 
Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. value byte[] Must be a valid serialized protocol buffer of the above specified type. byte 52.11.7.2. RuntimeError Field Name Required Nullable Type Description Format error String code Integer int32 message String details List of ProtobufAny 52.11.7.3. StorageAccess Enum Values NO_ACCESS READ_ACCESS READ_WRITE_ACCESS 52.11.7.4. StorageRole A role specifies which actions are allowed for which subset of cluster objects. Permissions be can either specified directly via setting resource_to_access together with global_access or by referencing a permission set by its id in permission_set_name. Field Name Required Nullable Type Description Format name String name and description are provided by the user and can be changed. description String permissionSetId String The associated PermissionSet and AccessScope for this Role. accessScopeId String globalAccess StorageAccess NO_ACCESS, READ_ACCESS, READ_WRITE_ACCESS, resourceToAccess Map of StorageAccess Deprecated 2021-04-20 in favor of permission_set_id . traits StorageTraits 52.11.7.5. StorageTraits Field Name Required Nullable Type Description Format mutabilityMode TraitsMutabilityMode ALLOW_MUTATE, ALLOW_MUTATE_FORCED, visibility TraitsVisibility VISIBLE, HIDDEN, origin TraitsOrigin IMPERATIVE, DEFAULT, DECLARATIVE, DECLARATIVE_ORPHANED, 52.11.7.6. TraitsMutabilityMode EXPERIMENTAL. NOTE: Please refer from using MutabilityMode for the time being. It will be replaced in the future (ROX-14276). MutabilityMode specifies whether and how an object can be modified. Default is ALLOW_MUTATE and means there are no modification restrictions; this is equivalent to the absence of MutabilityMode specification. ALLOW_MUTATE_FORCED forbids all modifying operations except object removal with force bit on. Be careful when changing the state of this field. For example, modifying an object from ALLOW_MUTATE to ALLOW_MUTATE_FORCED is allowed but will prohibit any further changes to it, including modifying it back to ALLOW_MUTATE. Enum Values ALLOW_MUTATE ALLOW_MUTATE_FORCED 52.11.7.7. TraitsOrigin Origin specifies the origin of an object. Objects can have four different origins: - IMPERATIVE: the object was created via the API. This is assumed by default. - DEFAULT: the object is a default object, such as default roles, access scopes etc. - DECLARATIVE: the object is created via declarative configuration. - DECLARATIVE_ORPHANED: the object is created via declarative configuration and then unsuccessfully deleted(for example, because it is referenced by another object) Based on the origin, different rules apply to the objects. Objects with the DECLARATIVE origin are not allowed to be modified via API, only via declarative configuration. Additionally, they may not reference objects with the IMPERATIVE origin. Objects with the DEFAULT origin are not allowed to be modified via either API or declarative configuration. They may be referenced by all other objects. Objects with the IMPERATIVE origin are allowed to be modified via API, not via declarative configuration. 
They may reference all other objects. Objects with the DECLARATIVE_ORPHANED origin are not allowed to be modified via either API or declarative configuration. DECLARATIVE_ORPHANED resource can become DECLARATIVE again if it is redefined in declarative configuration. Objects with this origin will be cleaned up from the system immediately after they are not referenced by other resources anymore. They may be referenced by all other objects. Enum Values IMPERATIVE DEFAULT DECLARATIVE DECLARATIVE_ORPHANED 52.11.7.8. TraitsVisibility EXPERIMENTAL. visibility allows to specify whether the object should be visible for certain APIs. Enum Values VISIBLE HIDDEN 52.12. CreateRole POST /v1/roles/{name} 52.12.1. Description 52.12.2. Parameters 52.12.2.1. Path Parameters Name Description Required Default Pattern name X null 52.12.2.2. Body Parameter Name Description Required Default Pattern body StorageRole X 52.12.3. Return Type Object 52.12.4. Content Type application/json 52.12.5. Responses Table 52.12. HTTP Response Codes Code Message Datatype 200 A successful response. Object 0 An unexpected error response. RuntimeError 52.12.6. Samples 52.12.7. Common object reference 52.12.7.1. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 52.12.7.1.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format typeUrl String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) 
Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http, https (or the empty scheme) might be used with implementation-specific semantics. value byte[] Must be a valid serialized protocol buffer of the above specified type. byte 52.12.7.2. RuntimeError Field Name Required Nullable Type Description Format error String code Integer int32 message String details List of ProtobufAny 52.12.7.3. StorageAccess Enum Values NO_ACCESS READ_ACCESS READ_WRITE_ACCESS 52.12.7.4. StorageRole A role specifies which actions are allowed for which subset of cluster objects. Permissions can either be specified directly, by setting resource_to_access together with global_access, or by referencing a permission set by its id in permission_set_name. Field Name Required Nullable Type Description Format name String name and description are provided by the user and can be changed. description String permissionSetId String The associated PermissionSet and AccessScope for this Role. accessScopeId String globalAccess StorageAccess NO_ACCESS, READ_ACCESS, READ_WRITE_ACCESS, resourceToAccess Map of StorageAccess Deprecated 2021-04-20 in favor of permission_set_id. traits StorageTraits 52.12.7.5. StorageTraits Field Name Required Nullable Type Description Format mutabilityMode TraitsMutabilityMode ALLOW_MUTATE, ALLOW_MUTATE_FORCED, visibility TraitsVisibility VISIBLE, HIDDEN, origin TraitsOrigin IMPERATIVE, DEFAULT, DECLARATIVE, DECLARATIVE_ORPHANED, 52.12.7.6. TraitsMutabilityMode EXPERIMENTAL. NOTE: Please refrain from using MutabilityMode for the time being. It will be replaced in the future (ROX-14276). MutabilityMode specifies whether and how an object can be modified. Default is ALLOW_MUTATE and means there are no modification restrictions; this is equivalent to the absence of a MutabilityMode specification. ALLOW_MUTATE_FORCED forbids all modifying operations except object removal with the force bit on. Be careful when changing the state of this field. For example, modifying an object from ALLOW_MUTATE to ALLOW_MUTATE_FORCED is allowed but will prohibit any further changes to it, including modifying it back to ALLOW_MUTATE. Enum Values ALLOW_MUTATE ALLOW_MUTATE_FORCED 52.12.7.7. TraitsOrigin Origin specifies the origin of an object. Objects can have four different origins: - IMPERATIVE: the object was created via the API. This is assumed by default. - DEFAULT: the object is a default object, such as default roles, access scopes, etc. - DECLARATIVE: the object is created via declarative configuration. - DECLARATIVE_ORPHANED: the object is created via declarative configuration and then unsuccessfully deleted (for example, because it is referenced by another object). Based on the origin, different rules apply to the objects. Objects with the DECLARATIVE origin are not allowed to be modified via API, only via declarative configuration. Additionally, they may not reference objects with the IMPERATIVE origin. Objects with the DEFAULT origin are not allowed to be modified via either API or declarative configuration. They may be referenced by all other objects. Objects with the IMPERATIVE origin are allowed to be modified via API, not via declarative configuration. They may reference all other objects.
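Roles created through the CreateRole endpoint documented above (POST /v1/roles/{name}) therefore carry the IMPERATIVE origin. The sketch below creates such a role; the base URL, the token, and the two referenced IDs are placeholders or assumptions, not values from this reference.

import os
import requests

BASE = "https://central.example.com"  # assumed Central address
HEADERS = {"Authorization": f"Bearer {os.environ['ROX_API_TOKEN']}"}  # assumed auth scheme

role = {
    "name": "ci-read-only",                    # must match the {name} path segment
    "description": "Read-only role for CI",
    "permissionSetId": "<permission-set-id>",  # placeholder: an existing PermissionSet
    "accessScopeId": "<access-scope-id>",      # placeholder: an existing AccessScope
}

resp = requests.post(f"{BASE}/v1/roles/{role['name']}", json=role,
                     headers=HEADERS, timeout=30)
resp.raise_for_status()  # a successful response (200) returns an empty object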
Objects with the DECLARATIVE_ORPHANED origin are not allowed to be modified via either API or declarative configuration. DECLARATIVE_ORPHANED resource can become DECLARATIVE again if it is redefined in declarative configuration. Objects with this origin will be cleaned up from the system immediately after they are not referenced by other resources anymore. They may be referenced by all other objects. Enum Values IMPERATIVE DEFAULT DECLARATIVE DECLARATIVE_ORPHANED 52.12.7.8. TraitsVisibility EXPERIMENTAL. visibility allows to specify whether the object should be visible for certain APIs. Enum Values VISIBLE HIDDEN 52.13. UpdateRole PUT /v1/roles/{name} 52.13.1. Description 52.13.2. Parameters 52.13.2.1. Path Parameters Name Description Required Default Pattern name `name` and `description` are provided by the user and can be changed. X null 52.13.2.2. Body Parameter Name Description Required Default Pattern body StorageRole X 52.13.3. Return Type Object 52.13.4. Content Type application/json 52.13.5. Responses Table 52.13. HTTP Response Codes Code Message Datatype 200 A successful response. Object 0 An unexpected error response. RuntimeError 52.13.6. Samples 52.13.7. Common object reference 52.13.7.1. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 52.13.7.1.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format typeUrl String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) 
Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. value byte[] Must be a valid serialized protocol buffer of the above specified type. byte 52.13.7.2. RuntimeError Field Name Required Nullable Type Description Format error String code Integer int32 message String details List of ProtobufAny 52.13.7.3. StorageAccess Enum Values NO_ACCESS READ_ACCESS READ_WRITE_ACCESS 52.13.7.4. StorageRole A role specifies which actions are allowed for which subset of cluster objects. Permissions be can either specified directly via setting resource_to_access together with global_access or by referencing a permission set by its id in permission_set_name. Field Name Required Nullable Type Description Format name String name and description are provided by the user and can be changed. description String permissionSetId String The associated PermissionSet and AccessScope for this Role. accessScopeId String globalAccess StorageAccess NO_ACCESS, READ_ACCESS, READ_WRITE_ACCESS, resourceToAccess Map of StorageAccess Deprecated 2021-04-20 in favor of permission_set_id . traits StorageTraits 52.13.7.5. StorageTraits Field Name Required Nullable Type Description Format mutabilityMode TraitsMutabilityMode ALLOW_MUTATE, ALLOW_MUTATE_FORCED, visibility TraitsVisibility VISIBLE, HIDDEN, origin TraitsOrigin IMPERATIVE, DEFAULT, DECLARATIVE, DECLARATIVE_ORPHANED, 52.13.7.6. TraitsMutabilityMode EXPERIMENTAL. NOTE: Please refer from using MutabilityMode for the time being. It will be replaced in the future (ROX-14276). MutabilityMode specifies whether and how an object can be modified. Default is ALLOW_MUTATE and means there are no modification restrictions; this is equivalent to the absence of MutabilityMode specification. ALLOW_MUTATE_FORCED forbids all modifying operations except object removal with force bit on. Be careful when changing the state of this field. For example, modifying an object from ALLOW_MUTATE to ALLOW_MUTATE_FORCED is allowed but will prohibit any further changes to it, including modifying it back to ALLOW_MUTATE. Enum Values ALLOW_MUTATE ALLOW_MUTATE_FORCED 52.13.7.7. TraitsOrigin Origin specifies the origin of an object. Objects can have four different origins: - IMPERATIVE: the object was created via the API. This is assumed by default. - DEFAULT: the object is a default object, such as default roles, access scopes etc. - DECLARATIVE: the object is created via declarative configuration. - DECLARATIVE_ORPHANED: the object is created via declarative configuration and then unsuccessfully deleted(for example, because it is referenced by another object) Based on the origin, different rules apply to the objects. Objects with the DECLARATIVE origin are not allowed to be modified via API, only via declarative configuration. Additionally, they may not reference objects with the IMPERATIVE origin. Objects with the DEFAULT origin are not allowed to be modified via either API or declarative configuration. They may be referenced by all other objects. Objects with the IMPERATIVE origin are allowed to be modified via API, not via declarative configuration. They may reference all other objects. 
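Only roles that these rules leave modifiable via the API (typically those with the IMPERATIVE origin) can be changed through the UpdateRole endpoint documented above (PUT /v1/roles/{name}). The sketch below fetches a role, edits its description, and writes it back; the base URL, token, and role name are assumptions.

import os
import requests

BASE = "https://central.example.com"  # assumed Central address
HEADERS = {"Authorization": f"Bearer {os.environ['ROX_API_TOKEN']}"}  # assumed auth scheme

name = "ci-read-only"  # hypothetical role name
resp = requests.get(f"{BASE}/v1/roles/{name}", headers=HEADERS, timeout=30)
resp.raise_for_status()
role = resp.json()

role["description"] = "Read-only role for CI pipelines"
resp = requests.put(f"{BASE}/v1/roles/{name}", json=role, headers=HEADERS, timeout=30)
resp.raise_for_status()  # a successful response (200) returns an empty object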
Objects with the DECLARATIVE_ORPHANED origin are not allowed to be modified via either API or declarative configuration. DECLARATIVE_ORPHANED resource can become DECLARATIVE again if it is redefined in declarative configuration. Objects with this origin will be cleaned up from the system immediately after they are not referenced by other resources anymore. They may be referenced by all other objects. Enum Values IMPERATIVE DEFAULT DECLARATIVE DECLARATIVE_ORPHANED 52.13.7.8. TraitsVisibility EXPERIMENTAL. visibility allows to specify whether the object should be visible for certain APIs. Enum Values VISIBLE HIDDEN 52.14. GetNamespacesForClusterAndPermissions GET /v1/sac/clusters/{clusterId}/namespaces GetNamespacesForClusterAndPermissions 52.14.1. Description Returns the list of namespace ID and namespace name pairs that belong to the requested cluster and for which the user has at least read access granted for the list of requested permissions that have namespace scope or narrower (i.e. global and cluster permissions from the input are ignored). If the input only contains permissions at global or cluster level, the output will be an empty list. If no permission is given in input, all namespaces allowed by the requester scope for any permission with namespace scope or narrower will be part of the response. 52.14.2. Parameters 52.14.2.1. Path Parameters Name Description Required Default Pattern clusterId X null 52.14.2.2. Query Parameters Name Description Required Default Pattern permissions String - null 52.14.3. Return Type V1GetNamespacesForClusterAndPermissionsResponse 52.14.4. Content Type application/json 52.14.5. Responses Table 52.14. HTTP Response Codes Code Message Datatype 200 A successful response. V1GetNamespacesForClusterAndPermissionsResponse 0 An unexpected error response. RuntimeError 52.14.6. Samples 52.14.7. Common object reference 52.14.7.1. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 52.14.7.1.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format typeUrl String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. 
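Returning to the endpoint this section documents, the sketch below requests the namespaces of one cluster, restricted to two permissions, using the repeated permissions query parameter from the parameter table above. The base URL, token, cluster ID, and permission names are placeholders or assumptions.

import os
import requests

BASE = "https://central.example.com"  # assumed Central address
HEADERS = {"Authorization": f"Bearer {os.environ['ROX_API_TOKEN']}"}  # assumed auth scheme

cluster_id = "<cluster-id>"  # placeholder
# Repeated `permissions` query parameters; global- and cluster-scoped ones are ignored.
params = [("permissions", "Deployment"), ("permissions", "NetworkGraph")]

resp = requests.get(f"{BASE}/v1/sac/clusters/{cluster_id}/namespaces",
                    headers=HEADERS, params=params, timeout=30)
resp.raise_for_status()
for ns in resp.json().get("namespaces", []):
    print(ns["id"], ns["name"])  # each entry is a V1ScopeObject (id, name pair)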
However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. value byte[] Must be a valid serialized protocol buffer of the above specified type. byte 52.14.7.2. RuntimeError Field Name Required Nullable Type Description Format error String code Integer int32 message String details List of ProtobufAny 52.14.7.3. V1GetNamespacesForClusterAndPermissionsResponse Field Name Required Nullable Type Description Format namespaces List of V1ScopeObject 52.14.7.4. V1ScopeObject ScopeObject represents an ID, name pair, which can apply to any entity that takes part in an access scope (so far Cluster and Namespace). Field Name Required Nullable Type Description Format id String name String 52.15. GetClustersForPermissions GET /v1/sac/clusters GetClustersForPermissions 52.15.1. Description Returns the list of cluster ID and cluster name pairs that have at least read allowed by the scope of the requesting user for the list of requested permissions. Effective access scopes are only considered for input permissions that have cluster scope or narrower (i.e. global permissions from the input are ignored). If the input only contains permissions at global level, the output will be an empty list. If no permission is given in input, all clusters allowed by the requester scope for any permission with cluster scope or narrower will be part of the response. 52.15.2. Parameters 52.15.2.1. Query Parameters Name Description Required Default Pattern pagination.limit - null pagination.offset - null pagination.sortOption.field - null pagination.sortOption.reversed - null pagination.sortOption.aggregateBy.aggrFunc - UNSET pagination.sortOption.aggregateBy.distinct - null permissions String - null 52.15.3. Return Type V1GetClustersForPermissionsResponse 52.15.4. Content Type application/json 52.15.5. Responses Table 52.15. HTTP Response Codes Code Message Datatype 200 A successful response. V1GetClustersForPermissionsResponse 0 An unexpected error response. RuntimeError 52.15.6. Samples 52.15.7. Common object reference 52.15.7.1. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. 
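The bodies of Example 1 and Example 2 are not reproduced in this rendering. The sketch below shows the same pack/unpack pattern in Python; the generated module foo_pb2 and its Foo message are hypothetical stand-ins for an application message.

from google.protobuf import any_pb2
import foo_pb2  # hypothetical generated module

foo = foo_pb2.Foo(name="example")

# Pack: the type URL defaults to 'type.googleapis.com/<full.type.name>'.
any_msg = any_pb2.Any()
any_msg.Pack(foo)

# Unpack: check the wrapped type, then copy the payload into a concrete message.
unpacked = foo_pb2.Foo()
if any_msg.Is(foo_pb2.Foo.DESCRIPTOR):
    any_msg.Unpack(unpacked)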
The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 52.15.7.1.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format typeUrl String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. value byte[] Must be a valid serialized protocol buffer of the above specified type. byte 52.15.7.2. RuntimeError Field Name Required Nullable Type Description Format error String code Integer int32 message String details List of ProtobufAny 52.15.7.3. V1GetClustersForPermissionsResponse Field Name Required Nullable Type Description Format clusters List of V1ScopeObject 52.15.7.4. V1ScopeObject ScopeObject represents an ID, name pair, which can apply to any entity that takes part in an access scope (so far Cluster and Namespace). Field Name Required Nullable Type Description Format id String name String 52.16. ListSimpleAccessScopes GET /v1/simpleaccessscopes 52.16.1. Description 52.16.2. Parameters 52.16.3. Return Type V1ListSimpleAccessScopesResponse 52.16.4. Content Type application/json 52.16.5. Responses Table 52.16. HTTP Response Codes Code Message Datatype 200 A successful response. V1ListSimpleAccessScopesResponse 0 An unexpected error response. RuntimeError 52.16.6. Samples 52.16.7. Common object reference 52.16.7.1. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. 
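Before the packed-message examples that follow, the sketch below shows a call to the ListSimpleAccessScopes endpoint documented above (GET /v1/simpleaccessscopes); the base URL and token are assumptions about the deployment.

import os
import requests

BASE = "https://central.example.com"  # assumed Central address
HEADERS = {"Authorization": f"Bearer {os.environ['ROX_API_TOKEN']}"}  # assumed auth scheme

resp = requests.get(f"{BASE}/v1/simpleaccessscopes", headers=HEADERS, timeout=30)
resp.raise_for_status()
for scope in resp.json().get("accessScopes", []):
    print(scope["id"], scope["name"])  # StorageSimpleAccessScope entries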
Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 52.16.7.1.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format typeUrl String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. value byte[] Must be a valid serialized protocol buffer of the above specified type. byte 52.16.7.2. RuntimeError Field Name Required Nullable Type Description Format error String code Integer int32 message String details List of ProtobufAny 52.16.7.3. SimpleAccessScopeRules Each element of any repeated field is an individual rule. Rules are joined by logical OR: if there exists a rule allowing resource x , x is in the access scope. Field Name Required Nullable Type Description Format includedClusters List of string includedNamespaces List of SimpleAccessScopeRulesNamespace clusterLabelSelectors List of StorageSetBasedLabelSelector namespaceLabelSelectors List of StorageSetBasedLabelSelector 52.16.7.4. SimpleAccessScopeRulesNamespace Field Name Required Nullable Type Description Format clusterName String Both fields must be set. namespaceName String 52.16.7.5. StorageSetBasedLabelSelector SetBasedLabelSelector only allows set-based label requirements. available tag: 3 Field Name Required Nullable Type Description Format requirements List of StorageSetBasedLabelSelectorRequirement 52.16.7.6. StorageSetBasedLabelSelectorOperator Enum Values UNKNOWN IN NOT_IN EXISTS NOT_EXISTS 52.16.7.7. 
StorageSetBasedLabelSelectorRequirement Field Name Required Nullable Type Description Format key String op StorageSetBasedLabelSelectorOperator UNKNOWN, IN, NOT_IN, EXISTS, NOT_EXISTS, values List of string 52.16.7.8. StorageSimpleAccessScope Simple access scope is a (simple) selection criteria for scoped resources. It does not allow multi-component AND-rules nor set operations on names. Field Name Required Nullable Type Description Format id String id is generated and cannot be changed. name String name and description are provided by the user and can be changed. description String rules SimpleAccessScopeRules traits StorageTraits 52.16.7.9. StorageTraits Field Name Required Nullable Type Description Format mutabilityMode TraitsMutabilityMode ALLOW_MUTATE, ALLOW_MUTATE_FORCED, visibility TraitsVisibility VISIBLE, HIDDEN, origin TraitsOrigin IMPERATIVE, DEFAULT, DECLARATIVE, DECLARATIVE_ORPHANED, 52.16.7.10. TraitsMutabilityMode EXPERIMENTAL. NOTE: Please refer from using MutabilityMode for the time being. It will be replaced in the future (ROX-14276). MutabilityMode specifies whether and how an object can be modified. Default is ALLOW_MUTATE and means there are no modification restrictions; this is equivalent to the absence of MutabilityMode specification. ALLOW_MUTATE_FORCED forbids all modifying operations except object removal with force bit on. Be careful when changing the state of this field. For example, modifying an object from ALLOW_MUTATE to ALLOW_MUTATE_FORCED is allowed but will prohibit any further changes to it, including modifying it back to ALLOW_MUTATE. Enum Values ALLOW_MUTATE ALLOW_MUTATE_FORCED 52.16.7.11. TraitsOrigin Origin specifies the origin of an object. Objects can have four different origins: - IMPERATIVE: the object was created via the API. This is assumed by default. - DEFAULT: the object is a default object, such as default roles, access scopes etc. - DECLARATIVE: the object is created via declarative configuration. - DECLARATIVE_ORPHANED: the object is created via declarative configuration and then unsuccessfully deleted(for example, because it is referenced by another object) Based on the origin, different rules apply to the objects. Objects with the DECLARATIVE origin are not allowed to be modified via API, only via declarative configuration. Additionally, they may not reference objects with the IMPERATIVE origin. Objects with the DEFAULT origin are not allowed to be modified via either API or declarative configuration. They may be referenced by all other objects. Objects with the IMPERATIVE origin are allowed to be modified via API, not via declarative configuration. They may reference all other objects. Objects with the DECLARATIVE_ORPHANED origin are not allowed to be modified via either API or declarative configuration. DECLARATIVE_ORPHANED resource can become DECLARATIVE again if it is redefined in declarative configuration. Objects with this origin will be cleaned up from the system immediately after they are not referenced by other resources anymore. They may be referenced by all other objects. Enum Values IMPERATIVE DEFAULT DECLARATIVE DECLARATIVE_ORPHANED 52.16.7.12. TraitsVisibility EXPERIMENTAL. visibility allows to specify whether the object should be visible for certain APIs. Enum Values VISIBLE HIDDEN 52.16.7.13. V1ListSimpleAccessScopesResponse Field Name Required Nullable Type Description Format accessScopes List of StorageSimpleAccessScope 52.17. DeleteSimpleAccessScope DELETE /v1/simpleaccessscopes/{id} 52.17.1. Description 52.17.2. 
Parameters 52.17.2.1. Path Parameters Name Description Required Default Pattern id X null 52.17.3. Return Type Object 52.17.4. Content Type application/json 52.17.5. Responses Table 52.17. HTTP Response Codes Code Message Datatype 200 A successful response. Object 0 An unexpected error response. RuntimeError 52.17.6. Samples 52.17.7. Common object reference 52.17.7.1. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 52.17.7.1.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format typeUrl String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. value byte[] Must be a valid serialized protocol buffer of the above specified type. byte 52.17.7.2. RuntimeError Field Name Required Nullable Type Description Format error String code Integer int32 message String details List of ProtobufAny 52.18. GetSimpleAccessScope GET /v1/simpleaccessscopes/{id} 52.18.1. Description 52.18.2. Parameters 52.18.2.1. Path Parameters Name Description Required Default Pattern id X null 52.18.3. Return Type StorageSimpleAccessScope 52.18.4. Content Type application/json 52.18.5. Responses Table 52.18. 
HTTP Response Codes Code Message Datatype 200 A successful response. StorageSimpleAccessScope 0 An unexpected error response. RuntimeError 52.18.6. Samples 52.18.7. Common object reference 52.18.7.1. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 52.18.7.1.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format typeUrl String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. value byte[] Must be a valid serialized protocol buffer of the above specified type. byte 52.18.7.2. RuntimeError Field Name Required Nullable Type Description Format error String code Integer int32 message String details List of ProtobufAny 52.18.7.3. SimpleAccessScopeRules Each element of any repeated field is an individual rule. Rules are joined by logical OR: if there exists a rule allowing resource x , x is in the access scope. Field Name Required Nullable Type Description Format includedClusters List of string includedNamespaces List of SimpleAccessScopeRulesNamespace clusterLabelSelectors List of StorageSetBasedLabelSelector namespaceLabelSelectors List of StorageSetBasedLabelSelector 52.18.7.4. 
SimpleAccessScopeRulesNamespace Field Name Required Nullable Type Description Format clusterName String Both fields must be set. namespaceName String 52.18.7.5. StorageSetBasedLabelSelector SetBasedLabelSelector only allows set-based label requirements. available tag: 3 Field Name Required Nullable Type Description Format requirements List of StorageSetBasedLabelSelectorRequirement 52.18.7.6. StorageSetBasedLabelSelectorOperator Enum Values UNKNOWN IN NOT_IN EXISTS NOT_EXISTS 52.18.7.7. StorageSetBasedLabelSelectorRequirement Field Name Required Nullable Type Description Format key String op StorageSetBasedLabelSelectorOperator UNKNOWN, IN, NOT_IN, EXISTS, NOT_EXISTS, values List of string 52.18.7.8. StorageSimpleAccessScope Simple access scope is a (simple) selection criteria for scoped resources. It does not allow multi-component AND-rules nor set operations on names. Field Name Required Nullable Type Description Format id String id is generated and cannot be changed. name String name and description are provided by the user and can be changed. description String rules SimpleAccessScopeRules traits StorageTraits 52.18.7.9. StorageTraits Field Name Required Nullable Type Description Format mutabilityMode TraitsMutabilityMode ALLOW_MUTATE, ALLOW_MUTATE_FORCED, visibility TraitsVisibility VISIBLE, HIDDEN, origin TraitsOrigin IMPERATIVE, DEFAULT, DECLARATIVE, DECLARATIVE_ORPHANED, 52.18.7.10. TraitsMutabilityMode EXPERIMENTAL. NOTE: Please refer from using MutabilityMode for the time being. It will be replaced in the future (ROX-14276). MutabilityMode specifies whether and how an object can be modified. Default is ALLOW_MUTATE and means there are no modification restrictions; this is equivalent to the absence of MutabilityMode specification. ALLOW_MUTATE_FORCED forbids all modifying operations except object removal with force bit on. Be careful when changing the state of this field. For example, modifying an object from ALLOW_MUTATE to ALLOW_MUTATE_FORCED is allowed but will prohibit any further changes to it, including modifying it back to ALLOW_MUTATE. Enum Values ALLOW_MUTATE ALLOW_MUTATE_FORCED 52.18.7.11. TraitsOrigin Origin specifies the origin of an object. Objects can have four different origins: - IMPERATIVE: the object was created via the API. This is assumed by default. - DEFAULT: the object is a default object, such as default roles, access scopes etc. - DECLARATIVE: the object is created via declarative configuration. - DECLARATIVE_ORPHANED: the object is created via declarative configuration and then unsuccessfully deleted(for example, because it is referenced by another object) Based on the origin, different rules apply to the objects. Objects with the DECLARATIVE origin are not allowed to be modified via API, only via declarative configuration. Additionally, they may not reference objects with the IMPERATIVE origin. Objects with the DEFAULT origin are not allowed to be modified via either API or declarative configuration. They may be referenced by all other objects. Objects with the IMPERATIVE origin are allowed to be modified via API, not via declarative configuration. They may reference all other objects. Objects with the DECLARATIVE_ORPHANED origin are not allowed to be modified via either API or declarative configuration. DECLARATIVE_ORPHANED resource can become DECLARATIVE again if it is redefined in declarative configuration. Objects with this origin will be cleaned up from the system immediately after they are not referenced by other resources anymore. 
They may be referenced by all other objects. Enum Values IMPERATIVE DEFAULT DECLARATIVE DECLARATIVE_ORPHANED 52.18.7.12. TraitsVisibility EXPERIMENTAL. visibility allows to specify whether the object should be visible for certain APIs. Enum Values VISIBLE HIDDEN 52.19. PutSimpleAccessScope PUT /v1/simpleaccessscopes/{id} 52.19.1. Description 52.19.2. Parameters 52.19.2.1. Path Parameters Name Description Required Default Pattern id `id` is generated and cannot be changed. X null 52.19.2.2. Body Parameter Name Description Required Default Pattern body StorageSimpleAccessScope X 52.19.3. Return Type Object 52.19.4. Content Type application/json 52.19.5. Responses Table 52.19. HTTP Response Codes Code Message Datatype 200 A successful response. Object 0 An unexpected error response. RuntimeError 52.19.6. Samples 52.19.7. Common object reference 52.19.7.1. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 52.19.7.1.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format typeUrl String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. value byte[] Must be a valid serialized protocol buffer of the above specified type. 
byte 52.19.7.2. RuntimeError Field Name Required Nullable Type Description Format error String code Integer int32 message String details List of ProtobufAny 52.19.7.3. SimpleAccessScopeRules Each element of any repeated field is an individual rule. Rules are joined by logical OR: if there exists a rule allowing resource x, x is in the access scope. Field Name Required Nullable Type Description Format includedClusters List of string includedNamespaces List of SimpleAccessScopeRulesNamespace clusterLabelSelectors List of StorageSetBasedLabelSelector namespaceLabelSelectors List of StorageSetBasedLabelSelector 52.19.7.4. SimpleAccessScopeRulesNamespace Field Name Required Nullable Type Description Format clusterName String Both fields must be set. namespaceName String 52.19.7.5. StorageSetBasedLabelSelector SetBasedLabelSelector only allows set-based label requirements. available tag: 3 Field Name Required Nullable Type Description Format requirements List of StorageSetBasedLabelSelectorRequirement 52.19.7.6. StorageSetBasedLabelSelectorOperator Enum Values UNKNOWN IN NOT_IN EXISTS NOT_EXISTS 52.19.7.7. StorageSetBasedLabelSelectorRequirement Field Name Required Nullable Type Description Format key String op StorageSetBasedLabelSelectorOperator UNKNOWN, IN, NOT_IN, EXISTS, NOT_EXISTS, values List of string 52.19.7.8. StorageSimpleAccessScope A simple access scope is a (simple) selection criterion for scoped resources. It allows neither multi-component AND-rules nor set operations on names. Field Name Required Nullable Type Description Format id String id is generated and cannot be changed. name String name and description are provided by the user and can be changed. description String rules SimpleAccessScopeRules traits StorageTraits 52.19.7.9. StorageTraits Field Name Required Nullable Type Description Format mutabilityMode TraitsMutabilityMode ALLOW_MUTATE, ALLOW_MUTATE_FORCED, visibility TraitsVisibility VISIBLE, HIDDEN, origin TraitsOrigin IMPERATIVE, DEFAULT, DECLARATIVE, DECLARATIVE_ORPHANED, 52.19.7.10. TraitsMutabilityMode EXPERIMENTAL. NOTE: Please refrain from using MutabilityMode for the time being. It will be replaced in the future (ROX-14276). MutabilityMode specifies whether and how an object can be modified. Default is ALLOW_MUTATE and means there are no modification restrictions; this is equivalent to the absence of a MutabilityMode specification. ALLOW_MUTATE_FORCED forbids all modifying operations except object removal with the force bit on. Be careful when changing the state of this field. For example, modifying an object from ALLOW_MUTATE to ALLOW_MUTATE_FORCED is allowed but will prohibit any further changes to it, including modifying it back to ALLOW_MUTATE. Enum Values ALLOW_MUTATE ALLOW_MUTATE_FORCED 52.19.7.11. TraitsOrigin Origin specifies the origin of an object. Objects can have four different origins: - IMPERATIVE: the object was created via the API. This is assumed by default. - DEFAULT: the object is a default object, such as default roles, access scopes, etc. - DECLARATIVE: the object is created via declarative configuration. - DECLARATIVE_ORPHANED: the object is created via declarative configuration and then unsuccessfully deleted (for example, because it is referenced by another object). Based on the origin, different rules apply to the objects. Objects with the DECLARATIVE origin are not allowed to be modified via API, only via declarative configuration. Additionally, they may not reference objects with the IMPERATIVE origin.
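An access scope that remains modifiable via the API (typically one with the IMPERATIVE origin) can be updated through the PutSimpleAccessScope endpoint documented above (PUT /v1/simpleaccessscopes/{id}). The sketch below replaces the rules of one scope; the IDs, names, base URL, and token are placeholders or assumptions.

import os
import requests

BASE = "https://central.example.com"  # assumed Central address
HEADERS = {"Authorization": f"Bearer {os.environ['ROX_API_TOKEN']}"}  # assumed auth scheme

scope_id = "<access-scope-id>"  # placeholder: an existing access scope
scope = {
    "id": scope_id,             # id is generated and cannot be changed
    "name": "team-a-scope",
    "description": "Namespaces owned by team A",
    "rules": {
        "includedNamespaces": [
            {"clusterName": "production", "namespaceName": "team-a"},
        ],
    },
}

resp = requests.put(f"{BASE}/v1/simpleaccessscopes/{scope_id}", json=scope,
                    headers=HEADERS, timeout=30)
resp.raise_for_status()  # a successful response (200) returns an empty object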
Objects with the DEFAULT origin are not allowed to be modified via either API or declarative configuration. They may be referenced by all other objects. Objects with the IMPERATIVE origin are allowed to be modified via API, not via declarative configuration. They may reference all other objects. Objects with the DECLARATIVE_ORPHANED origin are not allowed to be modified via either API or declarative configuration. DECLARATIVE_ORPHANED resource can become DECLARATIVE again if it is redefined in declarative configuration. Objects with this origin will be cleaned up from the system immediately after they are not referenced by other resources anymore. They may be referenced by all other objects. Enum Values IMPERATIVE DEFAULT DECLARATIVE DECLARATIVE_ORPHANED 52.19.7.12. TraitsVisibility EXPERIMENTAL. visibility allows to specify whether the object should be visible for certain APIs. Enum Values VISIBLE HIDDEN 52.20. PostSimpleAccessScope POST /v1/simpleaccessscopes PostSimpleAccessScope 52.20.1. Description SimpleAccessScope.id is disallowed in request and set in response. 52.20.2. Parameters 52.20.2.1. Body Parameter Name Description Required Default Pattern body StorageSimpleAccessScope X 52.20.3. Return Type StorageSimpleAccessScope 52.20.4. Content Type application/json 52.20.5. Responses Table 52.20. HTTP Response Codes Code Message Datatype 200 A successful response. StorageSimpleAccessScope 0 An unexpected error response. RuntimeError 52.20.6. Samples 52.20.7. Common object reference 52.20.7.1. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 52.20.7.1.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format typeUrl String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. 
* Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. value byte[] Must be a valid serialized protocol buffer of the above specified type. byte 52.20.7.2. RuntimeError Field Name Required Nullable Type Description Format error String code Integer int32 message String details List of ProtobufAny 52.20.7.3. SimpleAccessScopeRules Each element of any repeated field is an individual rule. Rules are joined by logical OR: if there exists a rule allowing resource x , x is in the access scope. Field Name Required Nullable Type Description Format includedClusters List of string includedNamespaces List of SimpleAccessScopeRulesNamespace clusterLabelSelectors List of StorageSetBasedLabelSelector namespaceLabelSelectors List of StorageSetBasedLabelSelector 52.20.7.4. SimpleAccessScopeRulesNamespace Field Name Required Nullable Type Description Format clusterName String Both fields must be set. namespaceName String 52.20.7.5. StorageSetBasedLabelSelector SetBasedLabelSelector only allows set-based label requirements. available tag: 3 Field Name Required Nullable Type Description Format requirements List of StorageSetBasedLabelSelectorRequirement 52.20.7.6. StorageSetBasedLabelSelectorOperator Enum Values UNKNOWN IN NOT_IN EXISTS NOT_EXISTS 52.20.7.7. StorageSetBasedLabelSelectorRequirement Field Name Required Nullable Type Description Format key String op StorageSetBasedLabelSelectorOperator UNKNOWN, IN, NOT_IN, EXISTS, NOT_EXISTS, values List of string 52.20.7.8. StorageSimpleAccessScope Simple access scope is a (simple) selection criteria for scoped resources. It does not allow multi-component AND-rules nor set operations on names. Field Name Required Nullable Type Description Format id String id is generated and cannot be changed. name String name and description are provided by the user and can be changed. description String rules SimpleAccessScopeRules traits StorageTraits 52.20.7.9. StorageTraits Field Name Required Nullable Type Description Format mutabilityMode TraitsMutabilityMode ALLOW_MUTATE, ALLOW_MUTATE_FORCED, visibility TraitsVisibility VISIBLE, HIDDEN, origin TraitsOrigin IMPERATIVE, DEFAULT, DECLARATIVE, DECLARATIVE_ORPHANED, 52.20.7.10. TraitsMutabilityMode EXPERIMENTAL. NOTE: Please refer from using MutabilityMode for the time being. It will be replaced in the future (ROX-14276). MutabilityMode specifies whether and how an object can be modified. Default is ALLOW_MUTATE and means there are no modification restrictions; this is equivalent to the absence of MutabilityMode specification. ALLOW_MUTATE_FORCED forbids all modifying operations except object removal with force bit on. Be careful when changing the state of this field. For example, modifying an object from ALLOW_MUTATE to ALLOW_MUTATE_FORCED is allowed but will prohibit any further changes to it, including modifying it back to ALLOW_MUTATE. Enum Values ALLOW_MUTATE ALLOW_MUTATE_FORCED 52.20.7.11. TraitsOrigin Origin specifies the origin of an object. 
Objects can have four different origins: - IMPERATIVE: the object was created via the API. This is assumed by default. - DEFAULT: the object is a default object, such as default roles, access scopes etc. - DECLARATIVE: the object is created via declarative configuration. - DECLARATIVE_ORPHANED: the object is created via declarative configuration and then unsuccessfully deleted(for example, because it is referenced by another object) Based on the origin, different rules apply to the objects. Objects with the DECLARATIVE origin are not allowed to be modified via API, only via declarative configuration. Additionally, they may not reference objects with the IMPERATIVE origin. Objects with the DEFAULT origin are not allowed to be modified via either API or declarative configuration. They may be referenced by all other objects. Objects with the IMPERATIVE origin are allowed to be modified via API, not via declarative configuration. They may reference all other objects. Objects with the DECLARATIVE_ORPHANED origin are not allowed to be modified via either API or declarative configuration. DECLARATIVE_ORPHANED resource can become DECLARATIVE again if it is redefined in declarative configuration. Objects with this origin will be cleaned up from the system immediately after they are not referenced by other resources anymore. They may be referenced by all other objects. Enum Values IMPERATIVE DEFAULT DECLARATIVE DECLARATIVE_ORPHANED 52.20.7.12. TraitsVisibility EXPERIMENTAL. visibility allows to specify whether the object should be visible for certain APIs. Enum Values VISIBLE HIDDEN
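For illustration only (this request is not part of the generated reference), creating an access scope through the POST /v1/simpleaccessscopes endpoint described above with curl might look like the following; ROX_API_TOKEN and ROX_CENTRAL_ADDRESS are assumed placeholders for a valid API token and the Central endpoint, and the request body uses only fields defined in StorageSimpleAccessScope (id is omitted because it is set in the response):
$ curl -sk -X POST -H "Authorization: Bearer $ROX_API_TOKEN" -H "Content-Type: application/json" -d '{"name": "dev-clusters", "description": "example scope", "rules": {"includedClusters": ["dev-cluster"]}}' https://$ROX_CENTRAL_ADDRESS/v1/simpleaccessscopes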
[ "Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }", "Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }", "Example 3: Pack and unpack a message in Python.", "foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)", "Example 4: Pack and unpack a message in Go", "foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }", "package google.profile; message Person { string first_name = 1; string last_name = 2; }", "{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }", "{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }", "Next available tag: 4", "Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }", "Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }", "Example 3: Pack and unpack a message in Python.", "foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)", "Example 4: Pack and unpack a message in Go", "foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }", "package google.profile; message Person { string first_name = 1; string last_name = 2; }", "{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }", "{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }", "Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }", "Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }", "Example 3: Pack and unpack a message in Python.", "foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)", "Example 4: Pack and unpack a message in Go", "foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }", "package google.profile; message Person { string first_name = 1; string last_name = 2; }", "{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }", "{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }", "Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }", "Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }", "Example 3: Pack and unpack a message in Python.", "foo = Foo(...) 
any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)", "Example 4: Pack and unpack a message in Go", "foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }", "package google.profile; message Person { string first_name = 1; string last_name = 2; }", "{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }", "{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }", "Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }", "Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }", "Example 3: Pack and unpack a message in Python.", "foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)", "Example 4: Pack and unpack a message in Go", "foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }", "package google.profile; message Person { string first_name = 1; string last_name = 2; }", "{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }", "{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }", "Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }", "Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }", "Example 3: Pack and unpack a message in Python.", "foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)", "Example 4: Pack and unpack a message in Go", "foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }", "package google.profile; message Person { string first_name = 1; string last_name = 2; }", "{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }", "{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }", "Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }", "Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }", "Example 3: Pack and unpack a message in Python.", "foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)", "Example 4: Pack and unpack a message in Go", "foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }", "package google.profile; message Person { string first_name = 1; string last_name = 2; }", "{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }", "{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }", "Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }", "Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }", "Example 3: Pack and unpack a message in Python.", "foo = Foo(...) 
any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)", "Example 4: Pack and unpack a message in Go", "foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }", "package google.profile; message Person { string first_name = 1; string last_name = 2; }", "{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }", "{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }", "Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }", "Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }", "Example 3: Pack and unpack a message in Python.", "foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)", "Example 4: Pack and unpack a message in Go", "foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }", "package google.profile; message Person { string first_name = 1; string last_name = 2; }", "{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }", "{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }", "Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }", "Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }", "Example 3: Pack and unpack a message in Python.", "foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)", "Example 4: Pack and unpack a message in Go", "foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }", "package google.profile; message Person { string first_name = 1; string last_name = 2; }", "{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }", "{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }", "Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }", "Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }", "Example 3: Pack and unpack a message in Python.", "foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)", "Example 4: Pack and unpack a message in Go", "foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }", "package google.profile; message Person { string first_name = 1; string last_name = 2; }", "{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }", "{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }", "Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }", "Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }", "Example 3: Pack and unpack a message in Python.", "foo = Foo(...) 
any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)", "Example 4: Pack and unpack a message in Go", "foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }", "package google.profile; message Person { string first_name = 1; string last_name = 2; }", "{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }", "{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }", "Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }", "Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }", "Example 3: Pack and unpack a message in Python.", "foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)", "Example 4: Pack and unpack a message in Go", "foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }", "package google.profile; message Person { string first_name = 1; string last_name = 2; }", "{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }", "{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }", "Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }", "Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }", "Example 3: Pack and unpack a message in Python.", "foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)", "Example 4: Pack and unpack a message in Go", "foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }", "package google.profile; message Person { string first_name = 1; string last_name = 2; }", "{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }", "{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }", "Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }", "Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }", "Example 3: Pack and unpack a message in Python.", "foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)", "Example 4: Pack and unpack a message in Go", "foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }", "package google.profile; message Person { string first_name = 1; string last_name = 2; }", "{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }", "{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }", "Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }", "Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }", "Example 3: Pack and unpack a message in Python.", "foo = Foo(...) 
any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)", "Example 4: Pack and unpack a message in Go", "foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }", "package google.profile; message Person { string first_name = 1; string last_name = 2; }", "{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }", "{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }", "Next available tag: 4", "Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }", "Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }", "Example 3: Pack and unpack a message in Python.", "foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)", "Example 4: Pack and unpack a message in Go", "foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }", "package google.profile; message Person { string first_name = 1; string last_name = 2; }", "{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }", "{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }", "Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }", "Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }", "Example 3: Pack and unpack a message in Python.", "foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)", "Example 4: Pack and unpack a message in Go", "foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }", "package google.profile; message Person { string first_name = 1; string last_name = 2; }", "{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }", "{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }", "Next available tag: 4", "Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }", "Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }", "Example 3: Pack and unpack a message in Python.", "foo = Foo(...) 
any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)", "Example 4: Pack and unpack a message in Go", "foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }", "package google.profile; message Person { string first_name = 1; string last_name = 2; }", "{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }", "{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }", "Next available tag: 4", "Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }", "Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }", "Example 3: Pack and unpack a message in Python.", "foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)", "Example 4: Pack and unpack a message in Go", "foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }", "package google.profile; message Person { string first_name = 1; string last_name = 2; }", "{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }", "{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }", "Next available tag: 4" ]
https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.5/html/api_reference/roleservice
Chapter 15. Troubleshooting
Chapter 15. Troubleshooting There are cases where the Assisted Installer cannot begin the installation or the cluster fails to install properly. In these events, it is helpful to understand the likely failure modes as well as how to troubleshoot the failure. 15.1. Troubleshooting discovery ISO issues The Assisted Installer uses an ISO image to run an agent that registers the host to the cluster and performs hardware and network validations before attempting to install OpenShift. You can follow these procedures to troubleshoot problems related to the host discovery. Once you start the host with the discovery ISO image, the Assisted Installer discovers the host and presents it in the Assisted Service web console. See Configuring the discovery image for additional details. 15.1.1. Verify the discovery agent is running Prerequisites You have created an infrastructure environment by using the API or have created a cluster by using the web console. You booted a host with the Infrastructure Environment discovery ISO and the host failed to register. You have SSH access to the host. You provided an SSH public key in the "Add hosts" dialog before generating the Discovery ISO so that you can SSH into your machine without a password. Procedure Verify that your host machine is powered on. If you selected DHCP networking, check that the DHCP server is enabled. If you selected Static IP, bridges and bonds networking, check that your configurations are correct. Verify that you can access your host machine using SSH, a console such as the BMC, or a virtual machine console: $ ssh core@<host_ip_address> You can specify a private key file by using the -i parameter if it is not stored in the default directory. $ ssh -i <ssh_private_key_file> core@<host_ip_address> If you cannot connect to the host over SSH, the host failed during boot or it failed to configure the network. Upon login you should see this message: Example login If you do not see this message, the host did not boot with the Assisted Installer ISO image. Make sure you configured the boot order properly (the host should boot once from the live ISO). Check the agent service logs: $ sudo journalctl -u agent.service In the following example, the errors indicate there is a network issue: Example agent service log If there is an error pulling the agent image, check the proxy settings. Verify that the host is connected to the network. You can use nmcli to get additional information about your network configuration. 15.1.2. Verify the agent can access the assisted-service Prerequisites You have created an Infrastructure Environment by using the API or have created a cluster by using the web console. You booted a host with the Infrastructure Environment discovery ISO and the host failed to register. You verified the discovery agent is running. Procedure Check the agent logs to verify that the agent can access the Assisted Service: $ sudo journalctl TAG=agent The errors in the following example indicate that the agent failed to access the Assisted Service. Example agent log Check the proxy settings you configured for the cluster. If configured, the proxy must allow access to the Assisted Service URL. 15.2. Troubleshooting minimal discovery ISO issues Use the minimal ISO image when the virtual media connection has limited bandwidth. It includes only what the agent requires to boot a host with networking. The majority of the content is downloaded upon boot.
The resulting ISO image is about 100 MB in size compared to 1 GB for the full ISO image. 15.2.1. Troubleshooting minimal ISO boot failure by interrupting the boot process If your environment requires static network configuration to access the Assisted Installer service, any issues with that configuration might prevent the minimal ISO from booting properly. If the boot screen shows that the host has failed to download the root file system image, the network might not be configured correctly. You can interrupt the kernel boot early in the bootstrap process, before the root file system image is downloaded. This allows you to access the root console and review the network configurations. Example rootfs download failure Procedure Add the .spec.kernelArguments stanza to the infraEnv object of the cluster you are deploying: Note For details on modifying an infrastructure environment, see Additional Resources. # ... spec: clusterRef: name: sno1 namespace: sno1 cpuArchitecture: x86_64 ipxeScriptType: DiscoveryImageAlways kernelArguments: - operation: append value: rd.break=initqueue 1 nmStateConfigLabelSelector: matchLabels: nmstate-label: sno1 pullSecretRef: name: assisted-deployment-pull-secret 1 rd.break=initqueue interrupts the boot at the dracut main loop. See rd.break options for debugging kernel boot for details. Wait for the related nodes to reboot automatically and for the boot to stop at the initqueue stage, before rootfs is downloaded. You will be redirected to the root console. Identify and change the incorrect network configurations. Here are some useful diagnostic commands: View system logs by using journalctl, for example: # journalctl -p err //Sorts logs by errors # journalctl -p crit //Sorts logs by critical errors # journalctl -p warning //Sorts logs by warnings View network connection information by using nmcli, as follows: # nmcli conn show Check the configuration files for incorrect network connections, for example: # cat /etc/assisted/network/host0/eno3.nmconnection Press Ctrl+D to resume the bootstrap process. The server downloads rootfs and completes the process. Reopen the infraEnv object and remove the .spec.kernelArguments stanza. Additional resources Modifying an infrastructure environment 15.3. Correcting a host's boot order Once the installation that runs as part of the Discovery Image completes, the Assisted Installer reboots the host. The host must boot from its installation disk to continue forming the cluster. If you have not correctly configured the host's boot order, it will boot from another disk instead, interrupting the installation. If the host boots the discovery image again, the Assisted Installer will immediately detect this event and set the host's status to Installing Pending User Action. Alternatively, if the Assisted Installer does not detect that the host has booted the correct disk within the allotted time, it will also set this host status. Procedure Reboot the host and set its boot order to boot from the installation disk. If you did not select an installation disk, the Assisted Installer selected one for you. To view the selected installation disk, click to expand the host's information in the host inventory, and check which disk has the "Installation disk" role. 15.4.
Rectifying partially-successful installations There are cases where the Assisted Installer declares an installation to be successful even though it encountered errors: If you requested to install OLM operators and one or more failed to install, log in to the cluster's console to remediate the failures. If you requested to install more than two worker nodes and at least one failed to install, but at least two succeeded, add the failed workers to the installed cluster. 15.5. API connectivity failure when adding nodes to a cluster When you add a node to an existing cluster as part of Day 2 operations, the node downloads the ignition configuration file from the Day 1 cluster. If the download fails and the node is unable to connect to the cluster, the status of the host in the Host discovery step changes to Insufficient. Clicking this status displays the following error message: The host failed to download the ignition file from <URL>. You must ensure the host can reach the URL. Check your DNS and network configuration or update the IP address or domain used to reach the cluster. error: ignition file download failed.... no route to host There are several possible reasons for the connectivity failure. Here are some recommended actions. Procedure Check the IP address and domain name of the cluster: Click the set the IP or domain used to reach the cluster hyperlink. In the Update cluster hostname window, enter the correct IP address or domain name for the cluster. Check your DNS settings to ensure that the DNS can resolve the domain that you provided. Ensure that port 22624 is open in all firewalls. Connect to the host over SSH and check the agent logs to verify that the agent can access the Assisted Service: $ sudo journalctl TAG=agent Note For more details, see Verify the agent can access the Assisted Service.
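Before changing firewall rules, a quick reachability check from the affected host can confirm whether the ignition port is actually blocked. This is a suggested diagnostic rather than a documented step, and <cluster_api_address> is a placeholder for the IP address or domain name you use to reach the cluster:
$ nc -zv <cluster_api_address> 22624
If the connection is refused or times out, review the firewall and routing between the host and the cluster before retrying the node addition.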
[ "ssh core@<host_ip_address>", "ssh -i <ssh_private_key_file> core@<host_ip_address>", "sudo journalctl -u agent.service", "sudo journalctl TAG=agent", "spec: clusterRef: name: sno1 namespace: sno1 cpuArchitecture: x86_64 ipxeScriptType: DiscoveryImageAlways kernelArguments: - operation: append value: rd.break=initqueue 1 nmStateConfigLabelSelector: matchLabels: nmstate-label: sno1 pullSecretRef: name: assisted-deployment-pull-secret", "journalctl -p err //Sorts logs by errors journalctl -p crit //Sorts logs by critical errors journalctl -p warning //Sorts logs by warnings", "nmcli conn show", "cat /etc/assisted/network/host0/eno3.nmconnection", "The host failed to download the ignition file from <URL>. You must ensure the host can reach the URL. Check your DNS and network configuration or update the IP address or domain used to reach the cluster. error: ignition file download failed.... no route to host", "sudo journalctl TAG=agent" ]
https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.14/html/installing_openshift_container_platform_with_the_assisted_installer/assembly_troubleshooting
3.7. Starting a Process in a Control Group
3.7. Starting a Process in a Control Group Launch processes in a manually created cgroup by running the cgexec command. The syntax for cgexec is: where: controllers is a comma-separated list of controllers, or /* to launch the process in the hierarchies associated with all available subsystems. Note that, as with the cgset command described in Section 3.5, "Setting Cgroup Parameters", if cgroups of the same name exist, the -g option creates processes in each of those groups. path_to_cgroup is the path to the cgroup relative to the hierarchy; command is the command to be executed in the cgroup; arguments are any arguments for the command. It is also possible to add the --sticky option before the command to keep any child processes in the same cgroup. If you do not set this option and the cgred service is running, child processes will be allocated to cgroups based on the settings found in /etc/cgrules.conf. The process itself, however, will remain in the cgroup in which you started it.
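For example, assuming a cgroup named group1 already exists in the cpu and memory hierarchies, the following command starts the lynx web browser in that group; adding --sticky before the command keeps any child processes in the same cgroup. The group name and command are illustrative placeholders:
# cgexec -g cpu,memory:group1 lynx http://www.redhat.com
# cgexec -g cpu,memory:group1 --sticky lynx http://www.redhat.com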
[ "cgexec -g controllers : path_to_cgroup command arguments" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/resource_management_guide/starting_a_process
Red Hat Quay Operator features
Red Hat Quay Operator features Red Hat Quay 3.12 Advanced Red Hat Quay Operator features Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/red_hat_quay/3.12/html/red_hat_quay_operator_features/index
Chapter 2. Performance Monitoring Tools
Chapter 2. Performance Monitoring Tools This chapter describes tools used to monitor guest virtual machine environments. 2.1. perf kvm You can use the perf command with the kvm option to collect and analyze guest operating system statistics from the host. The perf package provides the perf command. It is installed by running the following command: In order to use perf kvm in the host, you must have access to the /proc/modules and /proc/kallsyms files from the guest. See Procedure 2.1, "Copying /proc files from guest to host" to transfer the files into the host and run reports on the files. Procedure 2.1. Copying /proc files from guest to host Important If you directly copy the required files (for instance, using scp ) you will only copy files of zero length. This procedure describes how to first save the files in the guest to a temporary location (with the cat command), and then copy them to the host for use by perf kvm . Log in to the guest and save files Log in to the guest and save /proc/modules and /proc/kallsyms to a temporary location, /tmp : Copy the temporary files to the host Once you have logged off from the guest, run the following example scp commands to copy the saved files to the host. You should substitute your host name and TCP port if they are different: You now have two files from the guest ( guest-kallsyms and guest-modules ) on the host, ready for use by perf kvm . Recording and reporting events with perf kvm Using the files obtained in the steps, recording and reporting of events in the guest, the host, or both is now possible. Run the following example command: Note If both --host and --guest are used in the command, output will be stored in perf.data.kvm . If only --host is used, the file will be named perf.data.host . Similarly, if only --guest is used, the file will be named perf.data.guest . Pressing Ctrl-C stops recording. Reporting events The following example command uses the file obtained by the recording process, and redirects the output into a new file, analyze . View the contents of the analyze file to examine the recorded events: # cat analyze # Events: 7K cycles # # Overhead Command Shared Object Symbol # ........ ............ ................. ......................... # 95.06% vi vi [.] 0x48287 0.61% init [kernel.kallsyms] [k] intel_idle 0.36% vi libc-2.12.so [.] _wordcopy_fwd_aligned 0.32% vi libc-2.12.so [.] __strlen_sse42 0.14% swapper [kernel.kallsyms] [k] intel_idle 0.13% init [kernel.kallsyms] [k] uhci_irq 0.11% perf [kernel.kallsyms] [k] generic_exec_single 0.11% init [kernel.kallsyms] [k] tg_shares_up 0.10% qemu-kvm [kernel.kallsyms] [k] tg_shares_up [output truncated...]
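As a variation on the recording step above (shown here as an illustration, not an additional documented step), you can limit the recording to guest events by omitting --host; following the naming rules in the note, the output is then written to perf.data.guest:
# perf kvm --guest --guestkallsyms=guest-kallsyms --guestmodules=guest-modules record -a -o perf.data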
[ "yum install perf", "cat /proc/modules > /tmp/modules cat /proc/kallsyms > /tmp/kallsyms", "scp root@GuestMachine:/tmp/kallsyms guest-kallsyms scp root@GuestMachine:/tmp/modules guest-modules", "perf kvm --host --guest --guestkallsyms=guest-kallsyms --guestmodules=guest-modules record -a -o perf.data", "perf kvm --host --guest --guestmodules=guest-modules report -i perf.data.kvm --force > analyze", "cat analyze Events: 7K cycles # Overhead Command Shared Object Symbol ........ ............ ................. ...................... # 95.06% vi vi [.] 0x48287 0.61% init [kernel.kallsyms] [k] intel_idle 0.36% vi libc-2.12.so [.] _wordcopy_fwd_aligned 0.32% vi libc-2.12.so [.] __strlen_sse42 0.14% swapper [kernel.kallsyms] [k] intel_idle 0.13% init [kernel.kallsyms] [k] uhci_irq 0.11% perf [kernel.kallsyms] [k] generic_exec_single 0.11% init [kernel.kallsyms] [k] tg_shares_up 0.10% qemu-kvm [kernel.kallsyms] [k] tg_shares_up [output truncated...]" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_tuning_and_optimization_guide/chap-Virtualization_Tuning_Optimization_Guide-Monitoring_Tools
Chapter 12. Red Hat Enterprise Linux Atomic Host 7.6.6
Chapter 12. Red Hat Enterprise Linux Atomic Host 7.6.6 12.1. Atomic Host OStree update : New Tree Version: 7.6.6 (hash: 33bb37a7d207ce653eab70306d18deea7daf444b6b7f7aeadef722f96d7e8e6d) Changes since Tree Version 7.6.5 (hash: 5b1058baee886a346301af3b250e51cd6deef7344206afff638d07d5b73b34da) Updated packages : microdnf-2-8.el7 12.2. Extras Updated packages : buildah-1.9.0-1.el7 container-selinux-2.107-1.el7_6 containernetworking-plugins-0.8.1-1.el7 docker-1.13.1-102.git7f2769b.el7 oci-umount-2.5-1.el7_6 podman-1.4.4-2.el7 runc-1.0.0-64.rc8.el7 skopeo-0.1.37-1.el7 12.2.1. Container Images Updated : Red Hat Enterprise Linux 7 Init Container Image (rhel7/rhel7-init) Red Hat Enterprise Linux 7.6 Container Image (rhel7.6, rhel7, rhel7/rhel, rhel) Red Hat Enterprise Linux 7.6 Container Image for aarch64 (rhel7.6, rhel7, rhel7/rhel, rhel) Red Hat Enterprise Linux Atomic Identity Management Server Container Image (rhel7/ipa-server) Red Hat Enterprise Linux Atomic Image (rhel-atomic, rhel7-atomic, rhel7/rhel-atomic) Red Hat Enterprise Linux Atomic Net-SNMP Container Image (rhel7/net-snmp) Red Hat Enterprise Linux Atomic OpenSCAP Container Image (rhel7/openscap) Red Hat Enterprise Linux Atomic SSSD Container Image (rhel7/sssd) Red Hat Enterprise Linux Atomic Support Tools Container Image (rhel7/support-tools) Red Hat Enterprise Linux Atomic Tools Container Image (rhel7/rhel-tools) Red Hat Enterprise Linux Atomic cockpit-ws Container Image (rhel7/cockpit-ws) Red Hat Enterprise Linux Atomic etcd Container Image (rhel7/etcd) Red Hat Enterprise Linux Atomic flannel Container Image (rhel7/flannel) Red Hat Enterprise Linux Atomic open-vm-tools Container Image (rhel7/open-vm-tools) Red Hat Enterprise Linux Atomic rsyslog Container Image (rhel7/rsyslog) Red Hat Enterprise Linux Atomic sadc Container Image (rhel7/sadc) Red Hat Universal Base Image 7 Container Image (rhel7/ubi7) Red Hat Universal Base Image 7 Init Container Image (rhel7/ubi7-init) Red Hat Universal Base Image 7 Minimal Container Image (rhel7/ubi7-minimal) 12.3. New Features docker-latest no longer available The docker-latest package is no longer available beginning with RHEL Atomic 7.6.6. Only the docker package is available.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_atomic_host/7/html/release_notes/red_hat_enterprise_linux_atomic_host_7_6_6
15.23. Monitoring the Replication Topology
15.23. Monitoring the Replication Topology Use the dsconf replication monitor command to display the replication status, as well as additional information, such as replica IDs and Change State Numbers (CSN) on suppliers, consumers, and hubs: 15.23.1. Setting Credentials for Replication Monitoring in the .dsrc File By default, the dsconf replication monitor command prompts for bind DNs and passwords when authenticating to remote instances. Alternatively, you can set the bind DNs, and optionally passwords, for each server in the topology in the user's ~/.dsrc file. Example 15.1. An Example .dsrc File with Explanations of the Different Fields The following is an example ~/.dsrc file: This example uses connection1 to connection3 as keys for each entry. However, you can use any key as long as it is unique. If you run the dsconf replication monitor command, the dsconf utility connects to all servers configured in replication agreements of the instance. If the utility finds the host name in ~/.dsrc, it uses the defined credentials to authenticate to the remote server. In the example above, dsconf uses the following credentials when connecting to a server: Host name Bind DN Password server1.example.com cn=Directory Manager Prompts for the password server2.example.com cn=Directory Manager Reads password from ~/pwd.txt hub1.example.com cn=Directory Manager S3cret 15.23.2. Using Aliases in the Replication Topology Monitoring Output By default, the dsconf replication monitor command displays the host names of servers in the monitoring report. Alternatively, you can display aliases using one of the following methods: Define the aliases in the ~/.dsrc file: Define the aliases by passing the -a alias=host_name:port parameter to the dsconf replication monitor command: In both cases, the command displays the alias in the command's output:
[ "dsconf -D \"cn=Directory Manager\" ldap://supplier.example.com replication monitor Enter password for cn=Directory Manager on ldap://supplier.example.com: password Enter a bind DN for consumer.example.com:389: cn=Directory Manager Enter a password for cn=Directory Manager on consumer.example.com:389: password Supplier: server.example.com:389 -------------------------------- Replica Root: dc=example,dc=com Replica ID: 1 Replica Status: Available Max CSN: 5e3acb77001d00010000 Status For Agreement: \"example-agreement\" (consumer.example.com:389) Replica Enabled: on Update In Progress: FALSE Last Update Start: 20200205140439Z Last Update End: 20200205140440Z Number Of Changes Sent: 1:166/0 Number Of Changes Skipped: None Last Update Status: Error (0) Replica acquired successfully: Incremental update succeeded Last Init Start: 20200205133709Z Last Init End: 20200205133711Z Last Init Status: Error (0) Total update succeeded Reap Active: 0 Replication Status: In Synchronization Replication Lag Time: 00:00:00 Supplier: consumer.example.com:389 ----------------------------------- Replica Root: dc=example,dc=com Replica ID: 65535 Replica Status: Available Max CSN: 00000000000000000000", "[repl-monitor-connections] connection1 = server1.example.com:389:cn=Directory Manager:* connection2 = server2.example.com:389:cn=Directory Manager:[~/pwd.txt] connection3 = hub1.example.com:389:cn=Directory Manager:S3cret", "[repl-monitor-aliases] M1 = server1.example.com:389 M2 = server2.example.com:389", "dsconf -D \"cn=Directory Manager\" ldap://server.example.com replication monitor -a M1=server1.example.com:389 M2=server2.example.com:389", "Supplier: M1 (server1.example.com:389) Supplier: M2 (server2.example.com:389)" ]
https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/administration_guide/monitoring-the-replication-topology
Chapter 2. Monitoring
Chapter 2. Monitoring 2.1. Monitoring with GitOps dashboards You can access a graphical view of GitOps instances with Red Hat OpenShift GitOps monitoring dashboards to observe the behavior and usage of each instance across the cluster. There are three GitOps dashboards available: GitOps Overview : See an overview of all GitOps instances installed on the cluster, including the number of applications, health and sync status, application and sync activity. GitOps Components : View detailed information, such as CPU or memory, for application-controller, repo-server, server, and other GitOps components. GitOps gRPC Services : View metrics related to gRPC service activity between the various components in Red Hat OpenShift GitOps. 2.1.1. Accessing GitOps monitoring dashboards The monitoring dashboards are deployed automatically by the Operator. You can access GitOps monitoring dashboards from the Administrator perspective of the OpenShift Container Platform web console. Note Disabling or changing the content of the dashboards is not supported. Prerequisites You have access to the OpenShift Container Platform web console. The Red Hat OpenShift GitOps Operator is installed in the default namespace, openshift-gitops-operator . The cluster monitoring is enabled on the openshift-gitops-operator namespace. You have installed an Argo CD application in your defined namespace, for example, openshift-gitops . Procedure In the Administrator perspective of the web console, go to Observe Dashboards . From the Dashboard drop-down list, select the desired GitOps dashboard: GitOps (Overview) , GitOps / Components , or GitOps / gRPC Services . Optional: Choose a specific namespace, cluster, and interval from the Namespace , Cluster , and Interval drop-down lists. View the desired GitOps metrics in the GitOps dashboard. 2.2. Monitoring Argo CD instances By default, the Red Hat OpenShift GitOps Operator automatically detects an installed Argo CD instance in your defined namespace, for example, openshift-gitops , and connects it to the monitoring stack of the cluster to provide alerts for out-of-sync applications. 2.2.1. Prerequisites You have access to the cluster with cluster-admin privileges. You have access to the OpenShift Container Platform web console. You have installed the Red Hat OpenShift GitOps Operator in your cluster. You have installed an Argo CD application in your defined namespace, for example, openshift-gitops . 2.2.2. Monitoring Argo CD health using Prometheus metrics You can monitor the health status of an Argo CD application by running Prometheus metrics queries against it. Procedure In the Developer perspective of the web console, select the namespace where your Argo CD application is installed, and navigate to Observe Metrics . From the Select query drop-down list, select Custom query . To check the health status of your Argo CD application, enter the Prometheus Query Language (PromQL) query similar to the following example in the Expression field: Example sum(argocd_app_info{dest_namespace=~"<your_defined_namespace>",health_status!=""}) by (health_status) 1 1 Replace the <your_defined_namespace> variable with the actual name of your defined namespace, for example openshift-gitops . 2.3. Monitoring the GitOps Operator performance The Red Hat OpenShift GitOps Operator emits metrics about its performance. With the OpenShift monitoring stack that picks up these metrics, you can monitor and analyze the Operator's performance. 
The Operator exposes the following metrics, which you can view by using the OpenShift Container Platform web console: Table 2.1. GitOps Operator performance metrics Metric name Type Description active_argocd_instances_total Gauge The total number of active Argo CD instances currently managed by the Operator across the cluster at a given time. active_argocd_instances_by_phase Gauge The number of active Argo CD instances in a given phase, such as pending, or available. active_argocd_instance_reconciliation_count Counter The total number of reconciliations that have occurred for an instance in a given namespace at any given time. controller_runtime_reconcile_time_seconds_per_instance_bucket Counter The number of reconciliation cycles completed under given time durations for an instance. For example, controller_runtime_reconcile_time_seconds_per_instance_bucket{le="0.5"} shows the number of reconciliations that took under 0.5 seconds to complete for a given instance. controller_runtime_reconcile_time_seconds_per_instance_count Counter The total number of reconciliation cycles observed for a given instance. controller_runtime_reconcile_time_seconds_per_instance_sum Counter The total amount of time taken for the observed reconciliations for a given instance. Note Gauge is a value that can go up or down. Counter is a value that can only go up. 2.3.1. Accessing the GitOps Operator metrics You can access the Operator metrics from the Administrator perspective of the OpenShift Container Platform web console to track the performance of the Operator. Prerequisites You have access to the OpenShift Container Platform web console. The Red Hat OpenShift GitOps Operator is installed in the default openshift-gitops-operator namespace. The cluster monitoring is enabled on the openshift-gitops-operator namespace. Procedure In the Administrator perspective of the web console, go to Observe Metrics . Enter the metric in the Expression field. You can choose from the following metrics: active_argocd_instances_total active_argocd_instances_by_phase active_argocd_instance_reconciliation_count controller_runtime_reconcile_time_seconds_per_instance_bucket controller_runtime_reconcile_time_seconds_per_instance_count controller_runtime_reconcile_time_seconds_per_instance_sum (Optional): Filter the metric by its properties. For example, filter the active_argocd_instances_by_phase metric by the Available phase: Example active_argocd_instances_by_phase{phase="Available"} (Optional): Click Add query to enter multiple queries. Click Run queries to enable and observe the GitOps Operator metrics. 2.3.2. Additional resources Installing Red Hat OpenShift GitOps Operator in web console 2.4. Monitoring health information for application resources and deployments The Red Hat OpenShift GitOps Environments page in the Developer perspective of the OpenShift Container Platform web console shows a list of the successful deployments of the application environments, along with links to the revision for each deployment. The Application environments page in the Developer perspective of the OpenShift Container Platform web console displays the health status of the application resources, such as routes, synchronization status, deployment configuration, and deployment history. The environments pages in the Developer perspective of the OpenShift Container Platform web console are decoupled from the Red Hat OpenShift GitOps Application Manager command-line interface (CLI), kam . 
You do not have to use kam to generate Application Environment manifests for the environments to show up in the Developer perspective of the OpenShift Container Platform web console. You can use your own manifests, but the environments must still be represented by namespaces. In addition, specific labels and annotations are still needed. 2.4.1. Settings for environment labels and annotations This section provides reference settings for environment labels and annotations required to display an environment application in the Environments page, in the Developer perspective of the OpenShift Container Platform web console. Environment labels The environment application manifest must contain labels.openshift.gitops/environment and destination.namespace fields. You must set identical values for the <environment_name> variable and the name of the environment application manifest. Specification of the environment application manifest spec: labels: openshift.gitops/environment: <environment_name> destination: namespace: <environment_name> # ... Example of an environment application manifest apiVersion: argoproj.io/v1beta1 kind: Application metadata: name: dev-env 1 namespace: openshift-gitops spec: labels: openshift.gitops/environment: dev-env destination: namespace: dev-env # ... 1 The name of the environment application manifest. The value set is the same as the value of the <environment_name> variable. Environment annotations The environment namespace manifest must contain the annotations.app.openshift.io/vcs-uri and annotations.app.openshift.io/vcs-ref fields to specify the version controller code source of the application. You must set identical values for the <environment_name> variable and the name of the environment namespace manifest. Specification of the environment namespace manifest apiVersion: v1 kind: Namespace metadata: annotations: app.openshift.io/vcs-uri: <application_source_url> app.openshift.io/vcs-ref: <branch_reference> name: <environment_name> 1 # ... 1 The name of the environment namespace manifest. The value set is the same as the value of the <environment_name> variable. Example of an environment namespace manifest apiVersion: v1 kind: Namespace metadata: annotations: app.openshift.io/vcs-uri: https://example.com/<your_domain>/<your_gitops.git> app.openshift.io/vcs-ref: main labels: argocd.argoproj.io/managed-by: openshift-gitops name: dev-env # ... 2.4.2. Checking health information The Red Hat OpenShift GitOps Operator will install the GitOps backend service in the openshift-gitops namespace. Prerequisites The Red Hat OpenShift GitOps Operator is installed from OperatorHub . Ensure that your applications are synchronized by Argo CD. Procedure Click Environments under the Developer perspective. The Environments page shows the list of applications along with their Environment status . Hover over the icons under the Environment status column to see the synchronization status of all the environments. Click the application name from the list to view the details of a specific application. In the Application environments page, if the Resources section under the Overview tab displays icons, hover over the icons to get status details. A broken heart indicates that resource issues have degraded the application's performance. A yellow yield sign indicates that resource issues have delayed data about the application's health. To view the deployment history of an application, click the Deployment History tab. 
The page includes details such as the Last deployment , Description (commit message), Environment , Author , and Revision . 2.5. Monitoring Argo CD custom resource workloads With Red Hat OpenShift GitOps, you can monitor the availability of Argo CD custom resource workloads for specific Argo CD instances. By monitoring Argo CD custom resource workloads, you have the latest information about the state of your Argo CD instances by enabling alerts for them. When the component workload pods such as application-controller, repo-server, or server of the corresponding Argo CD instance are unable to come up for certain reasons and there is a drift between the number of ready replicas and the number of desired replicas for a certain period of time, the Operator then triggers the alerts. You can enable and disable the setting for monitoring Argo CD custom resource workloads. 2.5.1. Prerequisites You have access to the cluster as a user with the cluster-admin role. Red Hat OpenShift GitOps is installed in your cluster. The monitoring stack is configured in your cluster in the openshift-monitoring project. In addition, the Argo CD instance is in a namespace that you can monitor through Prometheus. The kube-state-metrics service is running on your cluster. Optional: If you are enabling monitoring for an Argo CD instance already present in a user-defined project, ensure that the monitoring is enabled for user-defined projects in your cluster. Note If you want to enable monitoring for an Argo CD instance in a namespace that is not watched by the default openshift-monitoring stack, for example, any namespace that does not start with openshift-* , then you must enable user workload monitoring in your cluster. This action enables the monitoring stack to pick up the created PrometheusRule. 2.5.2. Enabling Monitoring for Argo CD custom resource workloads By default, the monitoring configuration for Argo CD custom resource workloads is set to false . With Red Hat OpenShift GitOps, you can enable workload monitoring for specific Argo CD instances. As a result, the Operator creates a PrometheusRule object that contains alert rules for all the workloads managed by the specific Argo CD instances. These alert rules trigger the firing of an alert when the replica count of the corresponding component has drifted from the desired state for a certain amount of time. The Operator will not overwrite the changes made to the PrometheusRule object by the users. Procedure Set the .spec.monitoring.enabled field value to true on a given Argo CD instance: Example Argo CD custom resource apiVersion: argoproj.io/v1beta1 kind: ArgoCD metadata: name: example-argocd labels: example: repo spec: # ... monitoring: enabled: true # ... Verify whether an alert rule is included in the PrometheusRule created by the Operator: Example alert rule apiVersion: monitoring.coreos.com/v1 kind: PrometheusRule metadata: name: argocd-component-status-alert namespace: openshift-gitops spec: groups: - name: ArgoCDComponentStatus rules: # ... 
- alert: ApplicationSetControllerNotReady 1 annotations: message: >- applicationSet controller deployment for Argo CD instance in namespace "default" is not running expr: >- kube_statefulset_status_replicas{statefulset="openshift-gitops-application-controller statefulset", namespace="openshift-gitops"} != kube_statefulset_status_replicas_ready{statefulset="openshift-gitops-application-controller statefulset", namespace="openshift-gitops"} for: 1m labels: severity: critical 1 Alert rule in the PrometheusRule that checks whether the workloads created by the Argo CD instances are running as expected. 2.5.3. Disabling Monitoring for Argo CD custom resource workloads You can disable workload monitoring for specific Argo CD instances. Disabling workload monitoring deletes the created PrometheusRule. Procedure Set the .spec.monitoring.enabled field value to false on a given Argo CD instance: Example Argo CD custom resource apiVersion: argoproj.io/v1beta1 kind: ArgoCD metadata: name: example-argocd labels: example: repo spec: # ... monitoring: enabled: false # ... 2.5.4. Additional resources Enabling monitoring for user-defined projects
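If you prefer the command line to editing the Argo CD custom resource manifest directly, the same monitoring toggle can be applied with oc. This is a minimal sketch, not part of the documented procedure; it assumes an Argo CD instance named example-argocd in the openshift-gitops namespace, as in the examples above, and a user with permission to edit the ArgoCD resource and to read PrometheusRule objects:
# Enable workload monitoring for the instance
oc patch argocd example-argocd -n openshift-gitops --type merge -p '{"spec":{"monitoring":{"enabled":true}}}'
# The Operator should now have created the alert rules for the instance
oc get prometheusrule -n openshift-gitops
# Disable monitoring again; the created PrometheusRule is deleted
oc patch argocd example-argocd -n openshift-gitops --type merge -p '{"spec":{"monitoring":{"enabled":false}}}'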
[ "sum(argocd_app_info{dest_namespace=~\"<your_defined_namespace>\",health_status!=\"\"}) by (health_status) 1", "active_argocd_instances_by_phase{phase=\"Available\"}", "spec: labels: openshift.gitops/environment: <environment_name> destination: namespace: <environment_name>", "apiVersion: argoproj.io/v1beta1 kind: Application metadata: name: dev-env 1 namespace: openshift-gitops spec: labels: openshift.gitops/environment: dev-env destination: namespace: dev-env", "apiVersion: v1 kind: Namespace metadata: annotations: app.openshift.io/vcs-uri: <application_source_url> app.openshift.io/vcs-ref: <branch_reference> name: <environment_name> 1", "apiVersion: v1 kind: Namespace metadata: annotations: app.openshift.io/vcs-uri: https://example.com/<your_domain>/<your_gitops.git> app.openshift.io/vcs-ref: main labels: argocd.argoproj.io/managed-by: openshift-gitops name: dev-env", "apiVersion: argoproj.io/v1beta1 kind: ArgoCD metadata: name: example-argocd labels: example: repo spec: # monitoring: enabled: true #", "apiVersion: monitoring.coreos.com/v1 kind: PrometheusRule metadata: name: argocd-component-status-alert namespace: openshift-gitops spec: groups: - name: ArgoCDComponentStatus rules: # - alert: ApplicationSetControllerNotReady 1 annotations: message: >- applicationSet controller deployment for Argo CD instance in namespace \"default\" is not running expr: >- kube_statefulset_status_replicas{statefulset=\"openshift-gitops-application-controller statefulset\", namespace=\"openshift-gitops\"} != kube_statefulset_status_replicas_ready{statefulset=\"openshift-gitops-application-controller statefulset\", namespace=\"openshift-gitops\"} for: 1m labels: severity: critical", "apiVersion: argoproj.io/v1beta1 kind: ArgoCD metadata: name: example-argocd labels: example: repo spec: # monitoring: enabled: false #" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_gitops/1.11/html/observability/monitoring
Service Registry User Guide
Service Registry User Guide Red Hat Integration 2023.q4 Manage schemas and APIs in Service Registry 2.5 Red Hat Integration Documentation Team
null
https://docs.redhat.com/en/documentation/red_hat_integration/2023.q4/html/service_registry_user_guide/index
20.40. Guest Virtual Machine CPU Model Configuration
20.40. Guest Virtual Machine CPU Model Configuration 20.40.1. Introduction Every hypervisor has its own policy for what a guest virtual machine will see for its CPUs by default. Whereas some hypervisors decide which CPU host physical machine features will be available for the guest virtual machine, QEMU/KVM presents the guest virtual machine with a generic model named qemu32 or qemu64 . Other hypervisors perform more advanced filtering, classifying all physical CPUs into a handful of groups, with one baseline CPU model for each group that is presented to the guest virtual machine. Such behavior enables the safe migration of guest virtual machines between host physical machines, provided they all have physical CPUs that classify into the same group. libvirt does not typically enforce policy itself; rather, it provides the mechanism on which the higher layers define their own required policy. Understanding how to obtain CPU model information and define a suitable guest virtual machine CPU model is critical to ensure guest virtual machine migration is successful between host physical machines. Note that a hypervisor can only emulate features that it is aware of, and features that were created after the hypervisor was released may not be emulated. 20.40.2. Learning about the Host Physical Machine CPU Model The virsh capabilities command displays an XML document describing the capabilities of the hypervisor connection and host physical machine. The XML schema displayed has been extended to provide information about the host physical machine CPU model. One of the challenges in describing a CPU model is that every architecture has a different approach to exposing its capabilities. QEMU/KVM and libvirt use a scheme that combines a CPU model name string with a set of named flags. It is not practical to have a database listing all known CPU models, so libvirt has a small list of baseline CPU model names. It chooses the one that shares the greatest number of CPUID bits with the actual host physical machine CPU and then lists the remaining bits as named features. Notice that libvirt does not display which features the baseline CPU contains. This might seem like a flaw at first, but as will be explained in this section, it is not actually necessary to know this information. 20.40.3. Determining Support for VFIO IOMMU Devices Use the virsh domcapabilities command to determine support for VFIO. See the following example output: # virsh domcapabilities [...output truncated...] <enum name='pciBackend'> <value>default</value> <value>vfio</value> [...output truncated...] Figure 20.3. Determining support for VFIO 20.40.4. Determining a Compatible CPU Model to Suit a Pool of Host Physical Machines Now that it is possible to find out what CPU capabilities a single host physical machine has, the next step is to determine what CPU capabilities are best to expose to the guest virtual machine. If it is known that the guest virtual machine will never need to be migrated to another host physical machine, the host physical machine CPU model can be passed straight through unmodified. A virtualized data center may have a set of configurations that can guarantee all servers will have 100% identical CPUs. Again, the host physical machine CPU model can be passed straight through unmodified. The more common case, though, is where there is variation in CPUs between host physical machines. In this mixed CPU environment, the lowest common denominator CPU must be determined.
This is not entirely straightforward, so libvirt provides an API for exactly this task. If libvirt is provided a list of XML documents, each describing a CPU model for a host physical machine, libvirt will internally convert these to CPUID masks, calculate their intersection, and convert the CPUID mask result back into an XML CPU description. Here is an example of what libvirt reports as the capabilities on a basic workstation, when the virsh capabilities is executed: <capabilities> <host> <cpu> <arch>i686</arch> <model>pentium3</model> <topology sockets='1' cores='2' threads='1'/> <feature name='lahf_lm'/> <feature name='lm'/> <feature name='xtpr'/> <feature name='cx16'/> <feature name='ssse3'/> <feature name='tm2'/> <feature name='est'/> <feature name='vmx'/> <feature name='ds_cpl'/> <feature name='monitor'/> <feature name='pni'/> <feature name='pbe'/> <feature name='tm'/> <feature name='ht'/> <feature name='ss'/> <feature name='sse2'/> <feature name='acpi'/> <feature name='ds'/> <feature name='clflush'/> <feature name='apic'/> </cpu> </host> </capabilities> Figure 20.4. Pulling host physical machine's CPU model information Now compare that to a different server, with the same virsh capabilities command: <capabilities> <host> <cpu> <arch>x86_64</arch> <model>phenom</model> <topology sockets='2' cores='4' threads='1'/> <feature name='osvw'/> <feature name='3dnowprefetch'/> <feature name='misalignsse'/> <feature name='sse4a'/> <feature name='abm'/> <feature name='cr8legacy'/> <feature name='extapic'/> <feature name='cmp_legacy'/> <feature name='lahf_lm'/> <feature name='rdtscp'/> <feature name='pdpe1gb'/> <feature name='popcnt'/> <feature name='cx16'/> <feature name='ht'/> <feature name='vme'/> </cpu> ...snip... Figure 20.5. Generate CPU description from a random server To see if this CPU description is compatible with the workstation CPU description, use the virsh cpu-compare command. The reduced content was stored in a file named virsh-caps-workstation-cpu-only.xml and the virsh cpu-compare command can be executed on this file: As seen in this output, libvirt is correctly reporting that the CPUs are not strictly compatible. This is because there are several features in the server CPU that are missing in the client CPU. To be able to migrate between the client and the server, it will be necessary to open the XML file and comment out some features. To determine which features need to be removed, run the virsh cpu-baseline command, on the both-cpus.xml which contains the CPU information for both machines. Running # virsh cpu-baseline both-cpus.xml results in: <cpu match='exact'> <model>pentium3</model> <feature policy='require' name='lahf_lm'/> <feature policy='require' name='lm'/> <feature policy='require' name='cx16'/> <feature policy='require' name='monitor'/> <feature policy='require' name='pni'/> <feature policy='require' name='ht'/> <feature policy='require' name='sse2'/> <feature policy='require' name='clflush'/> <feature policy='require' name='apic'/> </cpu> Figure 20.6. Composite CPU baseline This composite file shows which elements are in common. Everything that is not in common should be commented out.
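As a worked sketch of the workflow above, the <cpu> element can be captured from each host physical machine, combined into a single file, and passed to virsh cpu-baseline. The file names are illustrative, and the sketch assumes the xmllint utility from libxml2 is installed; virsh cpu-baseline extracts every <cpu> element it finds in the file it is given:
# On each host physical machine, capture only the CPU description
virsh capabilities | xmllint --xpath '/capabilities/host/cpu' - > cpu-$(hostname).xml
# Combine the fragments from both hosts into one file
cat cpu-workstation.xml cpu-server.xml > both-cpus.xml
# Compute the common baseline CPU and verify it against the local host
virsh cpu-baseline both-cpus.xml > guest-cpu.xml
virsh cpu-compare guest-cpu.xml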
[ "virsh domcapabilities [...output truncated...] <enum name='pciBackend'> <value>default</value> <value>vfio</value> [...output truncated...]", "<capabilities> <host> <cpu> <arch>i686</arch> <model>pentium3</model> <topology sockets='1' cores='2' threads='1'/> <feature name='lahf_lm'/> <feature name='lm'/> <feature name='xtpr'/> <feature name='cx16'/> <feature name='ssse3'/> <feature name='tm2'/> <feature name='est'/> <feature name='vmx'/> <feature name='ds_cpl'/> <feature name='monitor'/> <feature name='pni'/> <feature name='pbe'/> <feature name='tm'/> <feature name='ht'/> <feature name='ss'/> <feature name='sse2'/> <feature name='acpi'/> <feature name='ds'/> <feature name='clflush'/> <feature name='apic'/> </cpu> </host> </capabilities>", "<capabilities> <host> <cpu> <arch>x86_64</arch> <model>phenom</model> <topology sockets='2' cores='4' threads='1'/> <feature name='osvw'/> <feature name='3dnowprefetch'/> <feature name='misalignsse'/> <feature name='sse4a'/> <feature name='abm'/> <feature name='cr8legacy'/> <feature name='extapic'/> <feature name='cmp_legacy'/> <feature name='lahf_lm'/> <feature name='rdtscp'/> <feature name='pdpe1gb'/> <feature name='popcnt'/> <feature name='cx16'/> <feature name='ht'/> <feature name='vme'/> </cpu> ...snip", "virsh cpu-compare virsh-caps-workstation-cpu-only.xml Host physical machine CPU is a superset of CPU described in virsh-caps-workstation-cpu-only.xml", "<cpu match='exact'> <model>pentium3</model> <feature policy='require' name='lahf_lm'/> <feature policy='require' name='lm'/> <feature policy='require' name='cx16'/> <feature policy='require' name='monitor'/> <feature policy='require' name='pni'/> <feature policy='require' name='ht'/> <feature policy='require' name='sse2'/> <feature policy='require' name='clflush'/> <feature policy='require' name='apic'/> </cpu>" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_deployment_and_administration_guide/sect-managing_guest_virtual_machines_with_virsh-guest_virtual_machine_cpu_model_configuration
Chapter 2. Configuring private connections
Chapter 2. Configuring private connections 2.1. Configuring private connections for AWS 2.1.1. Understanding AWS cloud infrastructure access Note AWS cloud infrastructure access does not apply to the Customer Cloud Subscription (CCS) infrastructure type that is chosen when you create a cluster because CCS clusters are deployed onto your account. Amazon Web Services (AWS) infrastructure access permits Customer Portal Organization Administrators and cluster owners to enable AWS Identity and Access Management (IAM) users to have federated access to the AWS Management Console for their OpenShift Dedicated cluster. AWS access can be granted for customer AWS users, and private cluster access can be implemented to suit the needs of your OpenShift Dedicated environment. Get started with configuring AWS infrastructure access for your OpenShift Dedicated cluster by creating an AWS user and account and providing that user with access to the OpenShift Dedicated AWS account. After you have access to the OpenShift Dedicated AWS account, use one or more of the following methods to establish a private connection to your cluster: Configuring AWS VPC peering: Enable VPC peering to route network traffic between two private IP addresses. Configuring AWS VPN: Establish a Virtual Private Network to securely connect your private network to your Amazon Virtual Private Cloud. Configuring AWS Direct Connect: Configure AWS Direct Connect to establish a dedicated network connection between your private network and an AWS Direct Connect location. After configuring your cloud infrastructure access, learn more about Configuring a private cluster. 2.1.2. Configuring AWS infrastructure access Amazon Web Services (AWS) infrastructure access allows Customer Portal Organization Administrators and cluster owners to enable AWS Identity and Access Management (IAM) users to have federated access to the AWS Management Console for their OpenShift Dedicated cluster. Administrators can select between Network Management or Read-only access options. Prerequisites An AWS account with IAM permissions. Procedure Log in to your AWS account. If necessary, you can create a new AWS account by following the AWS documentation . Create an IAM user with STS:AllowAssumeRole permissions within the AWS account. Open the IAM dashboard of the AWS Management Console. In the Policies section, click Create Policy . Select the JSON tab and replace the existing text with the following: { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": "sts:AssumeRole", "Resource": "*" } ] } Click Next: Tags . Optional: Add tags. Click Next: Review . Provide an appropriate name and description, then click Create Policy . In the Users section, click Add user . Provide an appropriate user name. Select AWS Management Console access as the AWS access type. Adjust the password requirements as necessary for your organization, then click Next: Permissions . Click the Attach existing policies directly option. Search for and check the policy created in the previous steps. Note It is not recommended to set a permissions boundary. Click Next: Tags , then click Next: Review . Confirm the configuration is correct. Click Create user . A success page appears. Gather the IAM user's Amazon Resource Name (ARN). The ARN will have the following format: arn:aws:iam::000111222333:user/username . Click Close . Open OpenShift Cluster Manager in your browser and select the cluster you want to allow AWS infrastructure access. Select the Access control tab, and scroll to the AWS Infrastructure Access section.
Paste the AWS IAM ARN and select Network Management or Read-only permissions, then click Grant role . Copy the AWS OSD console URL to your clipboard. Sign in to your AWS account with your Account ID or alias, IAM user name, and password. In a new browser tab, paste the AWS OSD Console URL that will be used to route to the AWS Switch Role page. Your account number and role will be filled in already. Choose a display name if necessary, then click Switch Role . Verification You now see VPC under Recently visited services . 2.1.3. Configuring AWS VPC peering A Virtual Private Cloud (VPC) peering connection is a networking connection between two VPCs that enables you to route traffic between them using private IPv4 addresses or IPv6 addresses. You can configure an Amazon Web Services (AWS) VPC containing an OpenShift Dedicated cluster to peer with another AWS VPC network. Warning Before you attempt to uninstall a cluster, you must remove any VPC peering connections from the cluster's VPC. Failure to do so might result in a cluster not completing the uninstall process. AWS supports inter-region VPC peering between all commercial regions excluding China . Prerequisites Gather the following information about the Customer VPC that is required to initiate the peering request: Customer AWS account number Customer VPC ID Customer VPC Region Customer VPC CIDR Check the CIDR block used by the OpenShift Dedicated Cluster VPC. If it overlaps or matches the CIDR block for the Customer VPC, then peering between these two VPCs is not possible; see the Amazon VPC Unsupported VPC peering configurations documentation for details. If the CIDR blocks do not overlap, you can proceed with the procedure. Procedure Initiate the VPC peering request . Accept the VPC peering request . Update your Route tables for the VPC peering connection . Additional resources For more information and troubleshooting help, see the AWS VPC guide. 2.1.4. Configuring an AWS VPN You can configure an Amazon Web Services (AWS) OpenShift Dedicated cluster to use a customer's on-site hardware Virtual Private Network (VPN) device. By default, instances that you launch into an AWS Virtual Private Cloud (VPC) cannot communicate with your own (remote) network. You can enable access to your remote network from your VPC by creating an AWS Site-to-Site VPN connection, and configuring routing to pass traffic through the connection. Note AWS VPN does not currently provide a managed option to apply NAT to VPN traffic. See the AWS Knowledge Center for more details. Routing all traffic, for example 0.0.0.0/0 , through a private connection is not supported. This requires deleting the internet gateway, which disables SRE management traffic. Prerequisites Hardware VPN gateway device model and software version, for example Cisco ASA running version 8.3. See the AWS documentation to confirm whether your gateway device is supported by AWS. Public, static IP address for the VPN gateway device. BGP or static routing: if BGP, the ASN is required. If static routing, you must configure at least one static route. Optional: IP and port/protocol of a reachable service to test the VPN connection. Procedure Create a customer gateway to configure the VPN connection. If you do not already have a Virtual Private Gateway attached to the intended VPC, create and attach a Virtual Private Gateway. Configure routing and enable VPN route propagation . Update your security group . Establish the Site-to-Site VPN connection . 
Note Note the VPC subnet information, which you must add to your configuration as the remote network. Additional resources For more information and troubleshooting help, see the AWS VPN guide. 2.1.5. Configuring AWS Direct Connect Amazon Web Services (AWS) Direct Connect requires a hosted Virtual Interface (VIF) connected to a Direct Connect Gateway (DXGateway), which is in turn associated to a Virtual Gateway (VGW) or a Transit Gateway in order to access a remote Virtual Private Cloud (VPC) in the same or another account. If you do not have an existing DXGateway, the typical process involves creating the hosted VIF, with the DXGateway and VGW being created in your AWS account. If you have an existing DXGateway connected to one or more existing VGWs, the process involves your AWS account sending an Association Proposal to the DXGateway owner. The DXGateway owner must ensure that the proposed CIDR will not conflict with any other VGWs they have associated. Prerequisites Confirm the CIDR range of the OpenShift Dedicated VPC will not conflict with any other VGWs you have associated. Gather the following information: The Direct Connect Gateway ID. The AWS Account ID associated with the virtual interface. The BGP ASN assigned for the DXGateway. Optional: the Amazon default ASN may also be used. Procedure Create a VIF or view your existing VIFs to determine the type of direct connection you need to create. Create your gateway. If the Direct Connect VIF type is Private , create a virtual private gateway . If the Direct Connect VIF is Public , create a Direct Connect gateway . If you have an existing gateway you want to use, create an association proposal and send the proposal to the DXGateway owner for approval. Warning When connecting to an existing DXGateway, you are responsible for the costs . Additional resources For more information and troubleshooting help, see the AWS Direct Connect guide. 2.2. Configuring a private cluster An OpenShift Dedicated cluster can be made private so that internal applications can be hosted inside a corporate network. In addition, private clusters can be configured to have only internal API endpoints for increased security. OpenShift Dedicated administrators can choose between public and private cluster configuration from within OpenShift Cluster Manager . Privacy settings can be configured during cluster creation or after a cluster is established. 2.2.1. Enabling a private cluster during cluster creation You can enable private cluster settings when creating a new cluster. Prerequisites The following private connections must be configured to allow private access: VPC Peering Cloud VPN DirectConnect (AWS only) TransitGateway (AWS only) Cloud Interconnect (GCP only) Procedure Log in to OpenShift Cluster Manager . Click Create cluster OpenShift Dedicated Create cluster . Configure your cluster details. When selecting your preferred network configuration, select Advanced . Select Private . Warning When set to Private , you cannot access your cluster unless you have configured the private connections in your cloud provider as outlined in the prerequisites. Click Create cluster . The cluster creation process begins and takes about 30-40 minutes to complete. Verification The Installing cluster heading, under the Overview tab, indicates that the cluster is installing and you can view the installation logs from this heading. The Status indicator under the Details heading indicates when your cluster is Ready for use. 2.2.2. 
Enabling an existing cluster to be private After a cluster has been created, you can later enable the cluster to be private. Prerequisites The following private connections must be configured to allow private access: VPC Peering Cloud VPN DirectConnect (AWS only) TransitGateway (AWS only) Cloud Interconnect (GCP only) Procedure Log in to OpenShift Cluster Manager . Select the public cluster you would like to make private. On the Networking tab, select Make API private under Control Plane API endpoint . Warning When set to Private , you cannot access your cluster unless you have configured the private connections in your cloud provider as outlined in the prerequisites. Click Change settings . Note Transitioning your cluster between private and public can take several minutes to complete. 2.2.3. Enabling an existing private cluster to be public After a private cluster has been created, you can later enable the cluster to be public. Procedure Log in to OpenShift Cluster Manager . Select the private cluster you would like to make public. On the Networking tab, deselect Make API private under Control Plane API endpoint . Click Change settings . Note Transitioning your cluster between private and public can take several minutes to complete.
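One rough way to confirm the effect of switching the API endpoint between private and public is to resolve and probe it from hosts on both sides of the private connection. This sketch is not part of the documented procedure and assumes the usual api.<cluster_name>.<base_domain>:6443 endpoint pattern; substitute your cluster's values:
# From the public internet, a private API endpoint should resolve to private
# addresses or fail to answer
dig +short api.<cluster_name>.<base_domain>
# From a host reached through VPC peering, VPN, or Direct Connect, the API
# health endpoint should respond
curl -sk https://api.<cluster_name>.<base_domain>:6443/healthz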
[ "{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Action\": \"sts:AssumeRole\", \"Resource\": \"*\" } ] }" ]
https://docs.redhat.com/en/documentation/openshift_dedicated/4/html/cluster_administration/configuring-private-connections
14.2. Attaching and Updating a Device with virsh
14.2. Attaching and Updating a Device with virsh For information on attaching storage devices refer to Section 13.3.1, "Adding File-based Storage to a Guest". Procedure 14.1. Hot plugging USB devices for use by the guest virtual machine The following procedure demonstrates how to attach USB devices to the guest virtual machine. This can be done while the guest virtual machine is running as a hotplug procedure or it can be done while the guest is shut off. The device you want to emulate needs to be attached to the host physical machine. Locate the USB device you want to attach with the following command: Create an XML file and give it a logical name ( usb_device.xml , for example). Make sure you copy the vendor and product IDs exactly as they were displayed in your search. <hostdev mode='subsystem' type='usb' managed='yes'> <source> <vendor id='0x17ef'/> <product id='0x480f'/> </source> </hostdev> ... Figure 14.1. USB Devices XML Snippet Attach the device with the following command: In this example [rhel6] is the name of your guest virtual machine and [usb_device.xml] is the file you created in the previous step. If you want the change to take effect on the next reboot, use the --config option. If you want this change to be persistent, use the --persistent option. If you want the change to take effect on the current domain, use the --current option. See the Virsh man page for additional information. If you want to detach the device (hot unplug), perform the following command: In this example [rhel6] is the name of your guest virtual machine and [usb_device.xml] is the file you attached in the previous step.
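Putting the procedure together, the following sketch shows one way to script the hot plug from end to end. The guest name and the vendor and product IDs are the ones used in the example above; substitute the values that lsusb reports on your host physical machine:
# Find the vendor and product IDs of the device on the host
lsusb | grep -i webcam
# Write the hostdev definition using the IDs from the lsusb output
cat > usb_device.xml <<'EOF'
<hostdev mode='subsystem' type='usb' managed='yes'>
  <source>
    <vendor id='0x17ef'/>
    <product id='0x480f'/>
  </source>
</hostdev>
EOF
# Attach the device to the running guest and keep it across reboots
virsh attach-device rhel6 --file usb_device.xml --persistent
# Confirm the guest now carries the hostdev entry
virsh dumpxml rhel6 | grep -A4 '<hostdev'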
[ "lsusb -v idVendor 0x17ef Lenovo idProduct 0x480f Integrated Webcam [R5U877]", "<hostdev mode='subsystem' type='usb' managed='yes'> <source> <vendor id='0x17ef'/> <product id='0x480f'/> </source> </hostdev>", "virsh attach-device rhel6 --file usb_device.xml --config", "virsh detach-device rhel6 --file usb_device.xml" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/sect-Managing_guest_virtual_machines_with_virsh-Attaching_and_updating_a_device_with_virsh
9.12. Upgrading an Existing System
9.12. Upgrading an Existing System Important The following sections only apply to upgrading Red Hat Enterprise Linux between minor versions, for example, upgrading Red Hat Enterprise Linux 6.4 to Red Hat Enterprise Linux 6.5 or higher. This approach is not supported for upgrades between major versions, for example, upgrading Red Hat Enterprise Linux 6 to Red Hat Enterprise Linux 7. In-place upgrades between major versions of Red Hat Enterprise Linux can be done, with certain limitations, using the Red Hat Upgrade Tool and Preupgrade Assistant tools. See Chapter 37, Upgrading Your Current System for more information. The installation system automatically detects any existing installation of Red Hat Enterprise Linux. The upgrade process updates the existing system software with new versions, but does not remove any data from users' home directories. The existing partition structure on your hard drives does not change. Your system configuration changes only if a package upgrade demands it. Most package upgrades do not change system configuration, but rather install an additional configuration file for you to examine later. Note that the installation medium that you are using might not contain all the software packages that you need to upgrade your computer. 9.12.1. The Upgrade Dialog If your system contains a Red Hat Enterprise Linux installation, a dialog appears asking whether you want to upgrade that installation. To perform an upgrade of an existing system, choose the appropriate installation from the drop-down list and select Next . Figure 9.35. The Upgrade Dialog Note Software you have installed manually on your existing Red Hat Enterprise Linux system may behave differently after an upgrade. You may need to manually reinstall or recompile this software after an upgrade to ensure it performs correctly on the updated system.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/sn-upgrading-system-x86
6.5. Resource Groups
6.5. Resource Groups One of the most common elements of a cluster is a set of resources that need to be located together, start sequentially, and stop in the reverse order. To simplify this configuration, Pacemaker supports the concept of groups. You create a resource group with the following command, specifying the resources to include in the group. If the group does not exist, this command creates the group. If the group exists, this command adds additional resources to the group. The resources will start in the order you specify them with this command, and will stop in the reverse order of their starting order. You can use the --before and --after options of this command to specify the position of the added resources relative to a resource that already exists in the group. You can also add a new resource to an existing group when you create the resource, using the following command. The resource you create is added to the group named group_name . You remove a resource from a group with the following command. If there are no resources in the group, this command removes the group itself. The following command lists all currently configured resource groups. The following example creates a resource group named shortcut that contains the existing resources IPaddr and Email . There is no limit to the number of resources a group can contain. The fundamental properties of a group are as follows. Resources are started in the order in which you specify them (in this example, IPaddr first, then Email ). Resources are stopped in the reverse order in which you specify them. ( Email first, then IPaddr ). If a resource in the group cannot run anywhere, then no resource specified after that resource is allowed to run. If IPaddr cannot run anywhere, neither can Email . If Email cannot run anywhere, however, this does not affect IPaddr in any way. Obviously as the group grows bigger, the reduced configuration effort of creating resource groups can become significant. 6.5.1. Group Options A resource group inherits the following options from the resources that it contains: priority , target-role , is-managed . For information on resource options, see Table 6.3, "Resource Meta Options" . 6.5.2. Group Stickiness Stickiness, the measure of how much a resource wants to stay where it is, is additive in groups. Every active resource of the group will contribute its stickiness value to the group's total. So if the default resource-stickiness is 100, and a group has seven members, five of which are active, then the group as a whole will prefer its current location with a score of 500.
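As a concrete sketch of the commands described above, the shortcut group from the example could be built from scratch as follows. The resource agents, IP address, and device names are illustrative assumptions, not part of the original example:
# Create the first resource and the group in one step; order of addition is start order
pcs resource create IPaddr ocf:heartbeat:IPaddr2 ip=192.168.0.99 cidr_netmask=24 --group shortcut
# Add a second resource to the existing group at creation time
pcs resource create Email systemd:postfix --group shortcut
# An existing resource can be placed at a specific position within the group
pcs resource create MailFS ocf:heartbeat:Filesystem device=/dev/vdb1 directory=/var/spool/mail fstype=ext4
pcs resource group add shortcut MailFS --before Email
# Confirm the group membership and ordering
pcs resource group list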
[ "pcs resource group add group_name resource_id [ resource_id ] ... [ resource_id ] [--before resource_id | --after resource_id ]", "pcs resource create resource_id standard:provider:type|type [resource_options] [op operation_action operation_options ] --group group_name", "pcs resource group remove group_name resource_id", "pcs resource group list", "pcs resource group add shortcut IPaddr Email" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/high_availability_add-on_reference/s1-resourcegroups-haar
Chapter 64. Deprecated Adapters
Chapter 64. Deprecated Adapters The following adapters continue to be supported until the end of life of Red Hat Enterprise Linux 7 but will likely not be supported in future major releases of this product and are not recommended for new deployments. Other adapters from the mentioned drivers that are not listed here remain unchanged. PCI IDs are in the format of vendor:device:subvendor:subdevice . If the subdevice or subvendor:subdevice entry is not listed, devices with any values of such missing entries have been deprecated. To check the PCI IDs of the hardware on your system, run the lspci -nn command. The following adapters from the aacraid driver have been deprecated: PERC 2/Si (Iguana/PERC2Si), PCI ID 0x1028:0x0001:0x1028:0x0001 PERC 3/Di (Opal/PERC3Di), PCI ID 0x1028:0x0002:0x1028:0x0002 PERC 3/Si (SlimFast/PERC3Si), PCI ID 0x1028:0x0003:0x1028:0x0003 PERC 3/Di (Iguana FlipChip/PERC3DiF), PCI ID 0x1028:0x0004:0x1028:0x00d0 PERC 3/Di (Viper/PERC3DiV), PCI ID 0x1028:0x0002:0x1028:0x00d1 PERC 3/Di (Lexus/PERC3DiL), PCI ID 0x1028:0x0002:0x1028:0x00d9 PERC 3/Di (Jaguar/PERC3DiJ), PCI ID 0x1028:0x000a:0x1028:0x0106 PERC 3/Di (Dagger/PERC3DiD), PCI ID 0x1028:0x000a:0x1028:0x011b PERC 3/Di (Boxster/PERC3DiB), PCI ID 0x1028:0x000a:0x1028:0x0121 catapult, PCI ID 0x9005:0x0283:0x9005:0x0283 tomcat, PCI ID 0x9005:0x0284:0x9005:0x0284 Adaptec 2120S (Crusader), PCI ID 0x9005:0x0285:0x9005:0x0286 Adaptec 2200S (Vulcan), PCI ID 0x9005:0x0285:0x9005:0x0285 Adaptec 2200S (Vulcan-2m), PCI ID 0x9005:0x0285:0x9005:0x0287 Legend S220 (Legend Crusader), PCI ID 0x9005:0x0285:0x17aa:0x0286 Legend S230 (Legend Vulcan), PCI ID 0x9005:0x0285:0x17aa:0x0287 Adaptec 3230S (Harrier), PCI ID 0x9005:0x0285:0x9005:0x0288 Adaptec 3240S (Tornado), PCI ID 0x9005:0x0285:0x9005:0x0289 ASR-2020ZCR SCSI PCI-X ZCR (Skyhawk), PCI ID 0x9005:0x0285:0x9005:0x028a ASR-2025ZCR SCSI SO-DIMM PCI-X ZCR (Terminator), PCI ID 0x9005:0x0285:0x9005:0x028b ASR-2230S + ASR-2230SLP PCI-X (Lancer), PCI ID 0x9005:0x0286:0x9005:0x028c ASR-2130S (Lancer), PCI ID 0x9005:0x0286:0x9005:0x028d AAR-2820SA (Intruder), PCI ID 0x9005:0x0286:0x9005:0x029b AAR-2620SA (Intruder), PCI ID 0x9005:0x0286:0x9005:0x029c AAR-2420SA (Intruder), PCI ID 0x9005:0x0286:0x9005:0x029d ICP9024RO (Lancer), PCI ID 0x9005:0x0286:0x9005:0x029e ICP9014RO (Lancer), PCI ID 0x9005:0x0286:0x9005:0x029f ICP9047MA (Lancer), PCI ID 0x9005:0x0286:0x9005:0x02a0 ICP9087MA (Lancer), PCI ID 0x9005:0x0286:0x9005:0x02a1 ICP5445AU (Hurricane44), PCI ID 0x9005:0x0286:0x9005:0x02a3 ICP9085LI (Marauder-X), PCI ID 0x9005:0x0285:0x9005:0x02a4 ICP5085BR (Marauder-E), PCI ID 0x9005:0x0285:0x9005:0x02a5 ICP9067MA (Intruder-6), PCI ID 0x9005:0x0286:0x9005:0x02a6 Themisto Jupiter Platform, PCI ID 0x9005:0x0287:0x9005:0x0800 Themisto Jupiter Platform, PCI ID 0x9005:0x0200:0x9005:0x0200 Callisto Jupiter Platform, PCI ID 0x9005:0x0286:0x9005:0x0800 ASR-2020SA SATA PCI-X ZCR (Skyhawk), PCI ID 0x9005:0x0285:0x9005:0x028e ASR-2025SA SATA SO-DIMM PCI-X ZCR (Terminator), PCI ID 0x9005:0x0285:0x9005:0x028f AAR-2410SA PCI SATA 4ch (Jaguar II), PCI ID 0x9005:0x0285:0x9005:0x0290 CERC SATA RAID 2 PCI SATA 6ch (DellCorsair), PCI ID 0x9005:0x0285:0x9005:0x0291 AAR-2810SA PCI SATA 8ch (Corsair-8), PCI ID 0x9005:0x0285:0x9005:0x0292 AAR-21610SA PCI SATA 16ch (Corsair-16), PCI ID 0x9005:0x0285:0x9005:0x0293 ESD SO-DIMM PCI-X SATA ZCR (Prowler), PCI ID 0x9005:0x0285:0x9005:0x0294 AAR-2610SA PCI SATA 6ch, PCI ID 0x9005:0x0285:0x103C:0x3227 ASR-2240S (SabreExpress), PCI ID 0x9005:0x0285:0x9005:0x0296 ASR-4005, PCI ID 
0x9005:0x0285:0x9005:0x0297 IBM 8i (AvonPark), PCI ID 0x9005:0x0285:0x1014:0x02F2 IBM 8i (AvonPark Lite), PCI ID 0x9005:0x0285:0x1014:0x0312 IBM 8k/8k-l8 (Aurora), PCI ID 0x9005:0x0286:0x1014:0x9580 IBM 8k/8k-l4 (Aurora Lite), PCI ID 0x9005:0x0286:0x1014:0x9540 ASR-4000 (BlackBird), PCI ID 0x9005:0x0285:0x9005:0x0298 ASR-4800SAS (Marauder-X), PCI ID 0x9005:0x0285:0x9005:0x0299 ASR-4805SAS (Marauder-E), PCI ID 0x9005:0x0285:0x9005:0x029a ASR-3800 (Hurricane44), PCI ID 0x9005:0x0286:0x9005:0x02a2 Perc 320/DC, PCI ID 0x9005:0x0285:0x1028:0x0287 Adaptec 5400S (Mustang), PCI ID 0x1011:0x0046:0x9005:0x0365 Adaptec 5400S (Mustang), PCI ID 0x1011:0x0046:0x9005:0x0364 Dell PERC2/QC, PCI ID 0x1011:0x0046:0x9005:0x1364 HP NetRAID-4M, PCI ID 0x1011:0x0046:0x103c:0x10c2 Dell Catchall, PCI ID 0x9005:0x0285:0x1028 Legend Catchall, PCI ID 0x9005:0x0285:0x17aa Adaptec Catch All, PCI ID 0x9005:0x0285 Adaptec Rocket Catch All, PCI ID 0x9005:0x0286 Adaptec NEMER/ARK Catch All, PCI ID 0x9005:0x0288 The following adapters from the mpt2sas driver have been deprecated: SAS2004, PCI ID 0x1000:0x0070 SAS2008, PCI ID 0x1000:0x0072 SAS2108_1, PCI ID 0x1000:0x0074 SAS2108_2, PCI ID 0x1000:0x0076 SAS2108_3, PCI ID 0x1000:0x0077 SAS2116_1, PCI ID 0x1000:0x0064 SAS2116_2, PCI ID 0x1000:0x0065 SSS6200, PCI ID 0x1000:0x007E The following adapters from the megaraid_sas driver have been deprecated: Dell PERC5, PCI ID 0x1028:0x15 SAS1078R, PCI ID 0x1000:0x60 SAS1078DE, PCI ID 0x1000:0x7C SAS1064R, PCI ID 0x1000:0x411 VERDE_ZCR, PCI ID 0x1000:0x413 SAS1078GEN2, PCI ID 0x1000:0x78 SAS0079GEN2, PCI ID 0x1000:0x79 SAS0073SKINNY, PCI ID 0x1000:0x73 SAS0071SKINNY, PCI ID 0x1000:0x71 The following adapters from the qla2xxx driver have been deprecated: ISP24xx, PCI ID 0x1077:0x2422 ISP24xx, PCI ID 0x1077:0x2432 ISP2422, PCI ID 0x1077:0x5422 QLE220, PCI ID 0x1077:0x5432 QLE81xx, PCI ID 0x1077:0x8001 QLE10000, PCI ID 0x1077:0xF000 QLE84xx, PCI ID 0x1077:0x8044 QLE8000, PCI ID 0x1077:0x8432 QLE82xx, PCI ID 0x1077:0x8021 The following adapters from the qla4xxx driver have been deprecated: QLOGIC_ISP8022, PCI ID 0x1077:0x8022 QLOGIC_ISP8324, PCI ID 0x1077:0x8032 QLOGIC_ISP8042, PCI ID 0x1077:0x8042 The following adapters from the be2iscsi driver have been deprecated: BladeEngine 2 (BE2) Devices BladeEngine2 10Gb iSCSI Initiator (generic), PCI ID 0x19a2:0x212 OneConnect OCe10101, OCm10101, OCe10102, OCm10102 BE2 adapter family, PCI ID 0x19a2:0x702 OCe10100 BE2 adapter family, PCI ID 0x19a2:0x703 BladeEngine 3 (BE3) Devices OneConnect TOMCAT iSCSI, PCI ID 0x19a2:0x0712 BladeEngine3 iSCSI, PCI ID 0x19a2:0x0222 The following Ethernet adapters controlled by the be2net driver have been deprecated: BladeEngine 2 (BE2) Devices OneConnect TIGERSHARK NIC, PCI ID 0x19a2:0x0700 BladeEngine2 Network Adapter, PCI ID 0x19a2:0x0211 BladeEngine 3 (BE3) Devices OneConnect TOMCAT NIC, PCI ID 0x19a2:0x0710 BladeEngine3 Network Adapter, PCI ID 0x19a2:0x0221 The following adapters from the lpfc driver have been deprecated: BladeEngine 2 (BE2) Devices OneConnect TIGERSHARK FCoE, PCI ID 0x19a2:0x0704 BladeEngine 3 (BE3) Devices OneConnect TOMCAT FCoE, PCI ID 0x19a2:0x0714 Fibre Channel (FC) Devices FIREFLY, PCI ID 0x10df:0x1ae5 PROTEUS_VF, PCI ID 0x10df:0xe100 BALIUS, PCI ID 0x10df:0xe131 PROTEUS_PF, PCI ID 0x10df:0xe180 RFLY, PCI ID 0x10df:0xf095 PFLY, PCI ID 0x10df:0xf098 LP101, PCI ID 0x10df:0xf0a1 TFLY, PCI ID 0x10df:0xf0a5 BSMB, PCI ID 0x10df:0xf0d1 BMID, PCI ID 0x10df:0xf0d5 ZSMB, PCI ID 0x10df:0xf0e1 ZMID, PCI ID 0x10df:0xf0e5 NEPTUNE, PCI ID 
0x10df:0xf0f5 NEPTUNE_SCSP, PCI ID 0x10df:0xf0f6 NEPTUNE_DCSP, PCI ID 0x10df:0xf0f7 FALCON, PCI ID 0x10df:0xf180 SUPERFLY, PCI ID 0x10df:0xf700 DRAGONFLY, PCI ID 0x10df:0xf800 CENTAUR, PCI ID 0x10df:0xf900 PEGASUS, PCI ID 0x10df:0xf980 THOR, PCI ID 0x10df:0xfa00 VIPER, PCI ID 0x10df:0xfb00 LP10000S, PCI ID 0x10df:0xfc00 LP11000S, PCI ID 0x10df:0xfc10 LPE11000S, PCI ID 0x10df:0xfc20 PROTEUS_S, PCI ID 0x10df:0xfc50 HELIOS, PCI ID 0x10df:0xfd00 HELIOS_SCSP, PCI ID 0x10df:0xfd11 HELIOS_DCSP, PCI ID 0x10df:0xfd12 ZEPHYR, PCI ID 0x10df:0xfe00 HORNET, PCI ID 0x10df:0xfe05 ZEPHYR_SCSP, PCI ID 0x10df:0xfe11 ZEPHYR_DCSP, PCI ID 0x10df:0xfe12 Lancer FCoE CNA Devices OCe15104-FM, PCI ID 0x10df:0xe260 OCe15102-FM, PCI ID 0x10df:0xe260 OCm15108-F-P, PCI ID 0x10df:0xe260
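To confirm whether any of the adapters listed above are present in a system, the PCI IDs can be matched directly against the lspci output. A small sketch, using the SAS2008 (1000:0072) and the Lancer FCoE OCe15102-FM (10df:e260) entries as arbitrary examples from the lists above:
# List every device with its vendor, device, and subsystem IDs
lspci -nn
# Query one specific vendor:device pair; no output means the adapter is absent
lspci -nn -d 1000:0072
lspci -nn -d 10df:e260
# Show subsystem (subvendor:subdevice) details when the deprecation is scoped to one
lspci -nnvv | grep -i -A2 'SAS2008'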
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.6_release_notes/chap-red_hat_enterprise_linux-7.6_release_notes-deprecated_adapters
Chapter 5. Using the client registration service
Chapter 5. Using the client registration service In order for an application or service to utilize Red Hat build of Keycloak, it has to register a client in Red Hat build of Keycloak. An admin can do this through the admin console (or admin REST endpoints), but clients can also register themselves through the Red Hat build of Keycloak client registration service. The Client Registration Service provides built-in support for Red Hat build of Keycloak Client Representations, OpenID Connect Client Meta Data and SAML Entity Descriptors. The Client Registration Service endpoint is /realms/<realm>/clients-registrations/<provider> . The built-in supported providers are: default - Red Hat build of Keycloak Client Representation (JSON) install - Red Hat build of Keycloak Adapter Configuration (JSON) openid-connect - OpenID Connect Client Metadata Description (JSON) saml2-entity-descriptor - SAML Entity Descriptor (XML) The following sections will describe how to use the different providers. 5.1. Authentication To invoke the Client Registration Services you usually need a token. The token can be a bearer token, an initial access token or a registration access token. You can also register a new client without any token, but then you need to configure Client Registration Policies (see below). 5.1.1. Bearer token The bearer token can be issued on behalf of a user or a Service Account. The following permissions are required to invoke the endpoints (see Server Administration Guide for more details): create-client or manage-client - To create clients view-client or manage-client - To view clients manage-client - To update or delete clients If you are using a bearer token to create clients, it is recommended to use a token from a Service Account with only the create-client role (see Server Administration Guide for more details). 5.1.2. Initial Access Token The recommended approach to registering new clients is by using initial access tokens. An initial access token can only be used to create clients and has a configurable expiration as well as a configurable limit on how many clients can be created. An initial access token can be created through the admin console. To create a new initial access token, first select the realm in the admin console, then click on Client in the menu on the left, followed by Initial access token in the tabs displayed in the page. You will now be able to see any existing initial access tokens. If you have access you can delete tokens that are no longer required. You can only retrieve the value of the token when you are creating it. To create a new token click on Create . You can now optionally set how long the token should be valid, as well as how many clients can be created using the token. After you click on Save the token value is displayed. It is important that you copy/paste this token now as you won't be able to retrieve it later. If you forget to copy/paste it, then delete the token and create another one. The token value is used as a standard bearer token when invoking the Client Registration Services, by adding it to the Authorization header in the request. For example: 5.1.3. Registration Access Token When you create a client through the Client Registration Service the response will include a registration access token. The registration access token provides access to retrieve the client configuration later, but also to update or delete the client. The registration access token is included with the request in the same way as a bearer token or initial access token.
By default, registration access token rotation is enabled. This means a registration access token is only valid once. When the token is used, the response will include a new token. Note that registration access token rotation can be disabled by using Client Policies . If a client was created outside of the Client Registration Service it won't have a registration access token associated with it. You can create one through the admin console. This can also be useful if you lose the token for a particular client. To create a new token find the client in the admin console and click on Credentials . Then click on Generate registration access token . 5.2. Red Hat build of Keycloak Representations The default client registration provider can be used to create, retrieve, update and delete a client. It uses Red Hat build of Keycloak Client Representation format which provides support for configuring clients exactly as they can be configured through the admin console, including for example configuring protocol mappers. To create a client create a Client Representation (JSON) then perform an HTTP POST request to /realms/<realm>/clients-registrations/default . It will return a Client Representation that also includes the registration access token. You should save the registration access token somewhere if you want to retrieve the config, update or delete the client later. To retrieve the Client Representation perform an HTTP GET request to /realms/<realm>/clients-registrations/default/<client id> . It will also return a new registration access token. To update the Client Representation perform an HTTP PUT request with the updated Client Representation to: /realms/<realm>/clients-registrations/default/<client id> . It will also return a new registration access token. To delete the Client Representation perform an HTTP DELETE request to: /realms/<realm>/clients-registrations/default/<client id> 5.3. Red Hat build of Keycloak adapter configuration The installation client registration provider can be used to retrieve the adapter configuration for a client. In addition to token authentication you can also authenticate with client credentials using HTTP basic authentication. To do this include the following header in the request: To retrieve the Adapter Configuration then perform an HTTP GET request to /realms/<realm>/clients-registrations/install/<client id> . No authentication is required for public clients. This means that for the JavaScript adapter you can load the client configuration directly from Red Hat build of Keycloak using the above URL. 5.4. OpenID Connect Dynamic Client Registration Red Hat build of Keycloak implements OpenID Connect Dynamic Client Registration , which extends OAuth 2.0 Dynamic Client Registration Protocol and OAuth 2.0 Dynamic Client Registration Management Protocol . The endpoint to use these specifications to register clients in Red Hat build of Keycloak is /realms/<realm>/clients-registrations/openid-connect[/<client id>] . This endpoint can also be found in the OpenID Connect Discovery endpoint for the realm, /realms/<realm>/.well-known/openid-configuration . 5.5. SAML Entity Descriptors The SAML Entity Descriptor endpoint only supports using SAML v2 Entity Descriptors to create clients. It doesn't support retrieving, updating or deleting clients. For those operations the Red Hat build of Keycloak representation endpoints should be used. 
When creating a client, a Red Hat build of Keycloak Client Representation is returned with details about the created client, including a registration access token. To create a client, perform an HTTP POST request with the SAML Entity Descriptor to /realms/<realm>/clients-registrations/saml2-entity-descriptor . 5.6. Example using CURL The following example creates a client with the clientId myclient using CURL. You need to replace eyJhbGciOiJSUz... with a proper initial access token or bearer token. curl -X POST \ -d '{ "clientId": "myclient" }' \ -H "Content-Type:application/json" \ -H "Authorization: bearer eyJhbGciOiJSUz..." \ http://localhost:8080/realms/master/clients-registrations/default 5.7. Example using Java Client Registration API The Client Registration Java API makes it easy to use the Client Registration Service using Java. To use it, include the dependency org.keycloak:keycloak-client-registration-api:>VERSION< from Maven. For full instructions on using the Client Registration API, refer to the JavaDocs. Below is an example of creating a client. You need to replace eyJhbGciOiJSUz... with a proper initial access token or bearer token. String token = "eyJhbGciOiJSUz..."; ClientRepresentation client = new ClientRepresentation(); client.setClientId(CLIENT_ID); ClientRegistration reg = ClientRegistration.create() .url("http://localhost:8080", "myrealm") .build(); reg.auth(Auth.token(token)); client = reg.create(client); String registrationAccessToken = client.getRegistrationAccessToken(); 5.8. Client Registration Policies Note The current plans are for the Client Registration Policies to be removed in favor of the Client Policies described in the Server Administration Guide . Client Policies are more flexible and support more use cases. Red Hat build of Keycloak currently supports two ways in which new clients can be registered through the Client Registration Service. Authenticated requests - A request to register a new client must contain either an Initial Access Token or a Bearer Token, as mentioned above. Anonymous requests - A request to register a new client does not need to contain any token at all. Anonymous client registration requests are a very powerful feature; however, you usually do not want just anyone to be able to register a new client without any limitations. Hence there is the Client Registration Policy SPI , which provides a way to limit who can register new clients and under which conditions. In the Red Hat build of Keycloak admin console, you can click the Client Registration tab and then the Client Registration Policies sub-tab. Here you will see what policies are configured by default for anonymous requests and what policies are configured for authenticated requests. Note The anonymous requests (requests without any token) are allowed only for creating (registering) new clients. So when you register a new client through an anonymous request, the response will contain a Registration Access Token, which must be used for Read, Update or Delete requests for that particular client. However, using this Registration Access Token from an anonymous registration is then subject to the Anonymous Policy too. This means that, for example, a request to update the client also needs to come from a trusted host if you have the Trusted Hosts policy. Similarly, it is not allowed to disable Consent Required when updating the client if the Consent Required policy is present. The following policy implementations are currently available: Trusted Hosts Policy - You can configure a list of trusted hosts and trusted domains.
Requests to the Client Registration Service can be sent only from those hosts or domains. A request sent from an untrusted IP address is rejected. URLs of a newly registered client must also use only those trusted hosts or domains. For example, it is not allowed to set a client Redirect URI that points to an untrusted host. By default, no hosts are whitelisted, so anonymous client registration is effectively disabled. Consent Required Policy - Newly registered clients have the Consent Allowed switch enabled. After successful authentication, the user always sees a consent screen where they need to approve permissions (client scopes). This means that the client does not have access to any personal info or permission of the user unless the user approves it. Protocol Mappers Policy - Allows you to configure a list of whitelisted protocol mapper implementations. A new client cannot be registered or updated if it contains a non-whitelisted protocol mapper. Note that this policy is used for authenticated requests as well, so even for an authenticated request there are some limitations on which protocol mappers can be used. Client Scope Policy - Allows you to whitelist the Client Scopes that can be used with newly registered or updated clients. There are no whitelisted scopes by default; only the client scopes that are defined as Realm Default Client Scopes are whitelisted. Full Scope Policy - Newly registered clients have the Full Scope Allowed switch disabled. This means they do not have any scoped realm roles or client roles of other clients. Max Clients Policy - Rejects registration if the current number of clients in the realm is the same as or greater than the specified limit. The default limit is 200 for anonymous registrations. Client Disabled Policy - Newly registered clients are disabled. This means that an admin needs to manually approve and enable all newly registered clients. This policy is not used by default even for anonymous registration.
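To tie the pieces of this chapter together, the following sketch walks an initial access token through a create, read, and delete cycle against the default provider. It is a minimal example, assuming a realm named myrealm, a server on localhost:8080, an initial access token created as described in Section 5.1.2, and the jq utility for extracting fields from the JSON responses:
INITIAL_TOKEN="eyJhbGciOiJSUz..."   # paste the initial access token here
BASE=http://localhost:8080/realms/myrealm/clients-registrations/default
# Create the client; the response includes the registration access token
REG_TOKEN=$(curl -s -X POST \
  -H "Content-Type: application/json" \
  -H "Authorization: bearer $INITIAL_TOKEN" \
  -d '{ "clientId": "myclient" }' \
  "$BASE" | jq -r .registrationAccessToken)
# Read the configuration back; token rotation returns a fresh registration access token
REG_TOKEN=$(curl -s -H "Authorization: bearer $REG_TOKEN" "$BASE/myclient" | jq -r .registrationAccessToken)
# Delete the client with the most recently returned token
curl -s -X DELETE -H "Authorization: bearer $REG_TOKEN" "$BASE/myclient"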
[ "Authorization: bearer eyJhbGciOiJSUz", "Authorization: basic BASE64(client-id + ':' + client-secret)", "curl -X POST -d '{ \"clientId\": \"myclient\" }' -H \"Content-Type:application/json\" -H \"Authorization: bearer eyJhbGciOiJSUz...\" http://localhost:8080/realms/master/clients-registrations/default", "String token = \"eyJhbGciOiJSUz...\"; ClientRepresentation client = new ClientRepresentation(); client.setClientId(CLIENT_ID); ClientRegistration reg = ClientRegistration.create() .url(\"http://localhost:8080\", \"myrealm\") .build(); reg.auth(Auth.token(token)); client = reg.create(client); String registrationAccessToken = client.getRegistrationAccessToken();" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/22.0/html/securing_applications_and_services_guide/client_registration
Chapter 6. Installing a cluster on GCP in a restricted network
Chapter 6. Installing a cluster on GCP in a restricted network In OpenShift Container Platform 4.17, you can install a cluster on Google Cloud Platform (GCP) in a restricted network by creating an internal mirror of the installation release content on an existing Google Virtual Private Cloud (VPC). Important You can install an OpenShift Container Platform cluster by using mirrored installation release content, but your cluster will require internet access to use the GCP APIs. 6.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You configured a GCP project to host the cluster. You mirrored the images for a disconnected installation to your registry and obtained the imageContentSources data for your version of OpenShift Container Platform. Important Because the installation media is on the mirror host, you can use that computer to complete all installation steps. You have an existing VPC in GCP. While installing a cluster in a restricted network that uses installer-provisioned infrastructure, you cannot use the installer-provisioned VPC. You must use a user-provisioned VPC that satisfies one of the following requirements: Contains the mirror registry Has firewall rules or a peering connection to access the mirror registry hosted elsewhere If you use a firewall, you configured it to allow the sites that your cluster requires access to. While you might need to grant access to more sites, you must grant access to *.googleapis.com and accounts.google.com . 6.2. About installations in restricted networks In OpenShift Container Platform 4.17, you can perform an installation that does not require an active connection to the internet to obtain software components. Restricted network installations can be completed using installer-provisioned infrastructure or user-provisioned infrastructure, depending on the cloud platform to which you are installing the cluster. If you choose to perform a restricted network installation on a cloud platform, you still require access to its cloud APIs. Some cloud functions, like Amazon Web Service's Route 53 DNS and IAM services, require internet access. Depending on your network, you might require less internet access for an installation on bare metal hardware, Nutanix, or on VMware vSphere. To complete a restricted network installation, you must create a registry that mirrors the contents of the OpenShift image registry and contains the installation media. You can create this registry on a mirror host, which can access both the internet and your closed network, or by using other methods that meet your restrictions. 6.2.1. Additional limits Clusters in restricted networks have the following additional limitations and restrictions: The ClusterVersion status includes an Unable to retrieve available updates error. By default, you cannot use the contents of the Developer Catalog because you cannot access the required image stream tags. 6.3. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.17, you require access to the internet to obtain the images that are necessary to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. 
Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. 6.4. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in the your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 6.5. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on Google Cloud Platform (GCP). 
Prerequisites You have the OpenShift Container Platform installation program and the pull secret for your cluster. For a restricted network installation, these files are on your mirror host. You have the imageContentSources values that were generated during mirror registry creation. You have obtained the contents of the certificate for your mirror registry. Configure a GCP account. Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. When specifying the directory: Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory. Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select gcp as the platform to target. If you have not configured the service account key for your GCP account on your computer, you must obtain it from GCP and paste the contents of the file or enter the absolute path to the file. Select the project ID to provision the cluster in. The default value is specified by the service account that you configured. Select the region to deploy the cluster to. Select the base domain to deploy the cluster to. The base domain corresponds to the public DNS zone that you created for your cluster. Enter a descriptive name for your cluster. Edit the install-config.yaml file to give the additional information that is required for an installation in a restricted network. Update the pullSecret value to contain the authentication information for your registry: pullSecret: '{"auths":{"<mirror_host_name>:5000": {"auth": "<credentials>","email": "[email protected]"}}}' For <mirror_host_name> , specify the registry domain name that you specified in the certificate for your mirror registry, and for <credentials> , specify the base64-encoded user name and password for your mirror registry. Add the additionalTrustBundle parameter and value. additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- The value must be the contents of the certificate file that you used for your mirror registry. The certificate file can be an existing, trusted certificate authority, or the self-signed certificate that you generated for the mirror registry. Define the network and subnets for the VPC to install the cluster in under the parent platform.gcp field: network: <existing_vpc> controlPlaneSubnet: <control_plane_subnet> computeSubnet: <compute_subnet> For platform.gcp.network , specify the name for the existing Google VPC. 
For platform.gcp.controlPlaneSubnet and platform.gcp.computeSubnet , specify the existing subnets to deploy the control plane machines and compute machines, respectively. Add the image content resources, which resemble the following YAML excerpt: imageContentSources: - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: registry.redhat.io/ocp/release For these values, use the imageContentSources that you recorded during mirror registry creation. Optional: Set the publishing strategy to Internal : publish: Internal By setting this option, you create an internal Ingress Controller and a private load balancer. Make any other modifications to the install-config.yaml file that you require. For more information about the parameters, see "Installation configuration parameters". Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. Additional resources Installation configuration parameters for GCP 6.5.1. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 6.1. Minimum resource requirements Machine Operating System vCPU [1] Virtual RAM Storage Input/Output Per Second (IOPS) [2] Bootstrap RHCOS 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS, RHEL 8.6 and later [3] 2 8 GB 100 GB 300 One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or Hyper-Threading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later. Note As of OpenShift Container Platform version 4.13, RHCOS is based on RHEL version 9.2, which updates the micro-architecture requirements. The following list contains the minimum instruction set architectures (ISA) that each architecture requires: x86-64 architecture requires x86-64-v2 ISA ARM64 architecture requires ARMv8.0-A ISA IBM Power architecture requires Power 9 ISA s390x architecture requires z14 ISA For more information, see Architectures (RHEL documentation). If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OpenShift Container Platform. Additional resources Optimizing storage 6.5.2. Tested instance types for GCP The following Google Cloud Platform instance types have been tested with OpenShift Container Platform. Example 6.1. Machine series A2 A3 C2 C2D C3 C3D E2 M1 N1 N2 N2D N4 Tau T2D 6.5.3. 
Tested instance types for GCP on 64-bit ARM infrastructures The following Google Cloud Platform (GCP) 64-bit ARM instance types have been tested with OpenShift Container Platform. Example 6.2. Machine series for 64-bit ARM machines Tau T2A 6.5.4. Using custom machine types Using a custom machine type to install a OpenShift Container Platform cluster is supported. Consider the following when using a custom machine type: Similar to predefined instance types, custom machine types must meet the minimum resource requirements for control plane and compute machines. For more information, see "Minimum resource requirements for cluster installation". The name of the custom machine type must adhere to the following syntax: custom-<number_of_cpus>-<amount_of_memory_in_mb> For example, custom-6-20480 . As part of the installation process, you specify the custom machine type in the install-config.yaml file. Sample install-config.yaml file with a custom machine type compute: - architecture: amd64 hyperthreading: Enabled name: worker platform: gcp: type: custom-6-20480 replicas: 2 controlPlane: architecture: amd64 hyperthreading: Enabled name: master platform: gcp: type: custom-6-20480 replicas: 3 6.5.5. Enabling Shielded VMs You can use Shielded VMs when installing your cluster. Shielded VMs have extra security features including secure boot, firmware and integrity monitoring, and rootkit detection. For more information, see Google's documentation on Shielded VMs . Note Shielded VMs are currently not supported on clusters with 64-bit ARM infrastructures. Procedure Use a text editor to edit the install-config.yaml file prior to deploying your cluster and add one of the following stanzas: To use shielded VMs for only control plane machines: controlPlane: platform: gcp: secureBoot: Enabled To use shielded VMs for only compute machines: compute: - platform: gcp: secureBoot: Enabled To use shielded VMs for all machines: platform: gcp: defaultMachinePlatform: secureBoot: Enabled 6.5.6. Enabling Confidential VMs You can use Confidential VMs when installing your cluster. Confidential VMs encrypt data while it is being processed. For more information, see Google's documentation on Confidential Computing . You can enable Confidential VMs and Shielded VMs at the same time, although they are not dependent on each other. Note Confidential VMs are currently not supported on 64-bit ARM architectures. Procedure Use a text editor to edit the install-config.yaml file prior to deploying your cluster and add one of the following stanzas: To use confidential VMs for only control plane machines: controlPlane: platform: gcp: confidentialCompute: Enabled 1 type: n2d-standard-8 2 onHostMaintenance: Terminate 3 1 Enable confidential VMs. 2 Specify a machine type that supports Confidential VMs. Confidential VMs require the N2D or C2D series of machine types. For more information on supported machine types, see Supported operating systems and machine types . 3 Specify the behavior of the VM during a host maintenance event, such as a hardware or software update. For a machine that uses Confidential VM, this value must be set to Terminate , which stops the VM. Confidential VMs do not support live VM migration. To use confidential VMs for only compute machines: compute: - platform: gcp: confidentialCompute: Enabled type: n2d-standard-8 onHostMaintenance: Terminate To use confidential VMs for all machines: platform: gcp: defaultMachinePlatform: confidentialCompute: Enabled type: n2d-standard-8 onHostMaintenance: Terminate 6.5.7. 
Sample customized install-config.yaml file for GCP You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. Important This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it. apiVersion: v1 baseDomain: example.com 1 credentialsMode: Mint 2 controlPlane: 3 4 hyperthreading: Enabled 5 name: master platform: gcp: type: n2-standard-4 zones: - us-central1-a - us-central1-c osDisk: diskType: pd-ssd diskSizeGB: 1024 encryptionKey: 6 kmsKey: name: worker-key keyRing: test-machine-keys location: global projectID: project-id tags: 7 - control-plane-tag1 - control-plane-tag2 osImage: 8 project: example-project-name name: example-image-name replicas: 3 compute: 9 10 - hyperthreading: Enabled 11 name: worker platform: gcp: type: n2-standard-4 zones: - us-central1-a - us-central1-c osDisk: diskType: pd-standard diskSizeGB: 128 encryptionKey: 12 kmsKey: name: worker-key keyRing: test-machine-keys location: global projectID: project-id tags: 13 - compute-tag1 - compute-tag2 osImage: 14 project: example-project-name name: example-image-name replicas: 3 metadata: name: test-cluster 15 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 16 serviceNetwork: - 172.30.0.0/16 platform: gcp: projectID: openshift-production 17 region: us-central1 18 defaultMachinePlatform: tags: 19 - global-tag1 - global-tag2 osImage: 20 project: example-project-name name: example-image-name network: existing_vpc 21 controlPlaneSubnet: control_plane_subnet 22 computeSubnet: compute_subnet 23 pullSecret: '{"auths":{"<local_registry>": {"auth": "<credentials>","email": "[email protected]"}}}' 24 fips: false 25 sshKey: ssh-ed25519 AAAA... 26 additionalTrustBundle: | 27 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- imageContentSources: 28 - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev 1 15 17 18 Required. The installation program prompts you for this value. 2 Optional: Add this parameter to force the Cloud Credential Operator (CCO) to use the specified mode. By default, the CCO uses the root credentials in the kube-system namespace to dynamically try to determine the capabilities of the credentials. For details about CCO modes, see the "About the Cloud Credential Operator" section in the Authentication and authorization guide. 3 9 If you do not provide these parameters and values, the installation program provides the default value. 4 10 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 5 11 Whether to enable or disable simultaneous multithreading, or hyperthreading . By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. 
Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger machine types, such as n1-standard-8 , for your machines if you disable simultaneous multithreading. 6 12 Optional: The custom encryption key section to encrypt both virtual machines and persistent volumes. Your default compute service account must have the permissions granted to use your KMS key and have the correct IAM role assigned. The default service account name follows the service-<project_number>@compute-system.iam.gserviceaccount.com pattern. For more information about granting the correct permissions for your service account, see "Machine management" "Creating compute machine sets" "Creating a compute machine set on GCP". 7 13 19 Optional: A set of network tags to apply to the control plane or compute machine sets. The platform.gcp.defaultMachinePlatform.tags parameter will apply to both control plane and compute machines. If the compute.platform.gcp.tags or controlPlane.platform.gcp.tags parameters are set, they override the platform.gcp.defaultMachinePlatform.tags parameter. 8 14 20 Optional: A custom Red Hat Enterprise Linux CoreOS (RHCOS) that should be used to boot control plane and compute machines. The project and name parameters under platform.gcp.defaultMachinePlatform.osImage apply to both control plane and compute machines. If the project and name parameters under controlPlane.platform.gcp.osImage or compute.platform.gcp.osImage are set, they override the platform.gcp.defaultMachinePlatform.osImage parameters. 16 The cluster network plugin to install. The default value OVNKubernetes is the only supported value. 21 Specify the name of an existing VPC. 22 Specify the name of the existing subnet to deploy the control plane machines to. The subnet must belong to the VPC that you specified. 23 Specify the name of the existing subnet to deploy the compute machines to. The subnet must belong to the VPC that you specified. 24 For <local_registry> , specify the registry domain name, and optionally the port, that your mirror registry uses to serve content. For example, registry.example.com or registry.example.com:5000 . For <credentials> , specify the base64-encoded user name and password for your mirror registry. 25 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. 26 You can optionally provide the sshKey value that you use to access the machines in your cluster. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 27 Provide the contents of the certificate file that you used for your mirror registry. 28 Provide the imageContentSources section from the output of the command to mirror the repository. 6.5.8. 
Create an Ingress Controller with global access on GCP You can create an Ingress Controller that has global access to a Google Cloud Platform (GCP) cluster. Global access is only available to Ingress Controllers using internal load balancers. Prerequisites You created the install-config.yaml and complete any modifications to it. Procedure Create an Ingress Controller with global access on a new GCP cluster. Change to the directory that contains the installation program and create a manifest file: USD ./openshift-install create manifests --dir <installation_directory> 1 1 For <installation_directory> , specify the name of the directory that contains the install-config.yaml file for your cluster. Create a file that is named cluster-ingress-default-ingresscontroller.yaml in the <installation_directory>/manifests/ directory: USD touch <installation_directory>/manifests/cluster-ingress-default-ingresscontroller.yaml 1 1 For <installation_directory> , specify the directory name that contains the manifests/ directory for your cluster. After creating the file, several network configuration files are in the manifests/ directory, as shown: USD ls <installation_directory>/manifests/cluster-ingress-default-ingresscontroller.yaml Example output cluster-ingress-default-ingresscontroller.yaml Open the cluster-ingress-default-ingresscontroller.yaml file in an editor and enter a custom resource (CR) that describes the Operator configuration you want: Sample clientAccess configuration to Global apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: endpointPublishingStrategy: loadBalancer: providerParameters: gcp: clientAccess: Global 1 type: GCP scope: Internal 2 type: LoadBalancerService 1 Set gcp.clientAccess to Global . 2 Global access is only available to Ingress Controllers using internal load balancers. 6.5.9. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. 
For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 6.6. Installing the OpenShift CLI You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.17. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.17 Linux Clients entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . 
To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.17 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.17 macOS Clients entry and save the file. Note For macOS arm64, choose the OpenShift v4.17 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification Verify your installation by using an oc command: USD oc <command> 6.7. Alternatives to storing administrator-level secrets in the kube-system project By default, administrator secrets are stored in the kube-system project. If you configured the credentialsMode parameter in the install-config.yaml file to Manual , you must use one of the following alternatives: To manage long-term cloud credentials manually, follow the procedure in Manually creating long-term credentials . To implement short-term credentials that are managed outside the cluster for individual components, follow the procedures in Configuring a GCP cluster to use short-term credentials . 6.7.1. Manually creating long-term credentials The Cloud Credential Operator (CCO) can be put into manual mode prior to installation in environments where the cloud identity and access management (IAM) APIs are not reachable, or the administrator prefers not to store an administrator-level credential secret in the cluster kube-system namespace. Procedure Add the following granular permissions to the GCP account that the installation program uses: Example 6.3. Required GCP permissions compute.machineTypes.list compute.regions.list compute.zones.list dns.changes.create dns.changes.get dns.managedZones.create dns.managedZones.delete dns.managedZones.get dns.managedZones.list dns.networks.bindPrivateDNSZone dns.resourceRecordSets.create dns.resourceRecordSets.delete dns.resourceRecordSets.list If you did not set the credentialsMode parameter in the install-config.yaml configuration file to Manual , modify the value as shown: Sample configuration file snippet apiVersion: v1 baseDomain: example.com credentialsMode: Manual # ... If you have not previously created installation manifest files, do so by running the following command: USD openshift-install create manifests --dir <installation_directory> where <installation_directory> is the directory in which the installation program creates files. 
Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest custom resources (CRs) from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. This command creates a YAML file for each CredentialsRequest object. Sample CredentialsRequest object apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator ... spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: GCPProviderSpec predefinedRoles: - roles/storage.admin - roles/iam.serviceAccountUser skipServiceCheck: true ... Create YAML files for secrets in the openshift-install manifests directory that you generated previously. The secrets must be stored using the namespace and secret name defined in the spec.secretRef for each CredentialsRequest object. Sample CredentialsRequest object with secrets apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator ... spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 ... secretRef: name: <component_secret> namespace: <component_namespace> ... Sample Secret object apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: service_account.json: <base64_encoded_gcp_service_account_file> Important Before upgrading a cluster that uses manually maintained credentials, you must ensure that the CCO is in an upgradeable state. 6.7.2. Configuring a GCP cluster to use short-term credentials To install a cluster that is configured to use GCP Workload Identity, you must configure the CCO utility and create the required GCP resources for your cluster. 6.7.2.1. Configuring the Cloud Credential Operator utility To create and manage cloud credentials from outside of the cluster when the Cloud Credential Operator (CCO) is operating in manual mode, extract and prepare the CCO utility ( ccoctl ) binary. Note The ccoctl utility is a Linux binary that must run in a Linux environment. Prerequisites You have access to an OpenShift Container Platform account with cluster administrator access. You have installed the OpenShift CLI ( oc ). You have added one of the following authentication options to the GCP account that the installation program uses: The IAM Workload Identity Pool Admin role. The following granular permissions: Example 6.4. 
Required GCP permissions compute.projects.get iam.googleapis.com/workloadIdentityPoolProviders.create iam.googleapis.com/workloadIdentityPoolProviders.get iam.googleapis.com/workloadIdentityPools.create iam.googleapis.com/workloadIdentityPools.delete iam.googleapis.com/workloadIdentityPools.get iam.googleapis.com/workloadIdentityPools.undelete iam.roles.create iam.roles.delete iam.roles.list iam.roles.undelete iam.roles.update iam.serviceAccounts.create iam.serviceAccounts.delete iam.serviceAccounts.getIamPolicy iam.serviceAccounts.list iam.serviceAccounts.setIamPolicy iam.workloadIdentityPoolProviders.get iam.workloadIdentityPools.delete resourcemanager.projects.get resourcemanager.projects.getIamPolicy resourcemanager.projects.setIamPolicy storage.buckets.create storage.buckets.delete storage.buckets.get storage.buckets.getIamPolicy storage.buckets.setIamPolicy storage.objects.create storage.objects.delete storage.objects.list Procedure Set a variable for the OpenShift Container Platform release image by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Obtain the CCO container image from the OpenShift Container Platform release image by running the following command: USD CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret) Note Ensure that the architecture of the USDRELEASE_IMAGE matches the architecture of the environment in which you will use the ccoctl tool. Extract the ccoctl binary from the CCO container image within the OpenShift Container Platform release image by running the following command: USD oc image extract USDCCO_IMAGE \ --file="/usr/bin/ccoctl.<rhel_version>" \ 1 -a ~/.pull-secret 1 For <rhel_version> , specify the value that corresponds to the version of Red Hat Enterprise Linux (RHEL) that the host uses. If no value is specified, ccoctl.rhel8 is used by default. The following values are valid: rhel8 : Specify this value for hosts that use RHEL 8. rhel9 : Specify this value for hosts that use RHEL 9. Change the permissions to make ccoctl executable by running the following command: USD chmod 775 ccoctl.<rhel_version> Verification To verify that ccoctl is ready to use, display the help file. Use a relative file name when you run the command, for example: USD ./ccoctl.rhel9 Example output OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for {ibm-cloud-title} nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use "ccoctl [command] --help" for more information about a command. 6.7.2.2. Creating GCP resources with the Cloud Credential Operator utility You can use the ccoctl gcp create-all command to automate the creation of GCP resources. Note By default, ccoctl creates objects in the directory in which the commands are run. To create the objects in a different directory, use the --output-dir flag. This procedure uses <path_to_ccoctl_output_dir> to refer to this directory. Prerequisites You must have: Extracted and prepared the ccoctl binary. 
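The note above mentions the --output-dir flag. As a minimal sketch only, combining that flag with the create-all invocation that the following procedure documents (and reusing the relative ./ccoctl.rhel9 binary name from the verification step above, which is an assumption about your host), the generated objects can be written under <path_to_ccoctl_output_dir> instead of the current working directory:
# Illustrative only: the same arguments as the documented procedure, plus --output-dir
$ ./ccoctl.rhel9 gcp create-all \
    --name=<name> \
    --region=<gcp_region> \
    --project=<gcp_project_id> \
    --credentials-requests-dir=<path_to_credentials_requests_directory> \
    --output-dir=<path_to_ccoctl_output_dir>
The generated secrets then appear in the <path_to_ccoctl_output_dir>/manifests directory that the verification step in the following procedure lists.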
Procedure Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest objects from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. Note This command might take a few moments to run. Use the ccoctl tool to process all CredentialsRequest objects by running the following command: USD ccoctl gcp create-all \ --name=<name> \ 1 --region=<gcp_region> \ 2 --project=<gcp_project_id> \ 3 --credentials-requests-dir=<path_to_credentials_requests_directory> 4 1 Specify the user-defined name for all created GCP resources used for tracking. 2 Specify the GCP region in which cloud resources will be created. 3 Specify the GCP project ID in which cloud resources will be created. 4 Specify the directory containing the files of CredentialsRequest manifests to create GCP service accounts. Note If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter. Verification To verify that the OpenShift Container Platform secrets are created, list the files in the <path_to_ccoctl_output_dir>/manifests directory: USD ls <path_to_ccoctl_output_dir>/manifests Example output cluster-authentication-02-config.yaml openshift-cloud-controller-manager-gcp-ccm-cloud-credentials-credentials.yaml openshift-cloud-credential-operator-cloud-credential-operator-gcp-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capg-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-gcp-pd-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-gcp-cloud-credentials-credentials.yaml You can verify that the IAM service accounts are created by querying GCP. For more information, refer to GCP documentation on listing IAM service accounts. 6.7.2.3. Incorporating the Cloud Credential Operator utility manifests To implement short-term security credentials managed outside the cluster for individual components, you must move the manifest files that the Cloud Credential Operator utility ( ccoctl ) created to the correct directories for the installation program. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have configured the Cloud Credential Operator utility ( ccoctl ). You have created the cloud provider resources that are required for your cluster with the ccoctl utility. Procedure Add the following granular permissions to the GCP account that the installation program uses: Example 6.5. 
Required GCP permissions compute.machineTypes.list compute.regions.list compute.zones.list dns.changes.create dns.changes.get dns.managedZones.create dns.managedZones.delete dns.managedZones.get dns.managedZones.list dns.networks.bindPrivateDNSZone dns.resourceRecordSets.create dns.resourceRecordSets.delete dns.resourceRecordSets.list If you did not set the credentialsMode parameter in the install-config.yaml configuration file to Manual , modify the value as shown: Sample configuration file snippet apiVersion: v1 baseDomain: example.com credentialsMode: Manual # ... If you have not previously created installation manifest files, do so by running the following command: USD openshift-install create manifests --dir <installation_directory> where <installation_directory> is the directory in which the installation program creates files. Copy the manifests that the ccoctl utility generated to the manifests directory that the installation program created by running the following command: USD cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/ Copy the tls directory that contains the private key to the installation directory: USD cp -a /<path_to_ccoctl_output_dir>/tls . 6.8. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have the OpenShift Container Platform installation program and the pull secret for your cluster. You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. Procedure Remove any existing GCP credentials that do not use the service account key for the GCP account that you configured for your cluster and that are stored in the following locations: The GOOGLE_CREDENTIALS , GOOGLE_CLOUD_KEYFILE_JSON , or GCLOUD_KEYFILE_JSON environment variables The ~/.gcp/osServiceAccount.json file The gcloud cli default credentials Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Optional: You can reduce the number of permissions for the service account that you used to install the cluster. If you assigned the Owner role to your service account, you can remove that role and replace it with the Viewer role. If you included the Service Account Key Admin role, you can remove it. Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! 
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 6.9. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 6.10. Disabling the default OperatorHub catalog sources Operator catalogs that source content provided by Red Hat and community projects are configured for OperatorHub by default during an OpenShift Container Platform installation. In a restricted network environment, you must disable the default catalogs as a cluster administrator. Procedure Disable the sources for the default catalogs by adding disableAllDefaultSources: true to the OperatorHub object: USD oc patch OperatorHub cluster --type json \ -p '[{"op": "add", "path": "/spec/disableAllDefaultSources", "value": true}]' Tip Alternatively, you can use the web console to manage catalog sources. From the Administration Cluster Settings Configuration OperatorHub page, click the Sources tab, where you can create, update, delete, disable, and enable individual sources. 6.11. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.17, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . 
After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service 6.12. Next steps Validate an installation . Customize your cluster . Configure image streams for the Cluster Samples Operator and the must-gather tool. Learn how to use Operator Lifecycle Manager in disconnected environments . If the mirror registry that you used to install your cluster has a trusted CA, add it to the cluster by configuring additional trust stores . If necessary, you can opt out of remote health reporting . If necessary, see Registering your disconnected cluster
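As a follow-up to the "Disabling the default OperatorHub catalog sources" procedure earlier in this chapter, the following is a minimal verification sketch. The jsonpath query and the expected output are illustrative assumptions rather than part of the documented procedure:
# Illustrative check that the disableAllDefaultSources patch took effect
$ oc get OperatorHub cluster -o jsonpath='{.spec.disableAllDefaultSources}'
Example output
true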
[ "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "./openshift-install create install-config --dir <installation_directory> 1", "pullSecret: '{\"auths\":{\"<mirror_host_name>:5000\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}'", "additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE-----", "network: <existing_vpc> controlPlaneSubnet: <control_plane_subnet> computeSubnet: <compute_subnet>", "imageContentSources: - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: registry.redhat.io/ocp/release", "publish: Internal", "compute: - architecture: amd64 hyperthreading: Enabled name: worker platform: gcp: type: custom-6-20480 replicas: 2 controlPlane: architecture: amd64 hyperthreading: Enabled name: master platform: gcp: type: custom-6-20480 replicas: 3", "controlPlane: platform: gcp: secureBoot: Enabled", "compute: - platform: gcp: secureBoot: Enabled", "platform: gcp: defaultMachinePlatform: secureBoot: Enabled", "controlPlane: platform: gcp: confidentialCompute: Enabled 1 type: n2d-standard-8 2 onHostMaintenance: Terminate 3", "compute: - platform: gcp: confidentialCompute: Enabled type: n2d-standard-8 onHostMaintenance: Terminate", "platform: gcp: defaultMachinePlatform: confidentialCompute: Enabled type: n2d-standard-8 onHostMaintenance: Terminate", "apiVersion: v1 baseDomain: example.com 1 credentialsMode: Mint 2 controlPlane: 3 4 hyperthreading: Enabled 5 name: master platform: gcp: type: n2-standard-4 zones: - us-central1-a - us-central1-c osDisk: diskType: pd-ssd diskSizeGB: 1024 encryptionKey: 6 kmsKey: name: worker-key keyRing: test-machine-keys location: global projectID: project-id tags: 7 - control-plane-tag1 - control-plane-tag2 osImage: 8 project: example-project-name name: example-image-name replicas: 3 compute: 9 10 - hyperthreading: Enabled 11 name: worker platform: gcp: type: n2-standard-4 zones: - us-central1-a - us-central1-c osDisk: diskType: pd-standard diskSizeGB: 128 encryptionKey: 12 kmsKey: name: worker-key keyRing: test-machine-keys location: global projectID: project-id tags: 13 - compute-tag1 - compute-tag2 osImage: 14 project: example-project-name name: example-image-name replicas: 3 metadata: name: test-cluster 15 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 16 serviceNetwork: - 172.30.0.0/16 platform: gcp: projectID: openshift-production 17 region: us-central1 18 defaultMachinePlatform: tags: 19 - global-tag1 - global-tag2 osImage: 20 project: example-project-name name: example-image-name network: existing_vpc 21 controlPlaneSubnet: control_plane_subnet 22 computeSubnet: compute_subnet 23 pullSecret: '{\"auths\":{\"<local_registry>\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}' 24 fips: false 25 sshKey: ssh-ed25519 AAAA... 
26 additionalTrustBundle: | 27 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- imageContentSources: 28 - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev", "./openshift-install create manifests --dir <installation_directory> 1", "touch <installation_directory>/manifests/cluster-ingress-default-ingresscontroller.yaml 1", "ls <installation_directory>/manifests/cluster-ingress-default-ingresscontroller.yaml", "cluster-ingress-default-ingresscontroller.yaml", "apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: endpointPublishingStrategy: loadBalancer: providerParameters: gcp: clientAccess: Global 1 type: GCP scope: Internal 2 type: LoadBalancerService", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "apiVersion: v1 baseDomain: example.com credentialsMode: Manual", "openshift-install create manifests --dir <installation_directory>", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: GCPProviderSpec predefinedRoles: - roles/storage.admin - roles/iam.serviceAccountUser skipServiceCheck: true", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 secretRef: name: <component_secret> namespace: <component_namespace>", "apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: service_account.json: <base64_encoded_gcp_service_account_file>", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret)", "oc image extract USDCCO_IMAGE --file=\"/usr/bin/ccoctl.<rhel_version>\" \\ 1 -a ~/.pull-secret", "chmod 775 ccoctl.<rhel_version>", "./ccoctl.rhel9", "OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for {ibm-cloud-title} nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use \"ccoctl [command] --help\" for more information about a 
command.", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3", "ccoctl gcp create-all --name=<name> \\ 1 --region=<gcp_region> \\ 2 --project=<gcp_project_id> \\ 3 --credentials-requests-dir=<path_to_credentials_requests_directory> 4", "ls <path_to_ccoctl_output_dir>/manifests", "cluster-authentication-02-config.yaml openshift-cloud-controller-manager-gcp-ccm-cloud-credentials-credentials.yaml openshift-cloud-credential-operator-cloud-credential-operator-gcp-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capg-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-gcp-pd-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-gcp-cloud-credentials-credentials.yaml", "apiVersion: v1 baseDomain: example.com credentialsMode: Manual", "openshift-install create manifests --dir <installation_directory>", "cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/", "cp -a /<path_to_ccoctl_output_dir>/tls .", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "oc patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/installing_on_gcp/installing-restricted-networks-gcp-installer-provisioned