Chapter 8. Reinstalling an Existing Host as a Self-Hosted Engine Node
Chapter 8. Reinstalling an Existing Host as a Self-Hosted Engine Node You can convert an existing, standard host in a self-hosted engine environment to a self-hosted engine node capable of hosting the Manager virtual machine. Procedure Click Compute → Hosts and select the host. Click Management → Maintenance and click OK. Click Installation → Reinstall. Click the Hosted Engine tab and select DEPLOY from the drop-down list. Click OK. The host is reinstalled with the self-hosted engine configuration and is flagged with a crown icon in the Administration Portal. After reinstalling the hosts as self-hosted engine nodes, you can check the status of the new environment by running the following command on one of the nodes: If the new environment is running without issue, you can decommission the original Manager machine.
[ "hosted-engine --vm-status" ]
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/migrating_from_a_standalone_manager_to_a_self-hosted_engine/reinstalling_an_existing_host_as_a_self-hosted_engine_node_migrating_to_she
Configuring InfiniBand and RDMA networks
Configuring InfiniBand and RDMA networks Red Hat Enterprise Linux 8 Configuring and managing high-speed network protocols and RDMA hardware Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/configuring_infiniband_and_rdma_networks/index
4.7. Using OpenSSL
4.7. Using OpenSSL OpenSSL is a library that provides cryptographic protocols to applications. The openssl command line utility enables using the cryptographic functions from the shell. It includes an interactive mode. The openssl command line utility has a number of pseudo-commands to provide information on the commands that the version of openssl installed on the system supports. The pseudo-commands list-standard-commands , list-message-digest-commands , and list-cipher-commands output a list of all standard commands, message digest commands, or cipher commands, respectively, that are available in the present openssl utility. The pseudo-commands list-cipher-algorithms and list-message-digest-algorithms list all cipher and message digest names. The pseudo-command list-public-key-algorithms lists all supported public key algorithms. For example, to list the supported public key algorithms, issue the following command: The pseudo-command no-command-name tests whether a command of the specified name is available. It is intended for use in shell scripts. See man openssl (1) for more information. 4.7.1. Creating and Managing Encryption Keys With OpenSSL , public keys are derived from the corresponding private key. Therefore, the first step, once you have decided on the algorithm, is to generate the private key. In these examples the private key is referred to as privkey.pem . For example, to create an RSA private key using default parameters, issue the following command: The RSA algorithm supports the following options: rsa_keygen_bits:numbits - The number of bits in the generated key. If not specified, 1024 is used. rsa_keygen_pubexp:value - The RSA public exponent value. This can be a large decimal value, or a hexadecimal value if preceded by 0x . The default value is 65537 . For example, to create a 2048-bit RSA private key using 3 as the public exponent, issue the following command: To encrypt the private key as it is output using 128-bit AES and the passphrase "hello" , issue the following command: See man genpkey (1) for more information on generating private keys. 4.7.2. Generating Certificates To generate a certificate using OpenSSL , it is necessary to have a private key available. In these examples the private key is referred to as privkey.pem . If you have not yet generated a private key, see Section 4.7.1, "Creating and Managing Encryption Keys" . To have a certificate signed by a certificate authority ( CA ), it is necessary to generate a certificate signing request and then send it to a CA for signing. See Section 4.7.2.1, "Creating a Certificate Signing Request" for more information. The alternative is to create a self-signed certificate. See Section 4.7.2.2, "Creating a Self-signed Certificate" for more information. 4.7.2.1. Creating a Certificate Signing Request To create a certificate signing request for submission to a CA, issue a command in the following format: This creates an X.509 certificate signing request called cert.csr encoded in the default privacy-enhanced electronic mail ( PEM ) format. The name PEM is derived from " Privacy Enhancement for Internet Electronic Mail " described in RFC 1424 . To generate the request file in the alternative DER format, use the -outform DER command option. After issuing the above command, you will be prompted for information about you and the organization in order to create a distinguished name ( DN ) for the certificate. 
You will need the following information: The two-letter country code for your country The full name of your state or province City or Town The name of your organization The name of the unit within your organization Your name or the host name of the system Your email address The req (1) man page describes the PKCS #10 certificate request and generating utility. Default settings used in the certificate creating process are contained within the /etc/pki/tls/openssl.cnf file. See man openssl.cnf(5) for more information. 4.7.2.2. Creating a Self-signed Certificate To generate a self-signed certificate, valid for 366 days, issue a command in the following format: 4.7.2.3. Creating a Certificate Using a Makefile The /etc/pki/tls/certs/ directory contains a Makefile , which can be used to create certificates using the make command. To view the usage instructions, issue a command as follows: Alternatively, change to the directory and issue the make command as follows: See the make (1) man page for more information. 4.7.3. Verifying Certificates A certificate signed by a CA is referred to as a trusted certificate. A self-signed certificate is therefore an untrusted certificate. The verify utility uses the same SSL and S/MIME functions to verify a certificate as are used by OpenSSL in normal operation. If an error is found, it is reported, and then an attempt is made to continue testing in order to report any other errors. To verify multiple individual X.509 certificates in PEM format, issue a command in the following format: To verify a certificate chain, the leaf certificate must be in cert.pem and the intermediate certificates which you do not trust must be directly concatenated in untrusted.pem . The trusted root CA certificate must either be among the default CAs listed in /etc/pki/tls/certs/ca-bundle.crt or in a cacert.pem file. Then, to verify the chain, issue a command in the following format: See man verify (1) for more information. Important Verification of signatures using the MD5 hash algorithm is disabled in Red Hat Enterprise Linux 7 due to insufficient strength of this algorithm. Always use strong algorithms such as SHA256. 4.7.4. Encrypting and Decrypting a File For encrypting (and decrypting) files with OpenSSL , either the pkeyutl or enc built-in commands can be used. With pkeyutl , RSA keys are used to perform the encrypting and decrypting, whereas with enc , symmetric algorithms are used. Using RSA Keys To encrypt a file called plaintext , issue a command as follows: The default format for keys and certificates is PEM. If required, use the -keyform DER option to specify the DER key format. To specify a cryptographic engine, use the -engine option as follows: Where id is the ID of the cryptographic engine. To check the availability of an engine, issue the following command: To sign a data file called plaintext , issue a command as follows: To verify a signed data file and to extract the data, issue a command as follows: To verify the signature, for example using a DSA key, issue a command as follows: The pkeyutl (1) manual page describes the public key algorithm utility. Using Symmetric Algorithms To list available symmetric encryption algorithms, execute the enc command with an unsupported option, such as -l : To specify an algorithm, use its name as an option. 
For example, to use the aes-128-cbc algorithm, use the following syntax: openssl enc -aes-128-cbc To encrypt a file called plaintext using the aes-128-cbc algorithm, enter the following command: To decrypt the file obtained in the example, use the -d option as in the following example: Important The enc command does not properly support AEAD ciphers, and the ecb mode is not considered secure. For best results, do not use other modes than cbc , cfb , ofb , or ctr . 4.7.5. Generating Message Digests The dgst command produces the message digest of a supplied file or files in hexadecimal form. The command can also be used for digital signing and verification. The message digest command takes the following form: openssl dgst -algorithm -out filename -sign private-key Where algorithm is one of md5|md4|md2|sha1|sha|mdc2|ripemd160|dss1 , given as an option such as -sha1 . At time of writing, the SHA1 algorithm is preferred. If you need to sign or verify using DSA, then the dss1 option must be used together with a file containing random data specified by the -rand option. To produce a message digest in the default Hex format using the sha1 algorithm, issue the following command: To digitally sign the digest, using a private key privkey.pem , issue the following command: See man dgst (1) for more information. 4.7.6. Generating Password Hashes The passwd command computes the hash of a password. To compute the hash of a password on the command line, issue a command as follows: The -crypt algorithm is used by default. To compute the hash of a password from standard input, using the MD5-based BSD algorithm 1 , issue a command as follows: The -apr1 option specifies the Apache variant of the BSD algorithm. Note Use the openssl passwd -1 password command only with FIPS mode disabled. Otherwise, the command does not work. To compute the hash of a password stored in a file, and using a salt xx , issue a command as follows: The password is sent to standard output and there is no -out option to specify an output file. The -table option generates a table of password hashes with their corresponding clear text passwords. See man sslpasswd (1) for more information and examples. 4.7.7. Generating Random Data To generate a file containing random data, using a seed file, issue the following command: Multiple files for seeding the random data process can be specified using the colon, : , as a list separator. See man rand (1) for more information. 4.7.8. Benchmarking Your System To test the computational speed of a system for a given algorithm, issue a command in the following format: where algorithm is one of the supported algorithms you intend to use. To list the available algorithms, type openssl speed and then press tab. 4.7.9. Configuring OpenSSL OpenSSL has a configuration file /etc/pki/tls/openssl.cnf , referred to as the master configuration file, which is read by the OpenSSL library. It is also possible to have individual configuration files for each application. The configuration file contains a number of sections with section names as follows: [ section_name ] . Note that the first part of the file, up until the first [ section_name ] , is referred to as the default section. When OpenSSL is searching for names in the configuration file, the named sections are searched first. All OpenSSL commands use the master OpenSSL configuration file unless an option is used in the command to specify an alternative configuration file. The configuration file is explained in detail in the config(5) man page. Two RFCs explain the contents of a certificate file. 
They are: Internet X.509 Public Key Infrastructure Certificate and Certificate Revocation List (CRL) Profile , and Updates to the Internet X.509 Public Key Infrastructure Certificate and Certificate Revocation List (CRL) Profile .
[ "~]USD openssl list-public-key-algorithms", "~]USD openssl genpkey -algorithm RSA -out privkey.pem", "~]USD openssl genpkey -algorithm RSA -out privkey.pem -pkeyopt rsa_keygen_bits:2048 \\ -pkeyopt rsa_keygen_pubexp:3", "~]USD openssl genpkey -algorithm RSA -out privkey.pem -aes-128-cbc -pass pass:hello", "~]USD openssl req -new -key privkey.pem -out cert.csr", "~]USD openssl req -new -x509 -key privkey.pem -out selfcert.pem -days 366", "~]USD make -f /etc/pki/tls/certs/Makefile", "~]USD cd /etc/pki/tls/certs/ ~]USD make", "~]USD openssl verify cert1.pem cert2.pem", "~]USD openssl verify -untrusted untrusted.pem -CAfile cacert.pem cert.pem", "~]USD openssl pkeyutl -in plaintext -out cyphertext -inkey privkey.pem", "~]USD openssl pkeyutl -in plaintext -out cyphertext -inkey privkey.pem -engine id", "~]USD openssl engine -t", "~]USD openssl pkeyutl -sign -in plaintext -out sigtext -inkey privkey.pem", "~]USD openssl pkeyutl -verifyrecover -in sig -inkey key.pem", "~]USD openssl pkeyutl -verify -in file -sigfile sig -inkey key.pem", "~]USD openssl enc -l", "~]USD openssl enc -aes-128-cbc -in plaintext -out plaintext.aes-128-cbc", "~]USD openssl enc -aes-128-cbc -d -in plaintext.aes-128-cbc -out plaintext", "~]USD openssl dgst sha1 -out digest-file", "~]USD openssl dgst sha1 -out digest-file -sign privkey.pem", "~]USD openssl passwd password", "~]USD openssl passwd - 1 password", "~]USD openssl passwd -salt xx -in password-file", "~]USD openssl rand -out rand-file -rand seed-file", "~]USD openssl speed algorithm" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/security_guide/sec-using_openssl
Chapter 7. Senders and receivers
Chapter 7. Senders and receivers The client uses sender and receiver links to represent channels for delivering messages. Senders and receivers are unidirectional, with a source end for the message origin, and a target end for the message destination. Sources and targets often point to queues or topics on a message broker. Sources are also used to represent subscriptions. 7.1. Creating queues and topics on demand Some message servers support on-demand creation of queues and topics. When a sender or receiver is attached, the server uses the sender target address or the receiver source address to create a queue or topic with a name matching the address. The message server typically defaults to creating either a queue (for one-to-one message delivery) or a topic (for one-to-many message delivery). The client can indicate which it prefers by setting the queue or topic capability on the source or target. To select queue or topic semantics, follow these steps: Configure your message server for automatic creation of queues and topics. This is often the default configuration. Set either the queue or topic capability on your sender target or receiver source, as in the examples below. Example: Sending to a queue created on demand SenderOptions senderOptions = new SenderOptions(); senderOptions.targetOptions().capabilities("queue"); Sender sender = connection.openSender(address, senderOptions); Example: Receiving from a topic created on demand ReceiverOptions receiverOptions = new ReceiverOptions(); receiverOptions.sourceOptions().capabilities("topic"); Receiver receiver = connection.openReceiver(address, receiverOptions); 7.2. Creating durable subscriptions A durable subscription is a piece of state on the remote server representing a message receiver. Ordinarily, message receivers are discarded when a client closes. However, because durable subscriptions are persistent, clients can detach from them and then re-attach later. Any messages received while detached are available when the client re-attaches. Durable subscriptions are uniquely identified by combining the client container ID and receiver name to form a subscription ID. These must have stable values so that the subscription can be recovered. To create a durable subscription, follow these steps: Set the connection container ID to a stable value, such as client-1 : ClientOptions clientOptions = new ClientOptions(); clientOptions.id("client-1"); Client client = Client.create(clientOptions); Configure the receiver to act like a topic subscription. ReceiverOptions receiverOptions = new ReceiverOptions(); receiverOptions.sourceOptions().capabilities("topic"); receiverOptions.sourceOptions().durabilityMode(DurabilityMode.UNSETTLED_STATE); receiverOptions.sourceOptions().expiryPolicy(ExpiryPolicy.NEVER); Receiver receiver = connection.openDurableReceiver(address, "sub-1", receiverOptions); To detach from the subscription while leaving it in place on the server, detach the receiver. To delete the subscription, close the receiver: receiver.closeAsync(); If you do not close the receiver, the durable subscription remains on the remote server. 7.3. Creating shared subscriptions A shared subscription is a piece of state on the remote server representing one or more message receivers. Because it is shared, multiple clients can consume from the same stream of messages. The client configures a shared subscription by setting the shared capability on the receiver source. Shared subscriptions are uniquely identified by combining the client container ID and receiver name to form a subscription ID. 
These must have stable values so that multiple client processes can locate the same subscription. If the global capability is set in addition to shared , the receiver name alone is used to identify the subscription. To create a shared subscription, follow these steps: Set the connection container ID to a stable value, such as client-1 : ClientOptions clientOptions = new ClientOptions(); clientOptions.id("client-1"); Client client = Client.create(clientOptions); Configure the receiver for sharing by setting the shared capability: ReceiverOptions receiverOptions = new ReceiverOptions(); receiverOptions.sourceOptions().capabilities("topic", "shared"); receiverOptions.sourceOptions().durabilityMode(DurabilityMode.UNSETTLED_STATE); receiverOptions.sourceOptions().expiryPolicy(ExpiryPolicy.NEVER); Receiver receiver = connection.openDurableReceiver(address, "sub-1", receiverOptions);
[ "SenderOptions senderOptions = new SenderOptions(); senderOptions.targetOptions().capabilities(\"queue\"); Sender sender = connection.openSender(address, senderOptions);", "ReceiverOptions receiverOptions = new ReceiverOptions(); receiverOptions.sourceOptions().capabilities(\"topic\"); Receiver receiver = connection.openReceiver(address, receiverOptions);", "ClientOptions clientOptions = new ClientOptions(); clientOptions.id(\"client-1\"); Client client = Client.Create(clientOptions);", "ReceiverOptions receiverOptions = new ReceiverOptions(); receiverOptions.sourceOptions().capabilities(\"topic\"); receiverOptions.sourceOptions().durabilityMode(DurabilityMode.UNSETTLED_STATE); receiverOptions.sourceOptions().expiryPolicy(ExpiryPolicy.NEVER); Receiver receiver = connection.openDurableReceiver(address, \"sub-1\", receiverOptions);", "receiver.closeAsync();", "ClientOptions clientOptions = new ClientOptions(); clientOptions.id(\"client-1\"); Client client = Client.Create(clientOptions);", "ReceiverOptions receiverOptions = new ReceiverOptions(); receiverOptions.sourceOptions().capabilities(\"topic\", \"shared\"); receiverOptions.sourceOptions().durabilityMode(DurabilityMode.UNSETTLED_STATE); receiverOptions.sourceOptions().expiryPolicy(ExpiryPolicy.NEVER); Receiver receiver = connection.openDurableReceiver(address, \"sub-1\", receiverOptions);" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_qpid_protonj2/1.0/html/using_qpid_protonj2/senders_and_receivers
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
null
https://docs.redhat.com/en/documentation/amq_spring_boot_starter/3.0/html/using_the_amq_spring_boot_starter/making-open-source-more-inclusive
Chapter 9. Restoring Satellite Server or Capsule Server from a Backup
Chapter 9. Restoring Satellite Server or Capsule Server from a Backup You can restore Satellite Server or Capsule Server from the backup data that you create as part of Chapter 8, Backing Up Satellite Server and Capsule Server . This process outlines how to restore the backup on the same server that generated the backup, and all data covered by the backup is deleted on the target system. If the original system is unavailable, provision a system with the same configuration settings and host name. 9.1. Restoring from a Full Backup Use this procedure to restore Red Hat Satellite or Capsule Server from a full backup. When the restore process completes, all processes are online, and all databases and system configuration revert to the state at the time of the backup. Prerequisites Ensure that you are restoring to the correct instance. The Red Hat Satellite instance must have the same host name and configuration as the original system, and must be the same minor version (X.Y). Ensure that you have an existing target directory. The target directory is read from the configuration files contained within the archive. Ensure that you have enough space to store this data on the base system of Satellite Server or Capsule Server as well as enough space after the restoration to contain all the data in the /etc/ and /var/ directories contained within the backup. To check the space used by a directory, enter the following command: To check for free space, enter the following command: Add the --total option to get a total of the results from more than one directory. Ensure that all SELinux contexts are correct. Enter the following command to restore the correct SELinux contexts: Procedure Choose the appropriate method to install Satellite or Capsule: To install Satellite Server from a connected network, follow the procedures in Installing Satellite Server in a Connected Network Environment . To install Satellite Server from a disconnected network, follow the procedures in Installing Satellite Server in a Disconnected Network Environment . To install a Capsule Server, follow the procedures in Installing Capsule Server . Copy the backup data to Satellite Server's local file system. Use /var/ or /var/tmp/ . Run the restoration script. Where backup_directory is the time-stamped directory or subdirectory containing the backed-up data. The restore process can take a long time to complete, because of the amount of data to copy. Additional Resources For troubleshooting, you can check /var/log/foreman/production.log and /var/log/messages . 9.2. Restoring from Incremental Backups Use this procedure to restore Satellite or Capsule Server from incremental backups. If you have multiple branches of incremental backups, select your full backup and each incremental backup for the branch you want to restore, in chronological order. When the restore process completes, all processes are online, and all databases and system configuration revert to the state at the time of the backup. Procedure Restore the last full backup using the instructions in Section 9.1, "Restoring from a Full Backup" . Remove the full backup data from Satellite Server's local file system, for example, /var/ or /var/tmp/ . Copy the incremental backup data to Satellite Server's local file system, for example, /var/ or /var/tmp/ . Restore the incremental backups in the same sequence in which they were made: Additional Resources For troubleshooting, you can check /var/log/foreman/production.log and /var/log/messages . 9.3. 
Backup and Restore Capsule Server Using a Virtual Machine Snapshot If your Capsule Server is a virtual machine, you can restore it from a snapshot. Creating weekly snapshots to restore from is recommended. In the event of failure, you can install or configure a new Capsule Server, and then synchronize the database content from Satellite Server. If required, deploy a new Capsule Server, ensuring the host name is the same as before, and then install the Capsule certificates. You may still have them on Satellite Server (the package name ends in -certs.tar); alternatively, create new ones. Follow the procedures in Installing Capsule Server until you can confirm, in the Satellite web UI, that Capsule Server is connected to Satellite Server. Then use the procedure Section 9.3.1, "Synchronizing an External Capsule" to synchronize from Satellite. 9.3.1. Synchronizing an External Capsule Synchronize an external Capsule with Satellite. Procedure To synchronize an external Capsule, select the relevant organization and location in the Satellite web UI, or choose Any Organization and Any Location . In the Satellite web UI, navigate to Infrastructure > Capsules and click the name of the Capsule to synchronize. On the Overview tab, select Synchronize .
[ "du -sh /var/backup_directory", "df -h /var/backup_directory", "restorecon -Rnv /", "satellite-maintain restore /var/backup_directory", "satellite-maintain restore /var/backup_directory /FIRST_INCREMENTAL satellite-maintain restore /var/backup_directory /SECOND_INCREMENTAL" ]
https://docs.redhat.com/en/documentation/red_hat_satellite/6.11/html/administering_red_hat_satellite/restoring_server_or_smart_proxy_from_a_backup_admin
23.17. Devices
23.17. Devices This set of XML elements are all used to describe devices provided to the guest virtual machine domain. All of the devices below are indicated as children of the main <devices> element. The following virtual devices are supported: virtio-scsi-pci - PCI bus storage device virtio-blk-pci - PCI bus storage device virtio-net-pci - PCI bus network device also known as virtio-net virtio-serial-pci - PCI bus input device virtio-balloon-pci - PCI bus memory balloon device virtio-rng-pci - PCI bus virtual random number generator device Important If a virtio device is created where the number of vectors is set to a value higher than 32, the device behaves as if it was set to a zero value on Red Hat Enterprise Linux 6, but not on Enterprise Linux 7. The resulting vector setting mismatch causes a migration error if the number of vectors on any virtio device on either platform is set to 33 or higher. It is, therefore, not recommended to set the vector value to be greater than 32. All virtio devices with the exception of virtio-balloon-pci and virtio-rng-pci will accept a vector argument. ... <devices> <emulator>/usr/libexec/qemu-kvm</emulator> </devices> ... Figure 23.26. Devices - child elements The contents of the <emulator> element specify the fully qualified path to the device model emulator binary. The capabilities XML specifies the recommended default emulator to use for each particular domain type or architecture combination. 23.17.1. Hard Drives, Floppy Disks, and CD-ROMs This section of the domain XML specifies any device that looks like a disk, including any floppy disk, hard disk, CD-ROM, or paravirtualized driver that is specified in the <disk> element. <disk type='network'> <driver name="qemu" type="raw" io="threads" ioeventfd="on" event_idx="off"/> <source protocol="sheepdog" name="image_name"> <host name="hostname" port="7000"/> </source> <target dev="hdb" bus="ide"/> <boot order='1'/> <transient/> <address type='drive' controller='0' bus='1' unit='0'/> </disk> Figure 23.27. Devices - Hard drives, floppy disks, CD-ROMs Example <disk type='network'> <driver name="qemu" type="raw"/> <source protocol="rbd" name="image_name2"> <host name="hostname" port="7000"/> </source> <target dev="hdd" bus="ide"/> <auth username='myuser'> <secret type='ceph' usage='mypassid'/> </auth> </disk> Figure 23.28. Devices - Hard drives, floppy disks, CD-ROMs Example 2 <disk type='block' device='cdrom'> <driver name='qemu' type='raw'/> <target dev='hdc' bus='ide' tray='open'/> <readonly/> </disk> <disk type='network' device='cdrom'> <driver name='qemu' type='raw'/> <source protocol="http" name="url_path"> <host name="hostname" port="80"/> </source> <target dev='hdc' bus='ide' tray='open'/> <readonly/> </disk> Figure 23.29. Devices - Hard drives, floppy disks, CD-ROMs Example 3 <disk type='network' device='cdrom'> <driver name='qemu' type='raw'/> <source protocol="https" name="url_path"> <host name="hostname" port="443"/> </source> <target dev='hdc' bus='ide' tray='open'/> <readonly/> </disk> <disk type='network' device='cdrom'> <driver name='qemu' type='raw'/> <source protocol="ftp" name="url_path"> <host name="hostname" port="21"/> </source> <target dev='hdc' bus='ide' tray='open'/> <readonly/> </disk> Figure 23.30. 
Devices - Hard drives, floppy disks, CD-ROMs Example 4 <disk type='network' device='cdrom'> <driver name='qemu' type='raw'/> <source protocol="ftps" name="url_path"> <host name="hostname" port="990"/> </source> <target dev='hdc' bus='ide' tray='open'/> <readonly/> </disk> <disk type='network' device='cdrom'> <driver name='qemu' type='raw'/> <source protocol="tftp" name="url_path"> <host name="hostname" port="69"/> </source> <target dev='hdc' bus='ide' tray='open'/> <readonly/> </disk> <disk type='block' device='lun'> <driver name='qemu' type='raw'/> <source dev='/dev/sda'/> <target dev='sda' bus='scsi'/> <address type='drive' controller='0' bus='0' target='3' unit='0'/> </disk> Figure 23.31. Devices - Hard drives, floppy disks, CD-ROMs Example 5 <disk type='block' device='disk'> <driver name='qemu' type='raw'/> <source dev='/dev/sda'/> <geometry cyls='16383' heads='16' secs='63' trans='lba'/> <blockio logical_block_size='512' physical_block_size='4096'/> <target dev='hda' bus='ide'/> </disk> <disk type='volume' device='disk'> <driver name='qemu' type='raw'/> <source pool='blk-pool0' volume='blk-pool0-vol0'/> <target dev='hda' bus='ide'/> </disk> <disk type='network' device='disk'> <driver name='qemu' type='raw'/> <source protocol='iscsi' name='iqn.2013-07.com.example:iscsi-nopool/2'> <host name='example.com' port='3260'/> </source> <auth username='myuser'> <secret type='chap' usage='libvirtiscsi'/> </auth> <target dev='vda' bus='virtio'/> </disk> Figure 23.32. Devices - Hard drives, floppy disks, CD-ROMs Example 6 <disk type='network' device='lun'> <driver name='qemu' type='raw'/> <source protocol='iscsi' name='iqn.2013-07.com.example:iscsi-nopool/1'> iqn.2013-07.com.example:iscsi-pool <host name='example.com' port='3260'/> </source> <auth username='myuser'> <secret type='chap' usage='libvirtiscsi'/> </auth> <target dev='sda' bus='scsi'/> </disk> <disk type='volume' device='disk'> <driver name='qemu' type='raw'/> <source pool='iscsi-pool' volume='unit:0:0:1' mode='host'/> <auth username='myuser'> <secret type='chap' usage='libvirtiscsi'/> </auth> <target dev='vda' bus='virtio'/> </disk> Figure 23.33. Devices - Hard drives, floppy disks, CD-ROMs Example 7 <disk type='volume' device='disk'> <driver name='qemu' type='raw'/> <source pool='iscsi-pool' volume='unit:0:0:2' mode='direct'/> <auth username='myuser'> <secret type='chap' usage='libvirtiscsi'/> </auth> <target dev='vda' bus='virtio'/> </disk> <disk type='file' device='disk'> <driver name='qemu' type='raw' cache='none'/> <source file='/tmp/test.img' startupPolicy='optional'/> <target dev='sdb' bus='scsi'/> <readonly/> </disk> <disk type='file' device='disk'> <driver name='qemu' type='raw' discard='unmap'/> <source file='/var/lib/libvirt/images/discard1.img'/> <target dev='vdb' bus='virtio'/> <alias name='virtio-disk1'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x09' function='0x0'/> </disk> </devices> ... Figure 23.34. Devices - Hard drives, floppy disks, CD-ROMs Example 8 23.17.1.1. Disk element The <disk> element is the main container for describing disks. The attribute type can be used with the <disk> element. The following types are allowed: file block dir network For more information, see the libvirt upstream pages . 23.17.1.2. Source element Represents the disk source. The disk source depends on the disk type attribute, as follows: <file> - The file attribute specifies the fully-qualified path to the file in which the disk is located. 
<block> - The dev attribute specifies the fully-qualified path to the host device that serves as the disk. <dir> - The dir attribute specifies the fully-qualified path to the directory used as the disk. <network> - The protocol attribute specifies the protocol used to access the requested image. Possible values are: nbd , isci , rbd , sheepdog , and gluster . If the protocol attribute is rbd , sheepdog , or gluster , an additional attribute, name is mandatory. This attribute specifies which volume and image will be used. If the protocol attribute is nbd , the name attribute is optional. If the protocol attribute is isci , the name attribute may include a logical unit number, separated from the target's name with a slash. For example: iqn.2013-07.com.example:iscsi-pool/1. If not specified, the default LUN is zero. <volume> - The underlying disk source is represented by the pool and volume attributes. <pool> - The name of the storage pool (managed by libvirt ) where the disk source resides. <volume> - The name of the storage volume (managed by libvirt ) used as the disk source. The value for the volume attribute is the output from the Name column of a virsh vol-list [pool-name] When the disk type is network , the source may have zero or more host sub-elements used to specify the host physical machines to connect, including: type='dir' and type='network' . For a file disk type which represents a CD-ROM or floppy (the device attribute), it is possible to define the policy for what to do with the disk if the source file is not accessible. This is done by setting the startupPolicy attribute with one of the following values: mandatory causes a failure if missing for any reason. This is the default setting. requisite causes a failure if missing on boot up, drops if missing on migrate, restore, or revert. optional drops if missing at any start attempt. 23.17.1.3. Mirror element This element is present if the hypervisor has started a BlockCopy operation, where the <mirror> location in the attribute file will eventually have the same contents as the source, and with the file format in attribute format (which might differ from the format of the source). If an attribute ready is present, then it is known the disk is ready to pivot; otherwise, the disk is probably still copying. For now, this element only valid in output; it is ignored on input. 23.17.1.4. Target element The <target> element controls the bus or device under which the disk is exposed to the guest virtual machine operating system. The dev attribute indicates the logical device name. The actual device name specified is not guaranteed to map to the device name in the guest virtual machine operating system. The optional bus attribute specifies the type of disk device to emulate; possible values are driver-specific, with typical values being ide , scsi , virtio , kvm , usb or sata . If omitted, the bus type is inferred from the style of the device name. For example, a device named 'sda' will typically be exported using a SCSI bus. The optional attribute tray indicates the tray status of the removable disks (for example, CD-ROM or Floppy disk), where the value can be either open or closed . The default setting is closed . 23.17.1.5. iotune element The optional <iotune> element provides the ability to provide additional per-device I/O tuning, with values that can vary for each device (contrast this to the blkiotune element, which applies globally to the domain). 
This element has the following optional sub-elements (note that any sub-element not specified at all or specified with a value of 0 implies no limit): <total_bytes_sec> - The total throughput limit in bytes per second. This element cannot be used with <read_bytes_sec> or <write_bytes_sec> . <read_bytes_sec> - The read throughput limit in bytes per second. <write_bytes_sec> - The write throughput limit in bytes per second. <total_iops_sec> - The total I/O operations per second. This element cannot be used with <read_iops_sec> or <write_iops_sec> . <read_iops_sec> - The read I/O operations per second. <write_iops_sec> - The write I/O operations per second. 23.17.1.6. Driver element The optional <driver> element allows specifying further details related to the hypervisor driver that is used to provide the disk. The following options may be used: If the hypervisor supports multiple back-end drivers, the name attribute selects the primary back-end driver name, while the optional type attribute provides the sub-type. The optional cache attribute controls the cache mechanism. Possible values are: default , none , writethrough , writeback , directsync (similar to writethrough , but it bypasses the host physical machine page cache) and unsafe (host physical machine may cache all disk I/O, and sync requests from guest virtual machines are ignored). The optional error_policy attribute controls how the hypervisor behaves on a disk read or write error. Possible values are stop , report , ignore , and enospace . The default setting of error_policy is report . There is also an optional rerror_policy that controls behavior for read errors only. If no rerror_policy is given, error_policy is used for both read and write errors. If rerror_policy is given, it overrides the error_policy for read errors. Also note that enospace is not a valid policy for read errors, so if error_policy is set to enospace and no rerror_policy is given, the read error default setting, report , will be used. The optional io attribute controls specific policies on I/O; kvm guest virtual machines support threads and native . The optional ioeventfd attribute allows users to set domain I/O asynchronous handling for virtio disk devices. The default is determined by the hypervisor. Accepted values are on and off . Enabling this allows the guest virtual machine to be executed while a separate thread handles I/O. Typically, guest virtual machines experiencing high system CPU utilization during I/O will benefit from this. On the other hand, an overloaded host physical machine can increase guest virtual machine I/O latency. However, it is recommended that you do not change the default setting, and allow the hypervisor to determine the setting. Note The ioeventfd attribute is included in the <driver> element of the disk XML section and also the <driver> element of the device XML section. In the former case, it influences the virtIO disk, and in the latter case the SCSI disk. The optional event_idx attribute controls some aspects of device event processing and can be set to either on or off . If set to on , it will reduce the number of interrupts and exits for the guest virtual machine. The default is determined by the hypervisor and the default setting is on . When this behavior is not required, setting off forces the feature off. However, it is highly recommended that you not change the default setting, and allow the hypervisor to dictate the setting. 
The optional copy_on_read attribute controls whether to copy the read backing file into the image file. The accepted values can be either on or off . copy-on-read avoids accessing the same backing file sectors repeatedly, and is useful when the backing file is over a slow network. By default copy-on-read is off . The discard='unmap' attribute can be set to enable discard support. The same line can be replaced with discard='ignore' to disable. discard='ignore' is the default setting. 23.17.1.7. Additional Device Elements The following attributes may be used within the device element: <boot> - Specifies that the disk is bootable. Additional boot values: <order> - Determines the order in which devices will be tried during boot sequence. The per-device boot elements cannot be used together with general boot elements in the BIOS boot loader section. <encryption> - Specifies how the volume is encrypted. <readonly> - Indicates the device cannot be modified by the guest virtual machine. This setting is the default for disks with attribute <device='cdrom'> . <shareable> - Indicates the device is expected to be shared between domains (as long as hypervisor and operating system support this). If shareable is used, cache='no' should be used for that device. <transient> - Indicates that changes to the device contents should be reverted automatically when the guest virtual machine exits. With some hypervisors, marking a disk transient prevents the domain from participating in migration or snapshots. <serial> - Specifies the serial number of guest virtual machine's hard drive. For example, <serial> WD-WMAP9A966149 </serial> . <wwn> - Specifies the World Wide Name (WWN) of a virtual hard disk or CD-ROM drive. It must be composed of 16 hexadecimal digits. <vendor> - Specifies the vendor of a virtual hard disk or CD-ROM device. It must not be longer than 8 printable characters. <product> - Specifies the product of a virtual hard disk or CD-ROM device. It must not be longer than 16 printable characters. <host> - Supports the following attributes: name - specifies the host name port - specifies the port number transport - specifies the transport type socket - specifies the path to the socket The meaning of this element and the number of the elements depend on the protocol attribute as shown in Additional host attributes based on the protocol : nbd - Specifies a server running nbd-server and may be used for only one host physical machine. The default port for this protocol is 10809 . rbd - Monitors servers of RBD type and may be used for one or more host physical machines. sheepdog - Specifies one of the sheepdog servers (default is localhost:7000) and can be used with one or none of the host physical machines. gluster - Specifies a server running a glusterd daemon and may be used for only one host physical machine. The valid values for the transport attribute are tcp , rdma or unix . If nothing is specified, tcp is assumed. If transport is unix , the socket attribute specifies the path to the unix socket. <address> - Ties the disk to a given slot of a controller. The actual <controller> device can often be inferred but it can also be explicitly specified. The type attribute is mandatory, and is typically pci or drive . For a pci controller, additional attributes for bus , slot , and function must be present, as well as optional domain and multifunction . multifunction defaults to off . 
For a drive controller, additional attributes controller , bus , target , and unit are available, each with a default setting of 0 . auth - Provides the authentication credentials needed to access the source. It includes a mandatory attribute username , which identifies the user name to use during authentication, as well as a sub-element secret with mandatory attribute type . geometry - Provides the ability to override geometry settings. This is mostly useful for S390 DASD-disks or older DOS-disks. It can have the following parameters: cyls - Specifies the number of cylinders. heads - Specifies the number of heads. secs - Specifies the number of sectors per track. trans - Specifies the BIOS-Translation-Modes and can have the following values: none , lba or auto . blockio - Allows the block device to be overridden with any of the block device properties listed below: blockio options logical_block_size - Reports to the guest virtual machine operating system and describes the smallest units for disk I/O. physical_block_size - Reports to the guest virtual machine operating system and describes the disk's hardware sector size, which can be relevant for the alignment of disk data. 23.17.2. Device Addresses Many devices have an optional <address> sub-element to describe where the device is placed on the virtual bus presented to the guest virtual machine. If an address (or any optional attribute within an address) is omitted on input, libvirt will generate an appropriate address; but an explicit address is required if more control over layout is required. See below for device examples including an address element. Every address has a mandatory attribute type that describes which bus the device is on. The choice of which address to use for a given device is constrained in part by the device and the architecture of the guest virtual machine. For example, a disk device uses type='drive' , while a console device would use type='pci' on the 32-bit AMD and Intel, or AMD64 and Intel 64, guest virtual machines, or type='spapr-vio' on PowerPC64 pseries guest virtual machines. Each address type has additional optional attributes that control where on the bus the device will be placed. The additional attributes are as follows: type='pci' - PCI addresses have the following additional attributes: domain (a 2-byte hex integer, not currently used by KVM) bus (a hex value between 0 and 0xff, inclusive) slot (a hex value between 0x0 and 0x1f, inclusive) function (a value between 0 and 7, inclusive) Also available is the multifunction attribute, which controls turning on the multi-function bit for a particular slot or function in the PCI control register. This multifunction attribute defaults to 'off' , but should be set to 'on' for function 0 of a slot that will have multiple functions used. 
type='drive' - drive addresses have the following additional attributes: controller - (a 2-digit controller number) bus - (a 2-digit bus number) target - (a 2-digit target number) unit - (a 2-digit unit number on the bus) type='virtio-serial' - Each virtio-serial address has the following additional attributes: controller - (a 2-digit controller number) bus - (a 2-digit bus number) slot - (a 2-digit slot within the bus) type='ccid' - A CCID address, used for smart-cards, has the following additional attributes: bus - (a 2-digit bus number) slot - (a 2-digit slot within the bus) type='usb' - USB addresses have the following additional attributes: bus - (a hex value between 0 and 0xfff, inclusive) port - (a dotted notation of up to four octets, such as 1.2 or 2.1.3.1) type='spapr-vio' - On PowerPC pseries guest virtual machines, devices can be assigned to the SPAPR-VIO bus. It has a flat 64-bit address space; by convention, devices are generally assigned at a non-zero multiple of 0x1000, but other addresses are valid and permitted by libvirt . The additional reg attribute, which determines the hex value address of the starting register, can be assigned to this attribute. 23.17.3. Controllers Depending on the guest virtual machine architecture, it is possible to assign many virtual devices to a single bus. Under normal circumstances libvirt can automatically infer which controller to use for the bus. However, it may be necessary to provide an explicit <controller> element in the guest virtual machine XML: ... <devices> <controller type='ide' index='0'/> <controller type='virtio-serial' index='0' ports='16' vectors='4'/> <controller type='virtio-serial' index='1'> <address type='pci' domain='0x0000' bus='0x00' slot='0x0a' function='0x0'/> </controller> <controller type='scsi' index='0' model='virtio-scsi' num_queues='8'/> ... </devices> ... Figure 23.35. Controller Elements Each controller has a mandatory attribute type , which must be one of "ide", "fdc", "scsi", "sata", "usb", "ccid", or "virtio-serial" , and a mandatory attribute index which is the decimal integer describing in which order the bus controller is encountered (for use in controller attributes of address elements). The "virtio-serial" controller has two additional optional attributes, ports and vectors , which control how many devices can be connected through the controller. A <controller type='scsi'> has an optional attribute model , which is one of "auto", "buslogic", "ibmvscsi", "lsilogic", "lsisas1068", "virtio-scsi", or "vmpvscsi" . The <controller type='scsi'> also has an attribute num_queues which enables multi-queue support for the number of queues specified. In addition, an ioeventfd attribute can be used, which specifies whether the controller should use asynchronous handling on the SCSI disk. Accepted values are "on" and "off". A "usb" controller has an optional attribute model , which is one of "piix3-uhci", "piix4-uhci", "ehci", "ich9-ehci1", "ich9-uhci1", "ich9-uhci2", "ich9-uhci3", "vt82c686b-uhci", "pci-ohci" or "nec-xhci" . Additionally, if the USB bus needs to be explicitly disabled for the guest virtual machine, model='none' may be used. The PowerPC64 "spapr-vio" addresses do not have an associated controller. For controllers that are themselves devices on a PCI or USB bus, an optional sub-element address can specify the exact relationship of the controller to its master bus, with semantics given above. 
USB companion controllers have an optional sub-element master to specify the exact relationship of the companion to its master controller. A companion controller is on the same bus as its master, so the companion index value should be equal. ... <devices> <controller type='usb' index='0' model='ich9-ehci1'> <address type='pci' domain='0' bus='0' slot='4' function='7'/> </controller> <controller type='usb' index='0' model='ich9-uhci1'> <master startport='0'/> <address type='pci' domain='0' bus='0' slot='4' function='0' multifunction='on'/> </controller> ... </devices> ... Figure 23.36. Devices - controllers - USB 23.17.4. Device Leases When using a lock manager, you have the option to record device leases against a guest virtual machine. The lock manager will ensure that the guest virtual machine does not start unless the leases can be acquired. When configured using conventional management tools, the following section of the domain XML is affected: ... <devices> ... <lease> <lockspace>somearea</lockspace> <key>somekey</key> <target path='/some/lease/path' offset='1024'/> </lease> ... </devices> ... Figure 23.37. Devices - device leases The lease section can have the following arguments: lockspace - An arbitrary string that identifies the lockspace within which the key is held. Lock managers may impose extra restrictions on the format, or length of the lockspace name. key - An arbitrary string that uniquely identifies the lease to be acquired. Lock managers may impose extra restrictions on the format, or length of the key. target - The fully qualified path of the file associated with the lockspace. The offset specifies where the lease is stored within the file. If the lock manager does not require an offset, set this value to 0 . 23.17.5. Host Physical Machine Device Assignment 23.17.5.1. USB / PCI devices The host physical machine's USB and PCI devices can be passed through to the guest virtual machine using the hostdev element. To do so, use a management tool to configure the following section of the domain XML file: ... <devices> <hostdev mode='subsystem' type='usb'> <source startupPolicy='optional'> <vendor id='0x1234'/> <product id='0xbeef'/> </source> <boot order='2'/> </hostdev> </devices> ... Figure 23.38. Devices - Host physical machine device assignment Alternatively, the following can also be done: ... <devices> <hostdev mode='subsystem' type='pci' managed='yes'> <source> <address bus='0x06' slot='0x02' function='0x0'/> </source> <boot order='1'/> <rom bar='on' file='/etc/fake/boot.bin'/> </hostdev> </devices> ... Figure 23.39. Devices - Host physical machine device assignment alternative Alternatively, the following can also be done: ... <devices> <hostdev mode='subsystem' type='scsi'> <source> <adapter name='scsi_host0'/> <address type='scsi' bus='0' target='0' unit='0'/> </source> <readonly/> <address type='drive' controller='0' bus='0' target='0' unit='0'/> </hostdev> </devices> ... Figure 23.40. Devices - host physical machine scsi device assignment The components of this section of the domain XML are as follows: Table 23.16. Host physical machine device assignment elements Parameter Description hostdev This is the main element for describing host physical machine devices. It accepts the following options: mode - the value is always subsystem for USB and PCI devices. type - usb for USB devices and pci for PCI devices. 
managed - Toggles the Managed mode of the device: When set to yes for a PCI device, the device is automatically detached from the host machine before being attached to the guest machine, and re-attached to the host machine after the guest stops or the device is hot-unplugged. managed='yes' is recommended for general use of device assignment. When set to no or omitted for PCI and for USB devices, the device stays attached to the host. To make the device available to the guest, the user must use the virNodeDeviceDettach API or the virsh nodedev-detach command before starting the guest or hot plugging the device. In addition, they must use virNodeDeviceReAttach or virsh nodedev-reattach after hot-unplugging the device or stopping the guest to return the device to the host. managed='no' is mainly recommended for devices that are intended to be dedicated to a specific guest. source Describes the device as seen from the host physical machine. The USB device can be addressed by vendor or product ID using the vendor and product elements or by the device's address on the host physical machines using the address element. PCI devices on the other hand can only be described by their address. Note that the source element of USB devices may contain a startupPolicy attribute which can be used to define a rule for what to do if the specified host physical machine USB device is not found. The attribute accepts the following values: mandatory - Fails if missing for any reason (the default). requisite - Fails if missing on boot up, drops if missing on migrate/restore/revert. optional - Drops if missing at any start attempt. vendor, product These elements each have an id attribute that specifies the USB vendor and product ID. The IDs can be given in decimal, hexadecimal (starting with 0x) or octal (starting with 0) form. boot Specifies that the device is bootable. The order attribute determines the order in which devices will be tried during boot sequence. The per-device boot elements cannot be used together with general boot elements in BIOS boot loader section. rom Used to change how a PCI device's ROM is presented to the guest virtual machine. The optional bar attribute can be set to on or off , and determines whether or not the device's ROM will be visible in the guest virtual machine's memory map. (In PCI documentation, the rom bar setting controls the presence of the Base Address Register for the ROM). If no rom bar is specified, the default setting will be used. The optional file attribute is used to point to a binary file to be presented to the guest virtual machine as the device's ROM BIOS. This can be useful for example to provide a PXE boot ROM for a virtual function of an SR-IOV capable ethernet device (which has no boot ROMs for the VFs). address Also has a bus and device attribute to specify the USB bus and device number the device appears at on the host physical machine. The values of these attributes can be given in decimal, hexadecimal (starting with 0x) or octal (starting with 0) form. For PCI devices, the element carries 3 attributes allowing you to designate the device as found with lspci or with virsh nodedev-list . 23.17.5.2. Block / character devices The host physical machine's block / character devices can be passed through to the guest virtual machine by using management tools to modify the domain XML hostdev element. Note that this is only possible with container-based virtualization. ... <hostdev mode='capabilities' type='storage'> <source> <block>/dev/sdf1</block> </source> </hostdev> ... Figure 23.41. 
Devices - Host physical machine device assignment block character devices An alternative approach is this: ... <hostdev mode='capabilities' type='misc'> <source> <char>/dev/input/event3</char> </source> </hostdev> ... Figure 23.42. Devices - Host physical machine device assignment block character devices alternative 1 Another alternative approach is this: ... <hostdev mode='capabilities' type='net'> <source> <interface>eth0</interface> </source> </hostdev> ... Figure 23.43. Devices - Host physical machine device assignment block character devices alternative 2 The components of this section of the domain XML are as follows: Table 23.17. Block / character device elements Parameter Description hostdev This is the main container for describing host physical machine devices. For block/character devices, passthrough mode is always capabilities , and type is block for a block device and char for a character device. source This describes the device as seen from the host physical machine. For block devices, the path to the block device in the host physical machine operating system is provided in the nested block element, while for character devices, the char element is used. 23.17.6. Redirected devices USB device redirection through a character device is configured by modifying the following section of the domain XML: ... <devices> <redirdev bus='usb' type='tcp'> <source mode='connect' host='localhost' service='4000'/> <boot order='1'/> </redirdev> <redirfilter> <usbdev class='0x08' vendor='0x1234' product='0xbeef' version='2.00' allow='yes'/> <usbdev allow='no'/> </redirfilter> </devices> ... Figure 23.44. Devices - redirected devices The components of this section of the domain XML are as follows: Table 23.18. Redirected device elements Parameter Description redirdev This is the main container for describing redirected devices. bus must be usb for a USB device. An additional attribute type is required, matching one of the supported serial device types, to describe the host physical machine side of the tunnel: type='tcp' or type='spicevmc' (which uses the usbredir channel of a SPICE graphics device) are typical. The redirdev element has an optional sub-element, address , which can tie the device to a particular controller. Further sub-elements, such as source , may be required according to the given type , although a target sub-element is not required (since the consumer of the character device is the hypervisor itself, rather than a device visible in the guest virtual machine). boot Specifies that the device is bootable. The order attribute determines the order in which devices will be tried during boot sequence. The per-device boot elements cannot be used together with general boot elements in BIOS boot loader section. redirfilter This is used for creating the filter rule to filter out certain devices from redirection. It uses sub-element usbdev to define each filter rule. The class attribute is the USB Class code. 23.17.7. Smartcard Devices A virtual smartcard device can be supplied to the guest virtual machine via the smartcard element. A USB smartcard reader device on the host physical machine cannot be used on a guest virtual machine with device passthrough. This is because it cannot be made available to both the host physical machine and guest virtual machine, and can lock the host physical machine computer when it is removed from the guest virtual machine. 
Therefore, some hypervisors provide a specialized virtual device that can present a smartcard interface to the guest virtual machine, with several modes for describing how the credentials are obtained from the host physical machine or even a from a channel created to a third-party smartcard provider. Configure USB device redirection through a character device with management tools to modify the following section of the domain XML: ... <devices> <smartcard mode='host'/> <smartcard mode='host-certificates'> <certificate>cert1</certificate> <certificate>cert2</certificate> <certificate>cert3</certificate> <database>/etc/pki/nssdb/</database> </smartcard> <smartcard mode='passthrough' type='tcp'> <source mode='bind' host='127.0.0.1' service='2001'/> <protocol type='raw'/> <address type='ccid' controller='0' slot='0'/> </smartcard> <smartcard mode='passthrough' type='spicevmc'/> </devices> ... Figure 23.45. Devices - smartcard devices The smartcard element has a mandatory attribute mode . In each mode, the guest virtual machine sees a device on its USB bus that behaves like a physical USB CCID (Chip/Smart Card Interface Device) card. The mode attributes are as follows: Table 23.19. Smartcard mode elements Parameter Description mode='host' In this mode, the hypervisor relays all requests from the guest virtual machine into direct access to the host physical machine's smartcard via NSS. No other attributes or sub-elements are required. See below about the use of an optional address sub-element. mode='host-certificates' This mode allows you to provide three NSS certificate names residing in a database on the host physical machine, rather than requiring a smartcard to be plugged into the host physical machine. These certificates can be generated using the command certutil -d /etc/pki/nssdb -x -t CT,CT,CT -S -s CN=cert1 -n cert1, and the resulting three certificate names must be supplied as the content of each of three certificate sub-elements. An additional sub-element database can specify the absolute path to an alternate directory (matching the -d flag of the certutil command when creating the certificates); if not present, it defaults to /etc/pki/nssdb . mode='passthrough' Using this mode allows you to tunnel all requests through a secondary character device to a third-party provider (which may in turn be communicating to a smartcard or using three certificate files, rather than having the hypervisor directly communicate with the host physical machine. In this mode of operation, an additional attribute type is required, matching one of the supported serial device types, to describe the host physical machine side of the tunnel; type='tcp' or type='spicevmc' (which uses the smartcard channel of a SPICE graphics device) are typical. Further sub-elements, such as source , may be required according to the given type, although a target sub-element is not required (since the consumer of the character device is the hypervisor itself, rather than a device visible in the guest virtual machine). Each mode supports an optional sub-element address , which fine-tunes the correlation between the smartcard and a ccid bus controller. For more information, see Section 23.17.2, "Device Addresses" ). 23.17.8. Network Interfaces Modify the network interface devices using management tools to configure the following part of the domain XML: ... 
<devices> <interface type='direct' trustGuestRxFilters='yes'> <source dev='eth0'/> <mac address='52:54:00:5d:c7:9e'/> <boot order='1'/> <rom bar='off'/> </interface> </devices> ... Figure 23.46. Devices - network interfaces There are several possibilities for configuring the network interface for the guest virtual machine. This is done by setting a value to the interface element's type attribute. The following values may be used: "direct" - Attaches the guest virtual machine's NIC to the physical NIC on the host physical machine. For details and an example, see Section 23.17.8.6, "Direct attachment to physical interfaces" . "network" - This is the recommended configuration for general guest virtual machine connectivity on host physical machines with dynamic or wireless networking configurations. For details and an example, see Section 23.17.8.1, "Virtual networks" . "bridge" - This is the recommended configuration setting for guest virtual machine connectivity on host physical machines with static wired networking configurations. For details and an example, see Section 23.17.8.2, "Bridge to LAN" . "ethernet" - Provides a means for the administrator to execute an arbitrary script to connect the guest virtual machine's network to the LAN. For details and an example, see Section 23.17.8.5, "Generic Ethernet connection" . "hostdev" - Allows a PCI network device to be directly assigned to the guest virtual machine using generic device passthrough. For details and an example, see Section 23.17.8.7, "PCI passthrough" . "mcast" - A multicast group can be used to represent a virtual network. For details and an example, see Section 23.17.8.8, "Multicast tunnel" . "user" - Using the user option sets the user space SLIRP stack parameters provides a virtual LAN with NAT to the outside world. For details and an example, see Section 23.17.8.4, "User space SLIRP stack" . "server" - Using the server option creates a TCP client-server architecture in order to provide a virtual network where one guest virtual machine provides the server end of the network and all other guest virtual machines are configured as clients. For details and an example, see Section 23.17.8.9, "TCP tunnel" . Each of these options has a link to give more details. Additionally, each <interface> element can be defined with an optional <trustGuestRxFilters> attribute which allows host physical machine to detect and trust reports received from the guest virtual machine. These reports are sent each time the interface receives changes to the filter. This includes changes to the primary MAC address, the device address filter, or the vlan configuration. The <trustGuestRxFilters> attribute is disabled by default for security reasons. It should also be noted that support for this attribute depends on the guest network device model as well as on the host physical machine's connection type. Currently, it is only supported for the virtio device models and for macvtap connections on the host physical machine. A simple use case where it is recommended to set the optional parameter <trustGuestRxFilters> is if you want to give your guest virtual machines the permission to control host physical machine side filters, as any filters that are set by the guest will also be mirrored on the host. In addition to the attributes listed above, each <interface> element can take an optional <address> sub-element that can tie the interface to a particular PCI slot, with attribute type='pci' . For more information, see Section 23.17.2, "Device Addresses" . 
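As a purely illustrative sketch (the network name, device model, and slot number below are arbitrary examples rather than values taken from this guide's figures), an interface tied to a specific PCI slot could be declared as follows: <devices> <interface type='network'> <source network='default'/> <model type='virtio'/> <!-- the address sub-element ties this NIC to PCI slot 9 on bus 0 --> <address type='pci' domain='0x0000' bus='0x00' slot='0x09' function='0x0'/> </interface> </devices>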
23.17.8.1. Virtual networks This is the recommended configuration for general guest virtual machine connectivity on host physical machines with dynamic or wireless networking configurations (or multi-host physical machine environments where the host physical machine hardware details are described separately in a <network> definition). In addition, it provides a connection whose details are described by the named network definition. Depending on the virtual network's forward mode configuration, the network may be totally isolated (no <forward> element given), using NAT to connect to an explicit network device or to the default route ( forward mode='nat' ), routed with no NAT ( forward mode='route' ), or connected directly to one of the host physical machine's network interfaces (using macvtap) or bridge devices ( forward mode='bridge|private|vepa|passthrough' ). For networks with a forward mode of bridge , private , vepa , and passthrough , it is assumed that the host physical machine has any necessary DNS and DHCP services already set up outside the scope of libvirt. In the case of isolated, nat, and routed networks, DHCP and DNS are provided on the virtual network by libvirt, and the IP range can be determined by examining the virtual network configuration with virsh net-dumpxml [networkname] . The 'default' virtual network, which is set up out of the box, uses NAT to connect to the default route and has an IP range of 192.168.122.0/255.255.255.0. Each guest virtual machine will have an associated tun device created with a name of vnetN, which can also be overridden with the <target> element (refer to Section 23.17.8.11, "Overriding the target element" ). When the source of an interface is a network, a port group can be specified along with the name of the network; one network may have multiple portgroups defined, with each portgroup containing slightly different configuration information for different classes of network connections. Also, similar to <direct> network connections (described below), a connection of type network may specify a <virtualport> element, with configuration data to be forwarded to an 802.1Qbg or 802.1Qbh-compliant Virtual Ethernet Port Aggregator (VEPA) switch, or to an Open vSwitch virtual switch. Since the type of switch is dependent on the configuration setting in the <network> element on the host physical machine, it is acceptable to omit the <virtualport type> attribute. A <virtualport> element may also be specified in more than one place (for example, in both the interface definition and the network or portgroup definition). When the domain starts up, a complete <virtualport> element is constructed by merging together the type and attributes defined. This results in a newly-constructed virtual port. Note that attributes from lower-priority virtual ports cannot override attributes defined in higher-priority virtual ports. Interfaces take the highest priority, while port group is lowest priority. For example, to create a properly working network with both an 802.1Qbh switch and an Open vSwitch switch, you may choose to specify no type, but both a profileid and an interfaceid must be supplied. The other attributes to be filled in from the virtual port, such as managerid , typeid , or profileid , are optional. If you want to limit a guest virtual machine to connecting only to certain types of switches, you can specify the virtualport type, and only switches with the specified port type will connect. You can also further limit switch connectivity by specifying additional parameters.
As a result, if the port was specified and the host physical machine's network has a different type of virtualport, the connection of the interface will fail. The virtual network parameters are defined using management tools that modify the following part of the domain XML: ... <devices> <interface type='network'> <source network='default'/> </interface> ... <interface type='network'> <source network='default' portgroup='engineering'/> <target dev='vnet7'/> <mac address="00:11:22:33:44:55"/> <virtualport> <parameters instanceid='09b11c53-8b5c-4eeb-8f00-d84eaa0aaa4f'/> </virtualport> </interface> </devices> ... Figure 23.47. Devices - network interfaces- virtual networks 23.17.8.2. Bridge to LAN As mentioned in, Section 23.17.8, "Network Interfaces" , this is the recommended configuration setting for guest virtual machine connectivity on host physical machines with static wired networking configurations. Bridge to LAN provides a bridge from the guest virtual machine directly onto the LAN. This assumes there is a bridge device on the host physical machine which has one or more of the host physical machines physical NICs enslaved. The guest virtual machine will have an associated tun device created with a name of <vnetN> , which can also be overridden with the <target> element (refer to Section 23.17.8.11, "Overriding the target element" ). The <tun> device will be enslaved to the bridge. The IP range or network configuration is the same as what is used on the LAN. This provides the guest virtual machine full incoming and outgoing network access, just like a physical machine. On Linux systems, the bridge device is normally a standard Linux host physical machine bridge. On host physical machines that support Open vSwitch, it is also possible to connect to an Open vSwitch bridge device by adding virtualport type='openvswitch'/ to the interface definition. The Open vSwitch type virtualport accepts two parameters in its parameters element: an interfaceid which is a standard UUID used to uniquely identify this particular interface to Open vSwitch (if you do no specify one, a random interfaceid will be generated when first defining the interface), and an optional profileid which is sent to Open vSwitch as the interfaces <port-profile> . To set the bridge to LAN settings, use a management tool that will configure the following part of the domain XML: ... <devices> ... <interface type='bridge'> <source bridge='br0'/> </interface> <interface type='bridge'> <source bridge='br1'/> <target dev='vnet7'/> <mac address="00:11:22:33:44:55"/> </interface> <interface type='bridge'> <source bridge='ovsbr'/> <virtualport type='openvswitch'> <parameters profileid='menial' interfaceid='09b11c53-8b5c-4eeb-8f00-d84eaa0aaa4f'/> </virtualport> </interface> ... </devices> Figure 23.48. Devices - network interfaces- bridge to LAN 23.17.8.3. Setting a port masquerading range In cases where you want to set the port masquerading range, set the port as follows: <forward mode='nat'> <address start='1.2.3.4' end='1.2.3.10'/> </forward> ... Figure 23.49. Port Masquerading Range These values should be set using the iptables commands as shown in Section 17.3, "Network Address Translation" 23.17.8.4. User space SLIRP stack Setting the user space SLIRP stack parameters provides a virtual LAN with NAT to the outside world. The virtual network has DHCP and DNS services and will give the guest virtual machine an IP addresses starting from 10.0.2.15. The default router is 10.0.2.2 and the DNS server is 10.0.2.3. 
This networking is the only option for unprivileged users who need their guest virtual machines to have outgoing access. The user space SLIRP stack parameters are defined in the following part of the domain XML: ... <devices> <interface type='user'/> ... <interface type='user'> <mac address="00:11:22:33:44:55"/> </interface> </devices> ... Figure 23.50. Devices - network interfaces- User space SLIRP stack 23.17.8.5. Generic Ethernet connection This provides a means for the administrator to execute an arbitrary script to connect the guest virtual machine's network to the LAN. The guest virtual machine will have a <tun> device created with a name of vnetN , which can also be overridden with the <target> element. After creating the tun device a shell script will be run and complete the required host physical machine network integration. By default, this script is called /etc/qemu-ifup but can be overridden (refer to Section 23.17.8.11, "Overriding the target element" ). The generic ethernet connection parameters are defined in the following part of the domain XML: ... <devices> <interface type='ethernet'/> ... <interface type='ethernet'> <target dev='vnet7'/> <script path='/etc/qemu-ifup-mynet'/> </interface> </devices> ... Figure 23.51. Devices - network interfaces- generic ethernet connection 23.17.8.6. Direct attachment to physical interfaces This directly attaches the guest virtual machine's NIC to the physical interface of the host physical machine, if the physical interface is specified. This requires the Linux macvtap driver to be available. One of the following mode attribute values vepa ( 'Virtual Ethernet Port Aggregator'), bridge or private can be chosen for the operation mode of the macvtap device. vepa is the default mode. Manipulating direct attachment to physical interfaces involves setting the following parameters in this section of the domain XML: ... <devices> ... <interface type='direct'> <source dev='eth0' mode='vepa'/> </interface> </devices> ... Figure 23.52. Devices - network interfaces- direct attachment to physical interfaces The individual modes cause the delivery of packets to behave as shown in Table 23.20, "Direct attachment to physical interface elements" : Table 23.20. Direct attachment to physical interface elements Element Description vepa All of the guest virtual machines' packets are sent to the external bridge. Packets whose destination is a guest virtual machine on the same host physical machine as where the packet originates from are sent back to the host physical machine by the VEPA capable bridge (today's bridges are typically not VEPA capable). bridge Packets whose destination is on the same host physical machine as where they originate from are directly delivered to the target macvtap device. Both origin and destination devices need to be in bridge mode for direct delivery. If either one of them is in vepa mode, a VEPA capable bridge is required. private All packets are sent to the external bridge and will only be delivered to a target virtual machine on the same host physical machine if they are sent through an external router or gateway and that device sends them back to the host physical machine. This procedure is followed if either the source or destination device is in private mode. passthrough This feature attaches a virtual function of a SR-IOV capable NIC directly to a guest virtual machine without losing the migration capability. All packets are sent to the VF/IF of the configured network device. 
Depending on the capabilities of the device, additional prerequisites or limitations may apply; for example, this requires kernel 2.6.38 or later. The network access of directly attached virtual machines can be managed by the hardware switch to which the physical interface of the host physical machine is connected to. The interface can have additional parameters as shown below, if the switch conforms to the IEEE 802.1Qbg standard. The parameters of the virtualport element are documented in more detail in the IEEE 802.1Qbg standard. The values are network specific and should be provided by the network administrator. In 802.1Qbg terms, the Virtual Station Interface (VSI) represents the virtual interface of a virtual machine. Note that IEEE 802.1Qbg requires a non-zero value for the VLAN ID. Additional elements that can be manipulated are described in Table 23.21, "Direct attachment to physical interface additional elements" : Table 23.21. Direct attachment to physical interface additional elements Element Description managerid The VSI Manager ID identifies the database containing the VSI type and instance definitions. This is an integer value and the value 0 is reserved. typeid The VSI Type ID identifies a VSI type characterizing the network access. VSI types are typically managed by network administrator. This is an integer value. typeidversion The VSI Type Version allows multiple versions of a VSI Type. This is an integer value. instanceid The VSI Instance ID Identifier is generated when a VSI instance (a virtual interface of a virtual machine) is created. This is a globally unique identifier. profileid The profile ID contains the name of the port profile that is to be applied onto this interface. This name is resolved by the port profile database into the network parameters from the port profile, and those network parameters will be applied to this interface. Additional parameters in the domain XML include: ... <devices> ... <interface type='direct'> <source dev='eth0.2' mode='vepa'/> <virtualport type="802.1Qbg"> <parameters managerid="11" typeid="1193047" typeidversion="2" instanceid="09b11c53-8b5c-4eeb-8f00-d84eaa0aaa4f"/> </virtualport> </interface> </devices> ... Figure 23.53. Devices - network interfaces- direct attachment to physical interfaces additional parameters The interface can have additional parameters as shown below if the switch conforms to the IEEE 802.1Qbh standard. The values are network specific and should be provided by the network administrator. Additional parameters in the domain XML include: ... <devices> ... <interface type='direct'> <source dev='eth0' mode='private'/> <virtualport type='802.1Qbh'> <parameters profileid='finance'/> </virtualport> </interface> </devices> ... Figure 23.54. Devices - network interfaces - direct attachment to physical interfaces more additional parameters The profileid attribute contains the name of the port profile to be applied to this interface. This name is resolved by the port profile database into the network parameters from the port profile, and those network parameters will be applied to this interface. 23.17.8.7. 
PCI passthrough A PCI network device (specified by the source element) is directly assigned to the guest virtual machine using generic device passthrough, after first optionally setting the device's MAC address to the configured value, and associating the device with an 802.1Qbh capable switch using an optionally specified virtualport element (see the examples of virtualport given above for type='direct' network devices). Note that due to limitations in standard single-port PCI ethernet card driver design, only SR-IOV (Single Root I/O Virtualization) virtual function (VF) devices can be assigned in this manner. To assign a standard single-port PCI or PCIe ethernet card to a guest virtual machine, use the traditional hostdev device definition. Note that this "intelligent passthrough" of network devices is very similar to the functionality of a standard hostdev device, the difference being that this method allows specifying a MAC address and virtualport for the passed-through device. If these capabilities are not required, if you have a standard single-port PCI, PCIe, or USB network card that does not support SR-IOV (and hence would anyway lose the configured MAC address during reset after being assigned to the guest virtual machine domain), or if you are using libvirt version older than 0.9.11, use standard hostdev definition to assign the device to the guest virtual machine instead of interface type='hostdev' . ... <devices> <interface type='hostdev'> <driver name='vfio'/> <source> <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/> </source> <mac address='52:54:00:6d:90:02'> <virtualport type='802.1Qbh'> <parameters profileid='finance'/> </virtualport> </interface> </devices> ... Figure 23.55. Devices - network interfaces- PCI passthrough 23.17.8.8. Multicast tunnel A multicast group can be used to represent a virtual network. Any guest virtual machine with network devices within the same multicast group will communicate with each other, even if they reside across multiple physical host physical machines. This mode may be used as an unprivileged user. There is no default DNS or DHCP support and no outgoing network access. To provide outgoing network access, one of the guest virtual machines should have a second NIC which is connected to one of the first 4 network types in order to provide appropriate routing. The multicast protocol is compatible with protocols used by user mode Linux guest virtual machines as well. Note that the source address used must be from the multicast address block. A multicast tunnel is created by manipulating the interface type using a management tool and setting it to mcast , and providing a mac address and source address , for example: ... <devices> <interface type='mcast'> <mac address='52:54:00:6d:90:01'> <source address='230.0.0.1' port='5558'/> </interface> </devices> ... Figure 23.56. Devices - network interfaces- multicast tunnel 23.17.8.9. TCP tunnel Creating a TCP client-server architecture is another way to provide a virtual network where one guest virtual machine provides the server end of the network and all other guest virtual machines are configured as clients. All network traffic between the guest virtual machines is routed through the guest virtual machine that is configured as the server. This model is also available for use to unprivileged users. There is no default DNS or DHCP support and no outgoing network access. 
To provide outgoing network access, one of the guest virtual machines should have a second NIC which is connected to one of the first 4 network types, thereby providing the appropriate routing. A TCP tunnel is created by manipulating the interface type using a management tool and setting it to server on the guest virtual machine that provides the server end and to client on all other guest virtual machines, and providing a mac address and source address , for example: ... <devices> <interface type='server'> <mac address='52:54:00:22:c9:42'/> <source address='192.168.0.1' port='5558'/> </interface> ... <interface type='client'> <mac address='52:54:00:8b:c9:51'/> <source address='192.168.0.1' port='5558'/> </interface> </devices> ... Figure 23.57. Devices - network interfaces- TCP tunnel 23.17.8.10. Setting NIC driver-specific options Some NICs may have tunable driver-specific options. These options are set as attributes of the driver sub-element of the interface definition. These options are set by using management tools to configure the following sections of the domain XML: <devices> <interface type='network'> <source network='default'/> <target dev='vnet1'/> <model type='virtio'/> <driver name='vhost' txmode='iothread' ioeventfd='on' event_idx='off'/> </interface> </devices> ... Figure 23.58. Devices - network interfaces- setting NIC driver-specific options The following attributes are available for the "virtio" NIC driver: Table 23.22. virtio NIC driver elements Parameter Description name The optional name attribute forces which type of back-end driver to use. The value can be either kvm (a user-space back-end) or vhost (a kernel back-end, which requires the vhost module to be provided by the kernel); an attempt to require the vhost driver without kernel support will be rejected. The default setting is vhost if the vhost driver is present, but will silently fall back to kvm if not. txmode Specifies how to handle transmission of packets when the transmit buffer is full. The value can be either iothread or timer . If set to iothread , packet tx is all done in an iothread in the bottom half of the driver (this option translates into adding "tx=bh" to the kvm command-line "-device" virtio-net-pci option). If set to timer , tx work is done in KVM, and if there is more tx data than can be sent at the present time, a timer is set before KVM moves on to do other things; when the timer fires, another attempt is made to send more data. It is not recommended to change this value. ioeventfd Sets domain I/O asynchronous handling for the interface device. The default is left to the discretion of the hypervisor. Accepted values are on and off . Enabling this option allows KVM to execute a guest virtual machine while a separate thread handles I/O. Typically, guest virtual machines experiencing high system CPU utilization during I/O will benefit from this. On the other hand, overloading the physical host machine may also increase guest virtual machine I/O latency. It is not recommended to change this value. event_idx The event_idx attribute controls some aspects of device event processing. The value can be either on or off . on is the default, which reduces the number of interrupts and exits for the guest virtual machine. In situations where this behavior is sub-optimal, this attribute provides a way to force the feature off. It is not recommended to change this value. 23.17.8.11. Overriding the target element To override the target element, use a management tool to make the following changes to the domain XML: ...
<devices> <interface type='network'> <source network='default'/> <target dev='vnet1'/> </interface> </devices> ... Figure 23.59. Devices - network interfaces- overriding the target element If no target is specified, certain hypervisors will automatically generate a name for the created tun device. This name can be manually specified, however the name must not start with either vnet or vif , which are prefixes reserved by libvirt and certain hypervisors. Manually-specified targets using these prefixes will be ignored. 23.17.8.12. Specifying boot order To specify the boot order, use a management tool to make the following changes to the domain XML: ... <devices> <interface type='network'> <source network='default'/> <target dev='vnet1'/> <boot order='1'/> </interface> </devices> ... Figure 23.60. Specifying boot order In hypervisors which support it, you can set a specific NIC to be used for the network boot. The order of attributes determine the order in which devices will be tried during boot sequence. Note that the per-device boot elements cannot be used together with general boot elements in BIOS boot loader section. 23.17.8.13. Interface ROM BIOS configuration To specify the ROM BIOS configuration settings, use a management tool to make the following changes to the domain XML: ... <devices> <interface type='network'> <source network='default'/> <target dev='vnet1'/> <rom bar='on' file='/etc/fake/boot.bin'/> </interface> </devices> ... Figure 23.61. Interface ROM BIOS configuration For hypervisors that support it, you can change how a PCI Network device's ROM is presented to the guest virtual machine. The bar attribute can be set to on or off , and determines whether or not the device's ROM will be visible in the guest virtual machine's memory map. (In PCI documentation, the rom bar setting controls the presence of the Base Address Register for the ROM). If no rom bar is specified, the KVM default will be used (older versions of KVM used off for the default, while newer KVM hypervisors default to on ). The optional file attribute is used to point to a binary file to be presented to the guest virtual machine as the device's ROM BIOS. This can be useful to provide an alternative boot ROM for a network device. 23.17.8.14. Quality of service (QoS) Incoming and outgoing traffic can be shaped independently to set Quality of Service (QoS). The bandwidth element can have at most one inbound and one outbound child elements. Leaving any of these child elements out results in no QoS being applied on that traffic direction. Therefore, to shape only a domain's incoming traffic, use inbound only, and vice versa. Each of these elements has one mandatory attribute average (or floor as described below). Average specifies the average bit rate on the interface being shaped. In addition, there are two optional attributes: peak - This attribute specifies the maximum rate at which the bridge can send data, in kilobytes a second. A limitation of this implementation is this attribute in the outbound element is ignored, as Linux ingress filters do not know it yet. burst - Specifies the amount of bytes that can be burst at peak speed. Accepted values for attributes are integer numbers. The units for average and peak attributes are kilobytes per second, whereas burst is only set in kilobytes. In addition, inbound traffic can optionally have a floor attribute. This guarantees minimal throughput for shaped interfaces. 
Using the floor attribute requires that all traffic goes through one point where QoS decisions can take place. As such, it can only be used in cases where the interface is of type='network' , with a forward type of route , nat , or no forward at all. Note that within a virtual network, all connected interfaces are required to have at least the inbound QoS set ( average at least), but the floor attribute does not require specifying average . However, the peak and burst attributes still require average . At the present time, ingress qdiscs may not have any classes, and therefore floor may be applied only on inbound and not outbound traffic. To specify the QoS configuration settings, use a management tool to make the following changes to the domain XML: ... <devices> <interface type='network'> <source network='default'/> <target dev='vnet0'/> <bandwidth> <inbound average='1000' peak='5000' floor='200' burst='1024'/> <outbound average='128' peak='256' burst='256'/> </bandwidth> </interface> </devices> ... Figure 23.62. Quality of service 23.17.8.15. Setting VLAN tag (on supported network types only) To specify the VLAN tag configuration settings, use a management tool to make the following changes to the domain XML: ... <devices> <interface type='bridge'> <vlan> <tag id='42'/> </vlan> <source bridge='ovsbr0'/> <virtualport type='openvswitch'> <parameters interfaceid='09b11c53-8b5c-4eeb-8f00-d84eaa0aaa4f'/> </virtualport> </interface> </devices> ... Figure 23.63. Setting VLAN tag (on supported network types only) If the network connection used by the guest virtual machine supports VLAN tagging transparent to the guest virtual machine, an optional vlan element can specify one or more VLAN tags to apply to the guest virtual machine's network traffic. Only OpenvSwitch and type='hostdev' SR-IOV interfaces support transparent VLAN tagging of guest virtual machine traffic; other interfaces, including standard Linux bridges and libvirt's own virtual networks, do not support it. 802.1Qbh (vn-link) and 802.1Qbg (VEPA) switches provide their own methods (outside of libvirt) to tag guest virtual machine traffic onto specific VLANs. To allow for specification of multiple tags (in the case of VLAN trunking), the tag subelement specifies which VLAN tag to use (for example, tag id='42'/ ). If an interface has more than one vlan element defined, it is assumed that the user wants to do VLAN trunking using all the specified tags. If VLAN trunking with a single tag is needed, the optional attribute trunk='yes' can be added to the top-level vlan element. 23.17.8.16. Modifying virtual link state This element sets the virtual network link state. Possible values for attribute state are up and down . If down is specified as the value, the interface behaves as if the network cable is disconnected. Default behavior if this element is unspecified is up . To specify the virtual link state configuration settings, use a management tool to make the following changes to the domain XML: ... <devices> <interface type='network'> <source network='default'/> <target dev='vnet0'/> <link state='down'/> </interface> </devices> ... Figure 23.64. Modifying virtual link state 23.17.9. Input Devices Input devices allow interaction with the graphical framebuffer in the guest virtual machine. When enabling the framebuffer, an input device is automatically provided. It may be possible to add additional devices explicitly, for example to provide a graphics tablet for absolute cursor movement.
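As an illustrative sketch of such an explicit addition (this snippet is an example rather than one of the numbered figures in this guide), a USB tablet providing absolute pointer positioning can be declared with a single element: <devices> <!-- absolute-positioning pointer; helps keep the host and guest cursors in sync in VNC or SPICE sessions --> <input type='tablet' bus='usb'/> </devices>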
To specify the input device configuration settings, use a management tool to make the following changes to the domain XML: ... <devices> <input type='mouse' bus='usb'/> </devices> ... Figure 23.65. Input devices The <input> element has one mandatory attribute: type , which can be set to mouse or tablet . tablet provides absolute cursor movement, while mouse uses relative movement. The optional bus attribute can be used to refine the exact device type and can be set to kvm (paravirtualized), ps2 , and usb . The input element has an optional sub-element <address> , which can tie the device to a particular PCI slot, as documented above. 23.17.10. Hub Devices A hub is a device that expands a single port into several so that there are more ports available to connect devices to a host physical machine system. To specify the hub device configuration settings, use a management tool to make the following changes to the domain XML: ... <devices> <hub type='usb'/> </devices> ... Figure 23.66. Hub devices The hub element has one mandatory attribute, type , which can only be set to usb . The hub element has an optional sub-element, address , with type='usb' , which can tie the device to a particular controller. 23.17.11. Graphical Framebuffers A graphics device allows for graphical interaction with the guest virtual machine operating system. A guest virtual machine will typically have either a framebuffer or a text console configured to allow interaction with the user. To specify the graphical framebuffer device configuration settings, use a management tool to make the following changes to the domain XML: ... <devices> <graphics type='sdl' display=':0.0'/> <graphics type='vnc' port='5904'> <listen type='address' address='1.2.3.4'/> </graphics> <graphics type='rdp' autoport='yes' multiUser='yes' /> <graphics type='desktop' fullscreen='yes'/> <graphics type='spice'> <listen type='network' network='rednet'/> </graphics> </devices> ... Figure 23.67. Graphical framebuffers The graphics element has a mandatory type attribute, which takes the value sdl , vnc , rdp , desktop or spice , as explained in the tables below: Table 23.23. Graphical framebuffer main elements Parameter Description sdl This displays a window on the host physical machine desktop. It accepts the following optional arguments: A display attribute for the display to use An xauth attribute for the authentication identifier An optional fullscreen attribute accepting values yes or no vnc Starts a VNC server. The port attribute specifies the TCP port number (with -1 as legacy syntax indicating that it should be auto-allocated). The autoport attribute is the preferred syntax for indicating auto-allocation of the TCP port to use. The listen attribute is an IP address for the server to listen on. The passwd attribute provides a VNC password in clear text. The keymap attribute specifies the keymap to use. It is possible to set a limit on the validity of the password be giving an timestamp passwdValidTo='2010-04-09T15:51:00' assumed to be in UTC. The connected attribute allows control of connected client during password changes. VNC accepts the keep value only; note that it may not be supported by all hypervisors. Rather than using listen/port, KVM supports a socket attribute for listening on a UNIX domain socket path. spice Starts a SPICE server. The port attribute specifies the TCP port number (with -1 as legacy syntax indicating that it should be auto-allocated), while tlsPort gives an alternative secure port number. 
The autoport attribute is the new preferred syntax for indicating auto-allocation of both port numbers. The listen attribute is an IP address for the server to listen on. The passwd attribute provides a SPICE password in clear text. The keymap attribute specifies the keymap to use. It is possible to set a limit on the validity of the password by giving a timestamp passwdValidTo='2010-04-09T15:51:00' , assumed to be in UTC. The connected attribute allows control of a connected client during password changes. SPICE accepts keep to keep a client connected, disconnect to disconnect the client, and fail to fail changing the password. Note that this is not supported by all hypervisors. The defaultMode attribute sets the default channel security policy; valid values are secure , insecure and the default any (which is secure if possible, but falls back to insecure rather than erroring out if no secure path is available). When SPICE has both a normal and a TLS-secured TCP port configured, it may be desirable to restrict what channels can be run on each port. To do this, add one or more channel elements inside the main graphics element. Valid channel names include main , display , inputs , cursor , playback , record , smartcard , and usbredir . To specify the SPICE configuration settings, use a management tool to make the following changes to the domain XML: <graphics type='spice' port='-1' tlsPort='-1' autoport='yes'> <channel name='main' mode='secure'/> <channel name='record' mode='insecure'/> <image compression='auto_glz'/> <streaming mode='filter'/> <clipboard copypaste='no'/> <mouse mode='client'/> </graphics> Figure 23.68. Sample SPICE configuration SPICE supports variable compression settings for audio, images and streaming. These settings are configured using the compression attribute in the following elements: image to set image compression (accepts auto_glz , auto_lz , quic , glz , lz , off ) jpeg for JPEG compression for images over WAN (accepts auto , never , always ) zlib for configuring WAN image compression (accepts auto , never , always ) and playback for enabling audio stream compression (accepts on or off ) The streaming element sets streaming mode. The mode attribute can be set to filter , all or off . In addition, copy and paste functionality (through the SPICE agent) is set by the clipboard element. It is enabled by default, and can be disabled by setting the copypaste property to no . The mouse element sets mouse mode. The mode attribute can be set to server or client . If no mode is specified, the KVM default will be used ( client mode). Additional elements include: Table 23.24. Additional graphical framebuffer elements Parameter Description rdp Starts an RDP server. The port attribute specifies the TCP port number (with -1 as legacy syntax indicating that it should be auto-allocated). The autoport attribute is the preferred syntax for indicating auto-allocation of the TCP port to use. The multiUser attribute is a boolean deciding whether multiple simultaneous connections to the virtual machine are permitted. The replaceUser attribute decides whether the existing connection must be dropped and a new connection must be established by the VRDP server, when a new client connects in single connection mode. desktop This value is currently reserved for VirtualBox domains. It displays a window on the host physical machine desktop, similarly to sdl , but uses the VirtualBox viewer. Just like sdl , it accepts the optional attributes display and fullscreen .
listen Rather than inputting the address information used to set up the listening socket for graphics types vnc and spice , the listen attribute, a separate sub-element of graphics , can be specified (see the examples above). listen accepts the following attributes: type - Set to either address or network . This tells whether this listen element is specifying the address to be used directly, or by naming a network (which will then be used to determine an appropriate address for listening). address - This attribute will contain either an IP address or host name (which will be resolved to an IP address via a DNS query) to listen on. In the "live" XML of a running domain, this attribute will be set to the IP address used for listening, even if type='network' . network - If type='network' , the network attribute will contain the name of a network in libvirt's list of configured networks. The named network configuration will be examined to determine an appropriate listen address. For example, if the network has an IPv4 address in its configuration (for example, if it has a forward type of route, NAT, or an isolated type), the first IPv4 address listed in the network's configuration will be used. If the network is describing a host physical machine bridge, the first IPv4 address associated with that bridge device will be used. If the network is describing one of the 'direct' (macvtap) modes, the first IPv4 address of the first forward dev will be used. 23.17.12. Video Devices To specify the video device configuration settings, use a management tool to make the following changes to the domain XML: ... <devices> <video> <model type='vga' vram='8192' heads='1'> <acceleration accel3d='yes' accel2d='yes'/> </model> </video> </devices> ... Figure 23.69. Video devices The graphics element has a mandatory type attribute which takes the value "sdl", "vnc", "rdp" or "desktop" as explained below: Table 23.25. Graphical framebuffer elements Parameter Description video The video element is the container for describing video devices. For backwards compatibility, if no video is set but there is a graphics element in the domain XML, then libvirt will add a default video according to the guest virtual machine type. If "ram" or "vram" are not supplied, a default value is used. model This has a mandatory type attribute which takes the value vga , cirrus , vmvga , kvm , vbox , or qxl depending on the hypervisor features available. You can also provide the amount of video memory in kibibytes (blocks of 1024 bytes) using vram and the number of figure with heads. acceleration If acceleration is supported it should be enabled using the accel3d and accel2d attributes in the acceleration element. address The optional address sub-element can be used to tie the video device to a particular PCI slot. 23.17.13. Consoles, Serial, and Channel Devices A character device provides a way to interact with the virtual machine. Paravirtualized consoles, serial ports, and channels are all classed as character devices and are represented using the same syntax. To specify the consoles, channel and other device configuration settings, use a management tool to make the following changes to the domain XML: ... <devices> <serial type='pty'> <source path='/dev/pts/3'/> <target port='0'/> </serial> <console type='pty'> <source path='/dev/pts/4'/> <target port='0'/> </console> <channel type='unix'> <source mode='bind' path='/tmp/guestfwd'/> <target type='guestfwd' address='10.0.2.1' port='4600'/> </channel> </devices> ... Figure 23.70. 
Consoles, serial, and channel devices In each of these directives, the top-level element name ( serial , console , channel ) describes how the device is presented to the guest virtual machine. The guest virtual machine interface is configured by the target element. The interface presented to the host physical machine is given in the type attribute of the top-level element. The host physical machine interface is configured by the source element. The source element may contain an optional seclabel to override the way that labeling is done on the socket path. If this element is not present, the security label is inherited from the per-domain setting. Each character device element has an optional sub-element address which can tie the device to a particular controller or PCI slot. Note Parallel ports, as well as the isa-parallel device, are no longer supported. 23.17.14. Guest Virtual Machine Interfaces A character device presents itself to the guest virtual machine as one of the following types. To set the serial port, use a management tool to make the following change to the domain XML: ... <devices> <serial type='pty'> <source path='/dev/pts/3'/> <target port='0'/> </serial> </devices> ... Figure 23.71. Guest virtual machine interface serial port <target> can have a port attribute, which specifies the port number. Ports are numbered starting from 0. There are usually 0, 1 or 2 serial ports. There is also an optional type attribute, which has two choices for its value, isa-serial or usb-serial . If type is missing, isa-serial will be used by default. For usb-serial , an optional sub-element <address> with type='usb' can tie the device to a particular controller, documented above. The <console> element is used to represent interactive consoles. Depending on the type of guest virtual machine in use, the consoles might be paravirtualized devices, or they might be a clone of a serial device, according to the following rules: If no targetType attribute is set, then the default device type is according to the hypervisor's rules. The default type will be added when re-querying the XML fed into libvirt. For fully virtualized guest virtual machines, the default device type will usually be a serial port. If the targetType attribute is serial , and if no <serial> element exists, the console element will be copied to the <serial> element. If a <serial> element does already exist, the console element will be ignored. If the targetType attribute is not serial , it will be treated normally. Only the first <console> element may use a targetType of serial . Secondary consoles must all be paravirtualized. On s390, the console element may use a targetType of sclp or sclplm (line mode). SCLP is the native console type for s390. There is no controller associated to SCLP consoles. In the example below, a virtio console device is exposed in the guest virtual machine as /dev/hvc[0-7] (for more information, see the Fedora project's virtio-serial page ): ... <devices> <console type='pty'> <source path='/dev/pts/4'/> <target port='0'/> </console> <!-- KVM virtio console --> <console type='pty'> <source path='/dev/pts/5'/> <target type='virtio' port='0'/> </console> </devices> ... ... <devices> <!-- KVM s390 sclp console --> <console type='pty'> <source path='/dev/pts/1'/> <target type='sclp' port='0'/> </console> </devices> ... Figure 23.72. Guest virtual machine interface - virtio console device If the console is presented as a serial port, the <target> element has the same attributes as for a serial port. 
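Once such a console is defined and the guest virtual machine is running, it can usually be reached from the host physical machine with the virsh console command (the guest name below is only an example); press Ctrl+] to leave the console session: # virsh console guest1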
There is usually only one console. 23.17.15. Channel This represents a private communication channel between the host physical machine and the guest virtual machine. It is manipulated by making changes to a guest virtual machine using a management tool to edit following section of the domain XML: ... <devices> <channel type='unix'> <source mode='bind' path='/tmp/guestfwd'/> <target type='guestfwd' address='10.0.2.1' port='4600'/> </channel> <!-- KVM virtio channel --> <channel type='pty'> <target type='virtio' name='arbitrary.virtio.serial.port.name'/> </channel> <channel type='unix'> <source mode='bind' path='/var/lib/libvirt/kvm/f16x86_64.agent'/> <target type='virtio' name='org.kvm.guest_agent.0'/> </channel> <channel type='spicevmc'> <target type='virtio' name='com.redhat.spice.0'/> </channel> </devices> ... Figure 23.73. Channel This can be implemented in a variety of ways. The specific type of <channel> is given in the type attribute of the <target> element. Different channel types have different target attributes as follows: guestfwd - Dictates that TCP traffic sent by the guest virtual machine to a given IP address and port is forwarded to the channel device on the host physical machine. The target element must have address and port attributes. virtio - a paravirtualized virtio channel. In a Linux guest operating system, the <channel> configuration changes the content of /dev/vport* files. If the optional element name is specified, the configuration instead uses a /dev/virtio-ports/USDname file. For more information, see the Fedora project's virtio-serial page . The optional element address can tie the channel to a particular type='virtio-serial' controller, documented above. With KVM, if name is "org.kvm.guest_agent.0", then libvirt can interact with a guest agent installed in the guest virtual machine, for actions such as guest virtual machine shutdown or file system quiescing. spicevmc - Paravirtualized SPICE channel. The domain must also have a SPICE server as a graphics device, at which point the host physical machine piggy-backs messages across the main channel. The target element must be present, with attribute type='virtio'; an optional attribute name controls how the guest virtual machine will have access to the channel, and defaults to name='com.redhat.spice.0' . The optional <address> element can tie the channel to a particular type='virtio-serial' controller. 23.17.16. Host Physical Machine Interface A character device presents itself to the host physical machine as one of the following types: Table 23.26. Character device elements Parameter Description XML snippet Domain logfile Disables all input on the character device, and sends output into the virtual machine's logfile. Device logfile A file is opened and all data sent to the character device is written to the file. Note that the destination directory must have the virt_log_t SELinux label for a guest with this setting to start successfully. Virtual console Connects the character device to the graphical framebuffer in a virtual console. This is typically accessed using a special hotkey sequence such as "ctrl+alt+3". Null device Connects the character device to the void. No data is ever provided to the input. All data written is discarded. Pseudo TTY A Pseudo TTY is allocated using /dev/ptmx . A suitable client such as virsh console can connect to interact with the serial port locally. 
NB Special case NB special case if <console type='pty'> , then the TTY path is also duplicated as an attribute tty='/dev/pts/3' on the top level <console> tag. This provides compat with existing syntax for <console> tags. Host physical machine device proxy The character device is passed through to the underlying physical character device. The device types must match, for example the emulated serial port should only be connected to a host physical machine serial port - do not connect a serial port to a parallel port. Named pipe The character device writes output to a named pipe. See the pipe(7) man page for more info. TCP client-server The character device acts as a TCP client connecting to a remote server. Or as a TCP server waiting for a client connection. Alternatively you can use telnet instead of raw TCP. In addition, you can also use telnets (secure telnet) and tls. UDP network console The character device acts as a UDP netconsole service, sending and receiving packets. This is a lossy service. UNIX domain socket client-server The character device acts as a UNIX domain socket server, accepting connections from local clients. 23.17.17. Sound Devices A virtual sound card can be attached to the host physical machine using the sound element. ... <devices> <sound model='ac97'/> </devices> ... Figure 23.74. Virtual sound card The sound element has one mandatory attribute, model , which specifies what real sound device is emulated. Valid values are specific to the underlying hypervisor, though typical choices are 'sb16' , 'ac97' , and 'ich6' . In addition, a sound element with 'ich6' model set can have optional codec sub-elements to attach various audio codecs to the audio device. If not specified, a default codec will be attached to allow playback and recording. Valid values are 'duplex' (advertises a line-in and a line-out) and 'micro' (advertises a speaker and a microphone). ... <devices> <sound model='ich6'> <codec type='micro'/> <sound/> </devices> ... Figure 23.75. Sound Devices Each sound element has an optional sub-element <address> which can tie the device to a particular PCI slot, documented above. Note The es1370 sound device is no longer supported in Red Hat Enterprise Linux 7. Use ac97 instead. 23.17.18. Watchdog Device A virtual hardware watchdog device can be added to the guest virtual machine using the <watchdog> element. The watchdog device requires an additional driver and management daemon in the guest virtual machine. Currently there is no support notification when the watchdog fires. ... <devices> <watchdog model='i6300esb'/> </devices> ... ... <devices> <watchdog model='i6300esb' action='poweroff'/> </devices> ... Figure 23.76. Watchdog Device The following attributes are declared in this XML: model - The required model attribute specifies what real watchdog device is emulated. Valid values are specific to the underlying hypervisor. The model attribute may take the following values: i6300esb - the recommended device, emulating a PCI Intel 6300ESB ib700 - emulates an ISA iBase IB700 action - The optional action attribute describes what action to take when the watchdog expires. Valid values are specific to the underlying hypervisor. 
The action attribute can have the following values: reset - default setting, forcefully resets the guest virtual machine shutdown - gracefully shuts down the guest virtual machine (not recommended) poweroff - forcefully powers off the guest virtual machine pause - pauses the guest virtual machine none - does nothing dump - automatically dumps the guest virtual machine. Note that the 'shutdown' action requires that the guest virtual machine is responsive to ACPI signals. In the sort of situations where the watchdog has expired, guest virtual machines are usually unable to respond to ACPI signals. Therefore, using 'shutdown' is not recommended. In addition, the directory to save dump files can be configured by auto_dump_path in file /etc/libvirt/kvm.conf. 23.17.19. Setting a Panic Device Red Hat Enterprise Linux 7 hypervisor is capable of detecting Linux guest virtual machine kernel panics, using the pvpanic mechanism. When invoked, pvpanic sends a message to the libvirtd daemon, which initiates a preconfigured reaction. To enable the pvpanic device, do the following: Add or uncomment the following line in the /etc/libvirt/qemu.conf file on the host machine. Run the virsh edit command to edit domain XML file of the specified guest, and add the panic into the devices parent element. <devices> <panic> <address type='isa' iobase='0x505'/> </panic> </devices> The <address> element specifies the address of panic. The default ioport is 0x505. In most cases, specifying an address is not needed. The way in which libvirtd reacts to the crash is determined by the <on_crash> element of the domain XML. The possible actions are as follows: coredump-destroy - Captures the guest virtual machine's core dump and shuts the guest down. coredump-restart - Captures the guest virtual machine's core dump and restarts the guest. preserve - Halts the guest virtual machine to await further action. Note If the kdump service is enabled, it takes precedence over the <on_crash> setting, and the selected <on_crash> action is not performed. For more information on pvpanic , see the related Knowledgebase article . 23.17.20. Memory Balloon Device The balloon device can designate a part of a virtual machine's RAM as not being used (a process known as inflating the balloon), so that the memory can be freed for the host, or for other virtual machines on that host, to use. When the virtual machine needs the memory again, the balloon can be deflated and the host can distribute the RAM back to the virtual machine. The size of the memory balloon is determined by the difference between the <currentMemory> and <memory> settings. For example, if <memory> is set to 2 GiB and <currentMemory> to 1 GiB, the balloon contains 1 GiB. If manual configuration is necessary, the <currentMemory> value can be set by using the virsh setmem command and the <memory> value can be set by using the virsh setmaxmem command. Warning If modifying the <currentMemory> value, make sure to leave sufficient memory for the guest OS to work properly. If the set value is too low, the guest may become unstable. A virtual memory balloon device is automatically added to all KVM guest virtual machines. In the XML configuration, this is represented by the <memballoon> element. Memory ballooning is managed by the libvirt service, and will be automatically added when appropriate. Therefore, it is not necessary to explicitly add this element in the guest virtual machine XML unless a specific PCI slot needs to be assigned. 
Note that if the <memballoon> device needs to be explicitly disabled, model='none' can be used for this purpose. The following example shows a memballoon device automatically added by libvirt : ... <devices> <memballoon model='virtio'/> </devices> ... Figure 23.77. Memory balloon device The following example shows a device that has been added manually with static PCI slot 2 requested: ... <devices> <memballoon model='virtio'> <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/> </memballoon> </devices> ... Figure 23.78. Memory balloon device added manually The required model attribute specifies what type of balloon device is provided. Valid values are specific to the virtualization platform; in the KVM hypervisor, 'virtio' is the default setting.
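As mentioned above, the balloon can also be resized at run time with the virsh setmem and virsh setmaxmem commands. The following is a minimal sketch of that workflow, not part of the original example set; the guest name rhel7-guest and the memory sizes are placeholders, and sizes are given in KiB, which virsh assumes by default.

# Inspect the guest's current and maximum memory
virsh dominfo rhel7-guest

# Inflate the balloon: reduce the running guest's current memory to 1 GiB (1048576 KiB)
virsh setmem rhel7-guest 1048576 --live

# Raise the maximum memory in the persistent configuration to 4 GiB (takes effect after the guest is restarted)
virsh setmaxmem rhel7-guest 4194304 --config

As the Warning above advises, leave enough memory for the guest operating system when lowering <currentMemory>.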
[ "<devices> <emulator>/usr/libexec/qemu-kvm</emulator> </devices>", "<disk type='network'> <driver name=\"qemu\" type=\"raw\" io=\"threads\" ioeventfd=\"on\" event_idx=\"off\"/> <source protocol=\"sheepdog\" name=\"image_name\"> <host name=\"hostname\" port=\"7000\"/> </source> <target dev=\"hdb\" bus=\"ide\"/> <boot order='1'/> <transient/> <address type='drive' controller='0' bus='1' unit='0'/> </disk>", "<disk type='network'> <driver name=\"qemu\" type=\"raw\"/> <source protocol=\"rbd\" name=\"image_name2\"> <host name=\"hostname\" port=\"7000\"/> </source> <target dev=\"hdd\" bus=\"ide\"/> <auth username='myuser'> <secret type='ceph' usage='mypassid'/> </auth> </disk>", "<disk type='block' device='cdrom'> <driver name='qemu' type='raw'/> <target dev='hdc' bus='ide' tray='open'/> <readonly/> </disk> <disk type='network' device='cdrom'> <driver name='qemu' type='raw'/> <source protocol=\"http\" name=\"url_path\"> <host name=\"hostname\" port=\"80\"/> </source> <target dev='hdc' bus='ide' tray='open'/> <readonly/> </disk>", "<disk type='network' device='cdrom'> <driver name='qemu' type='raw'/> <source protocol=\"https\" name=\"url_path\"> <host name=\"hostname\" port=\"443\"/> </source> <target dev='hdc' bus='ide' tray='open'/> <readonly/> </disk> <disk type='network' device='cdrom'> <driver name='qemu' type='raw'/> <source protocol=\"ftp\" name=\"url_path\"> <host name=\"hostname\" port=\"21\"/> </source> <target dev='hdc' bus='ide' tray='open'/> <readonly/> </disk>", "<disk type='network' device='cdrom'> <driver name='qemu' type='raw'/> <source protocol=\"ftps\" name=\"url_path\"> <host name=\"hostname\" port=\"990\"/> </source> <target dev='hdc' bus='ide' tray='open'/> <readonly/> </disk> <disk type='network' device='cdrom'> <driver name='qemu' type='raw'/> <source protocol=\"tftp\" name=\"url_path\"> <host name=\"hostname\" port=\"69\"/> </source> <target dev='hdc' bus='ide' tray='open'/> <readonly/> </disk> <disk type='block' device='lun'> <driver name='qemu' type='raw'/> <source dev='/dev/sda'/> <target dev='sda' bus='scsi'/> <address type='drive' controller='0' bus='0' target='3' unit='0'/> </disk>", "<disk type='block' device='disk'> <driver name='qemu' type='raw'/> <source dev='/dev/sda'/> <geometry cyls='16383' heads='16' secs='63' trans='lba'/> <blockio logical_block_size='512' physical_block_size='4096'/> <target dev='hda' bus='ide'/> </disk> <disk type='volume' device='disk'> <driver name='qemu' type='raw'/> <source pool='blk-pool0' volume='blk-pool0-vol0'/> <target dev='hda' bus='ide'/> </disk> <disk type='network' device='disk'> <driver name='qemu' type='raw'/> <source protocol='iscsi' name='iqn.2013-07.com.example:iscsi-nopool/2'> <host name='example.com' port='3260'/> </source> <auth username='myuser'> <secret type='chap' usage='libvirtiscsi'/> </auth> <target dev='vda' bus='virtio'/> </disk>", "<disk type='network' device='lun'> <driver name='qemu' type='raw'/> <source protocol='iscsi' name='iqn.2013-07.com.example:iscsi-nopool/1'> iqn.2013-07.com.example:iscsi-pool <host name='example.com' port='3260'/> </source> <auth username='myuser'> <secret type='chap' usage='libvirtiscsi'/> </auth> <target dev='sda' bus='scsi'/> </disk> <disk type='volume' device='disk'> <driver name='qemu' type='raw'/> <source pool='iscsi-pool' volume='unit:0:0:1' mode='host'/> <auth username='myuser'> <secret type='chap' usage='libvirtiscsi'/> </auth> <target dev='vda' bus='virtio'/> </disk>", "<disk type='volume' device='disk'> <driver name='qemu' type='raw'/> <source pool='iscsi-pool' 
volume='unit:0:0:2' mode='direct'/> <auth username='myuser'> <secret type='chap' usage='libvirtiscsi'/> </auth> <target dev='vda' bus='virtio'/> </disk> <disk type='file' device='disk'> <driver name='qemu' type='raw' cache='none'/> <source file='/tmp/test.img' startupPolicy='optional'/> <target dev='sdb' bus='scsi'/> <readonly/> </disk> <disk type='file' device='disk'> <driver name='qemu' type='raw' discard='unmap'/> <source file='/var/lib/libvirt/images/discard1.img'/> <target dev='vdb' bus='virtio'/> <alias name='virtio-disk1'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x09' function='0x0'/> </disk> </devices>", "<devices> <controller type='ide' index='0'/> <controller type='virtio-serial' index='0' ports='16' vectors='4'/> <controller type='virtio-serial' index='1'> <address type='pci' domain='0x0000' bus='0x00' slot='0x0a' function='0x0'/> <controller type='scsi' index='0' model='virtio-scsi' num_queues='8'/> </controller> </devices>", "<devices> <controller type='usb' index='0' model='ich9-ehci1'> <address type='pci' domain='0' bus='0' slot='4' function='7'/> </controller> <controller type='usb' index='0' model='ich9-uhci1'> <master startport='0'/> <address type='pci' domain='0' bus='0' slot='4' function='0' multifunction='on'/> </controller> </devices>", "<devices> <lease> <lockspace>somearea</lockspace> <key>somekey</key> <target path='/some/lease/path' offset='1024'/> </lease> </devices>", "<devices> <hostdev mode='subsystem' type='usb'> <source startupPolicy='optional'> <vendor id='0x1234'/> <product id='0xbeef'/> </source> <boot order='2'/> </hostdev> </devices>", "<devices> <hostdev mode='subsystem' type='pci' managed='yes'> <source> <address bus='0x06' slot='0x02' function='0x0'/> </source> <boot order='1'/> <rom bar='on' file='/etc/fake/boot.bin'/> </hostdev> </devices>", "<devices> <hostdev mode='subsystem' type='scsi'> <source> <adapter name='scsi_host0'/> <address type='scsi' bus='0' target='0' unit='0'/> </source> <readonly/> <address type='drive' controller='0' bus='0' target='0' unit='0'/> </hostdev> </devices> ..", "<hostdev mode='capabilities' type='storage'> <source> <block>/dev/sdf1</block> </source> </hostdev>", "<hostdev mode='capabilities' type='misc'> <source> <char>/dev/input/event3</char> </source> </hostdev>", "<hostdev mode='capabilities' type='net'> <source> <interface>eth0</interface> </source> </hostdev>", "<devices> <redirdev bus='usb' type='tcp'> <source mode='connect' host='localhost' service='4000'/> <boot order='1'/> </redirdev> <redirfilter> <usbdev class='0x08' vendor='0x1234' product='0xbeef' version='2.00' allow='yes'/> <usbdev allow='no'/> </redirfilter> </devices>", "<devices> <smartcard mode='host'/> <smartcard mode='host-certificates'> <certificate>cert1</certificate> <certificate>cert2</certificate> <certificate>cert3</certificate> <database>/etc/pki/nssdb/</database> </smartcard> <smartcard mode='passthrough' type='tcp'> <source mode='bind' host='127.0.0.1' service='2001'/> <protocol type='raw'/> <address type='ccid' controller='0' slot='0'/> </smartcard> <smartcard mode='passthrough' type='spicevmc'/> </devices>", "<devices> <interface type='direct' trustGuestRxFilters='yes'> <source dev='eth0'/> <mac address='52:54:00:5d:c7:9e'/> <boot order='1'/> <rom bar='off'/> </interface> </devices>", "<devices> <interface type='network'> <source network='default'/> </interface> <interface type='network'> <source network='default' portgroup='engineering'/> <target dev='vnet7'/> <mac address=\"00:11:22:33:44:55\"/> <virtualport> <parameters 
instanceid='09b11c53-8b5c-4eeb-8f00-d84eaa0aaa4f'/> </virtualport> </interface> </devices>", "<devices> <interface type='bridge'> <source bridge='br0'/> </interface> <interface type='bridge'> <source bridge='br1'/> <target dev='vnet7'/> <mac address=\"00:11:22:33:44:55\"/> </interface> <interface type='bridge'> <source bridge='ovsbr'/> <virtualport type='openvswitch'> <parameters profileid='menial' interfaceid='09b11c53-8b5c-4eeb-8f00-d84eaa0aaa4f'/> </virtualport> </interface> </devices>", "<forward mode='nat'> <address start='1.2.3.4' end='1.2.3.10'/> </forward>", "<devices> <interface type='user'/> <interface type='user'> <mac address=\"00:11:22:33:44:55\"/> </interface> </devices>", "<devices> <interface type='ethernet'/> <interface type='ethernet'> <target dev='vnet7'/> <script path='/etc/qemu-ifup-mynet'/> </interface> </devices>", "<devices> <interface type='direct'> <source dev='eth0' mode='vepa'/> </interface> </devices>", "<devices> <interface type='direct'> <source dev='eth0.2' mode='vepa'/> <virtualport type=\"802.1Qbg\"> <parameters managerid=\"11\" typeid=\"1193047\" typeidversion=\"2\" instanceid=\"09b11c53-8b5c-4eeb-8f00-d84eaa0aaa4f\"/> </virtualport> </interface> </devices>", "<devices> <interface type='direct'> <source dev='eth0' mode='private'/> <virtualport type='802.1Qbh'> <parameters profileid='finance'/> </virtualport> </interface> </devices>", "<devices> <interface type='hostdev'> <driver name='vfio'/> <source> <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/> </source> <mac address='52:54:00:6d:90:02'> <virtualport type='802.1Qbh'> <parameters profileid='finance'/> </virtualport> </interface> </devices>", "<devices> <interface type='mcast'> <mac address='52:54:00:6d:90:01'> <source address='230.0.0.1' port='5558'/> </interface> </devices>", "<devices> <interface type='server'> <mac address='52:54:00:22:c9:42'> <source address='192.168.0.1' port='5558'/> </interface> <interface type='client'> <mac address='52:54:00:8b:c9:51'> <source address='192.168.0.1' port='5558'/> </interface> </devices>", "<devices> <interface type='network'> <source network='default'/> <target dev='vnet1'/> <model type='virtio'/> <driver name='vhost' txmode='iothread' ioeventfd='on' event_idx='off'/> </interface> </devices>", "<devices> <interface type='network'> <source network='default'/> <target dev='vnet1'/> </interface> </devices>", "<devices> <interface type='network'> <source network='default'/> <target dev='vnet1'/> <boot order='1'/> </interface> </devices>", "<devices> <interface type='network'> <source network='default'/> <target dev='vnet1'/> <rom bar='on' file='/etc/fake/boot.bin'/> </interface> </devices>", "<devices> <interface type='network'> <source network='default'/> <target dev='vnet0'/> <bandwidth> <inbound average='1000' peak='5000' floor='200' burst='1024'/> <outbound average='128' peak='256' burst='256'/> </bandwidth> </interface> <devices>", "<devices> <interface type='bridge'> <vlan> <tag id='42'/> </vlan> <source bridge='ovsbr0'/> <virtualport type='openvswitch'> <parameters interfaceid='09b11c53-8b5c-4eeb-8f00-d84eaa0aaa4f'/> </virtualport> </interface> <devices>", "<devices> <interface type='network'> <source network='default'/> <target dev='vnet0'/> <link state='down'/> </interface> <devices>", "<devices> <input type='mouse' bus='usb'/> </devices>", "<devices> <hub type='usb'/> </devices>", "<devices> <graphics type='sdl' display=':0.0'/> <graphics type='vnc' port='5904'> <listen type='address' address='1.2.3.4'/> </graphics> <graphics 
type='rdp' autoport='yes' multiUser='yes' /> <graphics type='desktop' fullscreen='yes'/> <graphics type='spice'> <listen type='network' network='rednet'/> </graphics> </devices>", "<graphics type='spice' port='-1' tlsPort='-1' autoport='yes'> <channel name='main' mode='secure'/> <channel name='record' mode='insecure'/> <image compression='auto_glz'/> <streaming mode='filter'/> <clipboard copypaste='no'/> <mouse mode='client'/> </graphics>", "<devices> <video> <model type='vga' vram='8192' heads='1'> <acceleration accel3d='yes' accel2d='yes'/> </model> </video> </devices>", "<devices> <serial type='pty'> <source path='/dev/pts/3'/> <target port='0'/> </serial> <console type='pty'> <source path='/dev/pts/4'/> <target port='0'/> </console> <channel type='unix'> <source mode='bind' path='/tmp/guestfwd'/> <target type='guestfwd' address='10.0.2.1' port='4600'/> </channel> </devices>", "<devices> <serial type='pty'> <source path='/dev/pts/3'/> <target port='0'/> </serial> </devices>", "<devices> <console type='pty'> <source path='/dev/pts/4'/> <target port='0'/> </console> <!-- KVM virtio console --> <console type='pty'> <source path='/dev/pts/5'/> <target type='virtio' port='0'/> </console> </devices> <devices> <!-- KVM s390 sclp console --> <console type='pty'> <source path='/dev/pts/1'/> <target type='sclp' port='0'/> </console> </devices>", "<devices> <channel type='unix'> <source mode='bind' path='/tmp/guestfwd'/> <target type='guestfwd' address='10.0.2.1' port='4600'/> </channel> <!-- KVM virtio channel --> <channel type='pty'> <target type='virtio' name='arbitrary.virtio.serial.port.name'/> </channel> <channel type='unix'> <source mode='bind' path='/var/lib/libvirt/kvm/f16x86_64.agent'/> <target type='virtio' name='org.kvm.guest_agent.0'/> </channel> <channel type='spicevmc'> <target type='virtio' name='com.redhat.spice.0'/> </channel> </devices>", "<devices> <console type='stdio'> <target port='1'/> </console> </devices>", "<devices> <serial type=\"file\"> <source path=\"/var/log/vm/vm-serial.log\"/> <target port=\"1\"/> </serial> </devices>", "<devices> <serial type='vc'> <target port=\"1\"/> </serial> </devices>", "<devices> <serial type='null'> <target port=\"1\"/> </serial> </devices>", "<devices> <serial type=\"pty\"> <source path=\"/dev/pts/3\"/> <target port=\"1\"/> </serial> </devices>", "<devices> <serial type=\"dev\"> <source path=\"/dev/ttyS0\"/> <target port=\"1\"/> </serial> </devices>", "<devices> <serial type=\"pipe\"> <source path=\"/tmp/mypipe\"/> <target port=\"1\"/> </serial> </devices>", "<devices> <serial type=\"tcp\"> <source mode=\"connect\" host=\"0.0.0.0\" service=\"2445\"/> <protocol type=\"raw\"/> <target port=\"1\"/> </serial> </devices>", "<devices> <serial type=\"tcp\"> <source mode=\"bind\" host=\"127.0.0.1\" service=\"2445\"/> <protocol type=\"raw\"/> <target port=\"1\"/> </serial> </devices>", "<devices> <serial type=\"tcp\"> <source mode=\"connect\" host=\"0.0.0.0\" service=\"2445\"/> <protocol type=\"telnet\"/> <target port=\"1\"/> </serial> <serial type=\"tcp\"> <source mode=\"bind\" host=\"127.0.0.1\" service=\"2445\"/> <protocol type=\"telnet\"/> <target port=\"1\"/> </serial> </devices>", "<devices> <serial type=\"udp\"> <source mode=\"bind\" host=\"0.0.0.0\" service=\"2445\"/> <source mode=\"connect\" host=\"0.0.0.0\" service=\"2445\"/> <target port=\"1\"/> </serial> </devices>", "<devices> <serial type=\"unix\"> <source mode=\"bind\" path=\"/tmp/foo\"/> <target port=\"1\"/> </serial> </devices>", "<devices> <sound model='ac97'/> </devices>", 
"<devices> <sound model='ich6'> <codec type='micro'/> <sound/> </devices>", "<devices> <watchdog model='i6300esb'/> </devices> <devices> <watchdog model='i6300esb' action='poweroff'/> </devices>", "auto_dump_path = \"/var/lib/libvirt/qemu/dump\"", "<devices> <panic> <address type='isa' iobase='0x505'/> </panic> </devices>", "<devices> <memballoon model='virtio'/> </devices>", "<devices> <memballoon model='virtio'> <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/> </memballoon> </devices>" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_deployment_and_administration_guide/sect-manipulating_the_domain_xml-devices
Chapter 5. OpenShift Virtualization release notes
Chapter 5. OpenShift Virtualization release notes 5.1. Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . 5.2. About Red Hat OpenShift Virtualization Red Hat OpenShift Virtualization enables you to bring traditional virtual machines (VMs) into OpenShift Container Platform where they run alongside containers, and are managed as native Kubernetes objects. OpenShift Virtualization is represented by the icon. You can use OpenShift Virtualization with either the OVN-Kubernetes or the OpenShiftSDN default Container Network Interface (CNI) network provider. Learn more about what you can do with OpenShift Virtualization . Learn more about OpenShift Virtualization architecture and deployments . Prepare your cluster for OpenShift Virtualization. 5.2.1. OpenShift Virtualization supported cluster version OpenShift Virtualization 4.13 is supported for use on OpenShift Container Platform 4.13 clusters. To use the latest z-stream release of OpenShift Virtualization, you must first upgrade to the latest version of OpenShift Container Platform. Important Updating to OpenShift Virtualization 4.13 from OpenShift Virtualization 4.12.2 is not supported. 5.2.2. Supported guest operating systems To view the supported guest operating systems for OpenShift Virtualization, see Certified Guest Operating Systems in Red Hat OpenStack Platform, Red Hat Virtualization, OpenShift Virtualization and Red Hat Enterprise Linux with KVM . 5.3. New and changed features OpenShift Virtualization is FIPS ready. However, OpenShift Container Platform 4.13 is based on Red Hat Enterprise Linux (RHEL) 9.2. RHEL 9.2 has not yet been submitted for FIPS validation. Red Hat expects, though cannot commit to a specific timeframe, to obtain FIPS validation for RHEL 9.0 and RHEL 9.2 modules, and later even minor releases of RHEL 9.x. Updates will be available in Compliance Activities and Government Standards . OpenShift Virtualization is certified in Microsoft's Windows Server Virtualization Validation Program (SVVP) to run Windows Server workloads. The SVVP Certification applies to: Red Hat Enterprise Linux CoreOS workers. In the Microsoft SVVP Catalog, they are named Red Hat OpenShift Container Platform 4 on RHEL CoreOS 9 . Intel and AMD CPUs. OpenShift Virtualization now adheres to the restricted Kubernetes pod security standards profile. To learn more, see the OpenShift Virtualization security policies documentation. OpenShift Virtualization is now based on Red Hat Enterprise Linux (RHEL) 9. There is a new RHEL 9 machine type for VMs: machineType: pc-q35-rhel9.2.0 . All VM templates that are included with OpenShift Virtualization now use this machine type by default. For more information, see OpenShift Virtualization on RHEL 9 . You can now obtain the VirtualMachine , ConfigMap , and Secret manifests from the export server after you export a VM or snapshot. For more information, see accessing exported VM manifests . The "Logging, events, and monitoring" documentation is now called Support . The monitoring tools documentation has been moved to Monitoring . You can view and filter aggregated OpenShift Virtualization logs in the web console by using the LokiStack . 5.3.1. 
Quick starts Quick start tours are available for several OpenShift Virtualization features. To view the tours, click the Help icon ? in the menu bar on the header of the OpenShift Virtualization console and then select Quick Starts . You can filter the available tours by entering the virtualization keyword in the Filter field. 5.3.2. Networking You can now send unfragmented jumbo frame packets between two virtual machines (VMs) that are connected on the default pod network when you use the OVN-Kubernetes CNI plugin. 5.3.3. Storage OpenShift Virtualization storage resources now migrate automatically to the beta API versions. Alpha API versions are no longer supported. 5.3.4. Web console On the VirtualMachine details page, the Scheduling , Environment , Network interfaces , Disks , and Scripts tabs are displayed on the new Configuration tab . You can now paste a string from your client's clipboard into the guest when using the VNC console. The VirtualMachine details Details tab now provides a new SSH service type SSH over LoadBalancer to expose the SSH service over a load balancer. The option to make a hot-plug volume a persistent volume is added to the Disks tab . There is now a VirtualMachine details Diagnostics tab where you can view the status conditions of VMs and the snapshot status of volumes. You can now enable headless mode for high performance VMs in the web console. 5.4. Deprecated and removed features 5.4.1. Deprecated features Deprecated features are included and supported in the current release. However, they will be removed in a future release and are not recommended for new deployments. Support for virtctl command line tool installation for Red Hat Enterprise Linux (RHEL) 7 and RHEL 9 by an RPM is deprecated and is planned to be removed in a future release. 5.4.2. Removed features Removed features are not supported in the current release. Red Hat Enterprise Linux 6 is no longer supported on OpenShift Virtualization. Support for the legacy HPP custom resource, and the associated storage class, has been removed for all new deployments. In OpenShift Virtualization 4.13, the HPP Operator uses the Kubernetes Container Storage Interface (CSI) driver to configure local storage. A legacy HPP custom resource is supported only if it had been installed on a version of OpenShift Virtualization. 5.5. Technology Preview features Some features in this release are currently in Technology Preview. These experimental features are not intended for production use. Note the following scope of support on the Red Hat Customer Portal for these features: Technology Preview Features Support Scope You can now use Prometheus to monitor the following metrics: kubevirt_vmi_cpu_system_usage_seconds returns the physical system CPU time consumed by the hypervisor. kubevirt_vmi_cpu_user_usage_seconds returns the physical user CPU time consumed by the hypervisor. kubevirt_vmi_cpu_usage_seconds returns the total CPU time used in seconds by calculating the sum of the vCPU and the hypervisor usage. You can now run a checkup to verify if your OpenShift Container Platform cluster node can run a virtual machine with a Data Plane Development Kit (DPDK) workload with zero packet loss. You can configure your virtual machine to run DPDK workloads to achieve lower latency and higher throughput for faster packet processing in the user space. You can now access a VM that is attached to a secondary network interface from outside the cluster by using its fully qualified domain name (FQDN). 
You can now create OpenShift Container Platform clusters with worker nodes that are hosted by OpenShift Virtualization VMs. For more information, see Managing hosted control plane clusters on OpenShift Virtualization in the Red Hat Advanced Cluster Management (RHACM) documentation. You can now use Microsoft Windows 11 as a guest operating system. However, OpenShift Virtualization 4.13 does not support USB disks, which are required for a critical function of BitLocker recovery. To protect recovery keys, use other methods described in the BitLocker recovery guide . 5.6. Bug fix The virtual machine snapshot restore operation no longer hangs indefinitely due to some persistent volume claim (PVC) annotations created by the Containerized Data Importer (CDI). ( BZ#2070366 ) 5.7. Known issues With the release of the RHSA-2023:3722 advisory, the TLS Extended Master Secret (EMS) extension ( RFC 7627 ) is mandatory for TLS 1.2 connections on FIPS-enabled RHEL 9 systems. This is in accordance with FIPS-140-3 requirements. TLS 1.3 is not affected. Legacy OpenSSL clients that do not support EMS or TLS 1.3 now cannot connect to FIPS servers running on RHEL 9. Similarly, RHEL 9 clients in FIPS mode cannot connect to servers that only support TLS 1.2 without EMS. This in practice means that these clients cannot connect to servers on RHEL 6, RHEL 7 and non-RHEL legacy operating systems. This is because the legacy 1.0.x versions of OpenSSL do not support EMS or TLS 1.3. For more information, see TLS Extension "Extended Master Secret" enforced with Red Hat Enterprise Linux 9.2 . As a workaround, upgrade legacy OpenSSL clients to a version that supports TLS 1.3 and configure OpenShift Virtualization to use TLS 1.3, with the Modern TLS security profile type, for FIPS mode. If you enabled the DisableMDEVConfiguration feature gate by editing the HyperConverged custom resource in OpenShift Virtualization 4.12.4, you must re-enable the feature gate after you upgrade to versions 4.13.0 or 4.13.1 by creating a JSON Patch annotation ( BZ#2184439 ): USD oc annotate --overwrite -n openshift-cnv hyperconverged kubevirt-hyperconverged \ kubevirt.kubevirt.io/jsonpatch='[{"op": "add","path": "/spec/configuration/developerConfiguration/featureGates/-", \ "value": "DisableMDEVConfiguration"}]' OpenShift Virtualization versions 4.12.2 and earlier are not compatible with OpenShift Container Platform 4.13. Updating OpenShift Container Platform to 4.13 is blocked by design in OpenShift Virtualization 4.12.1 and 4.12.2, but this restriction could not be added to OpenShift Virtualization 4.12.0. If you have OpenShift Virtualization 4.12.0, ensure that you do not update OpenShift Container Platform to 4.13. Important Your cluster becomes unsupported if you run incompatible versions of OpenShift Container Platform and OpenShift Virtualization. Enabling descheduler evictions on a virtual machine is a Technical Preview feature and might cause failed migrations and unstable scheduling. You cannot run OpenShift Virtualization on a single-stack IPv6 cluster. ( BZ#2193267 ) When you use two pods with different SELinux contexts, VMs with the ocs-storagecluster-cephfs storage class fail to migrate and the VM status changes to Paused . This is because both pods try to access the shared ReadWriteMany CephFS volume at the same time. ( BZ#2092271 ) As a workaround, use the ocs-storagecluster-ceph-rbd storage class to live migrate VMs on a cluster that uses Red Hat Ceph Storage. 
If you clone more than 100 VMs using the csi-clone cloning strategy, then the Ceph CSI might not purge the clones. Manually deleting the clones might also fail. ( BZ#2055595 ) As a workaround, you can restart the ceph-mgr to purge the VM clones. If you stop a node on a cluster and then use the Node Health Check Operator to bring the node back up, connectivity to Multus might be lost. ( OCPBUGS-8398 ) The TopoLVM provisioner name string has changed in OpenShift Virtualization 4.12. As a result, the automatic import of operating system images might fail with the following error message ( BZ#2158521 ): DataVolume.storage spec is missing accessMode and volumeMode, cannot get access mode from StorageProfile. As a workaround: Update the claimPropertySets array of the storage profile: USD oc patch storageprofile <storage_profile> --type=merge -p '{"spec": {"claimPropertySets": [{"accessModes": ["ReadWriteOnce"], "volumeMode": "Block"}, \ {"accessModes": ["ReadWriteOnce"], "volumeMode": "Filesystem"}]}}' Delete the affected data volumes in the openshift-virtualization-os-images namespace. They are recreated with the access mode and volume mode from the updated storage profile. When restoring a VM snapshot for storage whose binding mode is WaitForFirstConsumer , the restored PVCs remain in the Pending state and the restore operation does not progress. As a workaround, start the restored VM, stop it, and then start it again. The VM will be scheduled, the PVCs will be in the Bound state, and the restore operation will complete. ( BZ#2149654 ) VMs created from common templates on a Single Node OpenShift (SNO) cluster display a VMCannotBeEvicted alert because the template's default eviction strategy is LiveMigrate . You can ignore this alert or remove the alert by updating the VM's eviction strategy. ( BZ#2092412 ) Uninstalling OpenShift Virtualization does not remove the feature.node.kubevirt.io node labels created by OpenShift Virtualization. You must remove the labels manually. ( CNV-22036 ) Windows 11 virtual machines do not boot on clusters running in FIPS mode . Windows 11 requires a TPM (trusted platform module) device by default. However, the swtpm (software TPM emulator) package is incompatible with FIPS. ( BZ#2089301 ) If your OpenShift Container Platform cluster uses OVN-Kubernetes as the default Container Network Interface (CNI) provider, you cannot attach a Linux bridge or bonding device to a host's default interface because of a change in the host network topology of OVN-Kubernetes. ( BZ#1885605 ) As a workaround, you can use a secondary network interface connected to your host, or switch to the OpenShift SDN default CNI provider. In some instances, multiple virtual machines can mount the same PVC in read-write mode, which might result in data corruption. ( BZ#1992753 ) As a workaround, avoid using a single PVC in read-write mode with multiple VMs. The Pod Disruption Budget (PDB) prevents pod disruptions for migratable virtual machine images. If the PDB detects pod disruption, then openshift-monitoring sends a PodDisruptionBudgetAtLimit alert every 60 minutes for virtual machine images that use the LiveMigrate eviction strategy. ( BZ#2026733 ) As a workaround, silence alerts . OpenShift Virtualization links a service account token in use by a pod to that specific pod. OpenShift Virtualization implements a service account volume by creating a disk image that contains a token. If you migrate a VM, then the service account volume becomes invalid. 
( BZ#2037611 ) As a workaround, use user accounts rather than service accounts because user account tokens are not bound to a specific pod. In a heterogeneous cluster with different compute nodes, virtual machines that have HyperV Reenlightenment enabled cannot be scheduled on nodes that do not support timestamp-counter scaling (TSC) or have the appropriate TSC frequency. ( BZ#2151169 ) If you deploy OpenShift Virtualization with Red Hat OpenShift Data Foundation, you must create a dedicated storage class for Windows virtual machine disks. See Optimizing ODF PersistentVolumes for Windows VMs for details. VMs that use logical volume management (LVM) with block storage devices require additional configuration to avoid conflicts with Red Hat Enterprise Linux CoreOS (RHCOS) hosts. As a workaround, you can create a VM, provision an LVM, and restart the VM. This creates an empty system.lvmdevices file. ( OCPBUGS-5223 )
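One of the known issues above notes that the feature.node.kubevirt.io node labels must be removed manually after uninstalling OpenShift Virtualization. The following is a rough sketch of that cleanup, not an official procedure; the node and label names are placeholders and differ on every cluster.

# List the labels that were left behind
oc get nodes --show-labels | grep feature.node.kubevirt.io

# Remove a label by appending "-" to its key; repeat for each label on each affected node
oc label node <node_name> feature.node.kubevirt.io/<label_name>-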
[ "oc annotate --overwrite -n openshift-cnv hyperconverged kubevirt-hyperconverged kubevirt.kubevirt.io/jsonpatch='[{\"op\": \"add\",\"path\": \"/spec/configuration/developerConfiguration/featureGates/-\", \"value\": \"DisableMDEVConfiguration\"}]'", "DataVolume.storage spec is missing accessMode and volumeMode, cannot get access mode from StorageProfile.", "oc patch storageprofile <storage_profile> --type=merge -p '{\"spec\": {\"claimPropertySets\": [{\"accessModes\": [\"ReadWriteOnce\"], \"volumeMode\": \"Block\"}, {\"accessModes\": [\"ReadWriteOnce\"], \"volumeMode\": \"Filesystem\"}]}}'" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/virtualization/virt-4-13-release-notes
3.2. LVS via Direct Routing
3.2. LVS via Direct Routing As mentioned in Section 1.4.2, "Direct Routing" , direct routing allows real servers to process and route packets directly to a requesting user rather than passing outgoing packets through the LVS router. Direct routing requires that the real servers be physically connected to a network segment with the LVS router and be able to process and direct outgoing packets as well. Network Layout In a direct routing LVS setup, the LVS router needs to receive incoming requests and route them to the proper real server for processing. The real servers then need to directly route the response to the client. So, for example, if the client is on the Internet, and sends the packet through the LVS router to a real server, the real server must be able to go directly to the client via the Internet. This can be done by configuring a gateway for the real server to pass packets to the Internet. Each real server in the server pool can have its own separate gateway (and each gateway with its own connection to the Internet), allowing for maximum throughput and scalability. For typical LVS setups, however, the real servers can communicate through one gateway (and therefore one network connection). Important It is not recommended to use the LVS router as a gateway for the real servers, as that adds unneeded setup complexity as well as network load on the LVS router, which reintroduces the network bottleneck that exists in NAT routing. Hardware The hardware requirements of an LVS system using direct routing is similar to other LVS topologies. While the LVS router needs to be running Red Hat Enterprise Linux to process the incoming requests and perform load-balancing for the real servers, the real servers do not need to be Linux machines to function correctly. The LVS routers need one or two NICs each (depending on if there is a back-up router). You can use two NICs for ease of configuration and to distinctly separate traffic - incoming requests are handled by one NIC and routed packets to real servers on the other. Since the real servers bypass the LVS router and send outgoing packets directly to a client, a gateway to the Internet is required. For maximum performance and availability, each real server can be connected to its own separate gateway which has its own dedicated connection to the carrier network to which the client is connected (such as the Internet or an intranet). Software There is some configuration outside of Piranha Configuration Tool that needs to be done, especially for administrators facing ARP issues when using LVS via direct routing. Refer to Section 3.2.1, "Direct Routing and arptables_jf " or Section 3.2.2, "Direct Routing and iptables " for more information. 3.2.1. Direct Routing and arptables_jf In order to configure direct routing using arptables_jf , each real server must have their virtual IP address configured, so they can directly route packets. ARP requests for the VIP are ignored entirely by the real servers, and any ARP packets that might otherwise be sent containing the VIPs are mangled to contain the real server's IP instead of the VIPs. Using the arptables_jf method, applications may bind to each individual VIP or port that the real server is servicing. For example, the arptables_jf method allows multiple instances of Apache HTTP Server to be running bound explicitly to different VIPs on the system. There are also significant performance advantages to using arptables_jf over the iptables option. 
However, using the arptables_jf method, VIPs cannot be configured to start on boot using standard Red Hat Enterprise Linux system configuration tools. To configure each real server to ignore ARP requests for the virtual IP addresses, perform the following steps: Create the ARP table entries for each virtual IP address on each real server (the real_ip is the IP the director uses to communicate with the real server; often this is the IP bound to eth0 ): This will cause the real servers to ignore all ARP requests for the virtual IP addresses, and change any outgoing ARP responses which might otherwise contain the virtual IP so that they contain the real IP of the server instead. The only node that should respond to ARP requests for any of the VIPs is the current active LVS node. Once this has been completed on each real server, save the ARP table entries by typing the following commands on each real server: service arptables_jf save , followed by chkconfig --level 2345 arptables_jf on . The chkconfig command will cause the system to reload the arptables configuration on bootup - before the network is started. Configure the virtual IP address on all real servers using ifconfig to create an IP alias. For example: Or, using the iproute2 utility ip , for example: As previously noted, the virtual IP addresses cannot be configured to start on boot using the Red Hat system configuration tools. One way to work around this issue is to place these commands in /etc/rc.d/rc.local (a minimal example is sketched below). Configure Piranha for Direct Routing. Refer to Chapter 4, Configuring the LVS Routers with Piranha Configuration Tool for more information.
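The following is a minimal sketch of the /etc/rc.d/rc.local workaround mentioned above. It reuses the example VIP and netmask from this section; substitute your own values and repeat the lines for each additional virtual IP address.

# /etc/rc.d/rc.local excerpt on each real server: bring up the VIP alias after every boot
ifconfig eth0:1 192.168.76.24 netmask 255.255.252.0 broadcast 192.168.79.255 up
# Or, using the iproute2 utility instead:
# ip addr add 192.168.76.24 dev eth0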
[ "arptables -A IN -d <virtual_ip> -j DROP arptables -A OUT -d <virtual_ip> -j mangle --mangle-ip-s <real_ip>", "ifconfig eth0:1 192.168.76.24 netmask 255.255.252.0 broadcast 192.168.79.255 up", "ip addr add 192.168.76.24 dev eth0" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/virtual_server_administration/s1-lvs-direct-VSA
Chapter 2. Configuring a GCP project
Chapter 2. Configuring a GCP project Before you can install OpenShift Container Platform, you must configure a Google Cloud Platform (GCP) project to host it. 2.1. Creating a GCP project To install OpenShift Container Platform, you must create a project in your Google Cloud Platform (GCP) account to host the cluster. Procedure Create a project to host your OpenShift Container Platform cluster. See Creating and Managing Projects in the GCP documentation. Important Your GCP project must use the Premium Network Service Tier if you are using installer-provisioned infrastructure. The Standard Network Service Tier is not supported for clusters installed using the installation program. The installation program configures internal load balancing for the api-int.<cluster_name>.<base_domain> URL; the Premium Tier is required for internal load balancing. 2.2. Enabling API services in GCP Your Google Cloud Platform (GCP) project requires access to several API services to complete OpenShift Container Platform installation. Prerequisites You created a project to host your cluster. Procedure Enable the following required API services in the project that hosts your cluster. You may also enable optional API services which are not required for installation. See Enabling services in the GCP documentation. Table 2.1. Required API services API service Console service name Compute Engine API compute.googleapis.com Cloud Resource Manager API cloudresourcemanager.googleapis.com Google DNS API dns.googleapis.com IAM Service Account Credentials API iamcredentials.googleapis.com Identity and Access Management (IAM) API iam.googleapis.com Service Usage API serviceusage.googleapis.com Table 2.2. Optional API services API service Console service name Google Cloud APIs cloudapis.googleapis.com Service Management API servicemanagement.googleapis.com Google Cloud Storage JSON API storage-api.googleapis.com Cloud Storage storage-component.googleapis.com 2.3. Configuring DNS for GCP To install OpenShift Container Platform, the Google Cloud Platform (GCP) account you use must have a dedicated public hosted zone in the same project that you host the OpenShift Container Platform cluster. This zone must be authoritative for the domain. The DNS service provides cluster DNS resolution and name lookup for external connections to the cluster. Procedure Identify your domain, or subdomain, and registrar. You can transfer an existing domain and registrar or obtain a new one through GCP or another source. Note If you purchase a new domain, it can take time for the relevant DNS changes to propagate. For more information about purchasing domains through Google, see Google Domains . Create a public hosted zone for your domain or subdomain in your GCP project. See Creating public zones in the GCP documentation. Use an appropriate root domain, such as openshiftcorp.com , or subdomain, such as clusters.openshiftcorp.com . Extract the new authoritative name servers from the hosted zone records. See Look up your Cloud DNS name servers in the GCP documentation. You typically have four name servers. Update the registrar records for the name servers that your domain uses. For example, if you registered your domain to Google Domains, see the following topic in the Google Domains Help: How to switch to custom name servers . If you migrated your root domain to Google Cloud DNS, migrate your DNS records. See Migrating to Cloud DNS in the GCP documentation. 
If you use a subdomain, follow your company's procedures to add its delegation records to the parent domain. This process might include a request to your company's IT department or the division that controls the root domain and DNS services for your company. 2.4. GCP account limits The OpenShift Container Platform cluster uses a number of Google Cloud Platform (GCP) components, but the default Quotas do not affect your ability to install a default OpenShift Container Platform cluster. A default cluster, which contains three compute and three control plane machines, uses the following resources. Note that some resources are required only during the bootstrap process and are removed after the cluster deploys. Table 2.3. GCP resources used in a default cluster Service Component Location Total resources required Resources removed after bootstrap Service account IAM Global 6 1 Firewall rules Compute Global 11 1 Forwarding rules Compute Global 2 0 In-use global IP addresses Compute Global 4 1 Health checks Compute Global 3 0 Images Compute Global 1 0 Networks Compute Global 2 0 Static IP addresses Compute Region 4 1 Routers Compute Global 1 0 Routes Compute Global 2 0 Subnetworks Compute Global 2 0 Target pools Compute Global 3 0 CPUs Compute Region 28 4 Persistent disk SSD (GB) Compute Region 896 128 Note If any of the quotas are insufficient during installation, the installation program displays an error that states both which quota was exceeded and the region. Be sure to consider your actual cluster size, planned cluster growth, and any usage from other clusters that are associated with your account. The CPU, static IP addresses, and persistent disk SSD (storage) quotas are the ones that are most likely to be insufficient. If you plan to deploy your cluster in one of the following regions, you will exceed the maximum storage quota and are likely to exceed the CPU quota limit: asia-east2 asia-northeast2 asia-south1 australia-southeast1 europe-north1 europe-west2 europe-west3 europe-west6 northamerica-northeast1 southamerica-east1 us-west2 You can increase resource quotas from the GCP console , but you might need to file a support ticket. Be sure to plan your cluster size early so that you can allow time to resolve the support ticket before you install your OpenShift Container Platform cluster. 2.5. Creating a service account in GCP OpenShift Container Platform requires a Google Cloud Platform (GCP) service account that provides authentication and authorization to access data in the Google APIs. If you do not have an existing IAM service account that contains the required roles in your project, you must create one. Prerequisites You created a project to host your cluster. Procedure Create a service account in the project that you use to host your OpenShift Container Platform cluster. See Creating a service account in the GCP documentation. Grant the service account the appropriate permissions. You can either grant the individual permissions that follow or assign the Owner role to it. See Granting roles to a service account for specific resources . Note While making the service account an owner of the project is the easiest way to gain the required permissions, it means that service account has complete control over the project. You must determine if the risk that comes from offering that power is acceptable. You can create the service account key in JSON format, or attach the service account to a GCP virtual machine. 
See Creating service account keys and Creating and enabling service accounts for instances in the GCP documentation. Note If you use a virtual machine with an attached service account to create your cluster, you must set credentialsMode: Manual in the install-config.yaml file before installation. 2.5.1. Required GCP roles When you attach the Owner role to the service account that you create, you grant that service account all permissions, including those that are required to install OpenShift Container Platform. If your organization's security policies require a more restrictive set of permissions, you can create a service account with the following permissions. If you deploy your cluster into an existing virtual private cloud (VPC), the service account does not require certain networking permissions, which are noted in the following lists: Required roles for the installation program Compute Admin Role Administrator Security Admin Service Account Admin Service Account Key Admin Service Account User Storage Admin Required roles for creating network resources during installation DNS Administrator Required roles for using the Cloud Credential Operator in passthrough mode Compute Load Balancer Admin The following roles are applied to the service accounts that the control plane and compute machines use: Table 2.4. GCP service account roles Account Roles Control Plane roles/compute.instanceAdmin roles/compute.networkAdmin roles/compute.securityAdmin roles/storage.admin roles/iam.serviceAccountUser Compute roles/compute.viewer roles/storage.admin roles/artifactregistry.reader 2.5.2. Required GCP permissions for installer-provisioned infrastructure When you attach the Owner role to the service account that you create, you grant that service account all permissions, including those that are required to install OpenShift Container Platform. If your organization's security policies require a more restrictive set of permissions, you can create custom roles with the necessary permissions. The following permissions are required for the installer-provisioned infrastructure for creating and deleting the OpenShift Container Platform cluster. Example 2.1. Required permissions for creating network resources compute.addresses.create compute.addresses.createInternal compute.addresses.delete compute.addresses.get compute.addresses.list compute.addresses.use compute.addresses.useInternal compute.firewalls.create compute.firewalls.delete compute.firewalls.get compute.firewalls.list compute.forwardingRules.create compute.forwardingRules.get compute.forwardingRules.list compute.forwardingRules.setLabels compute.globalAddresses.create compute.globalAddresses.get compute.globalAddresses.use compute.globalForwardingRules.create compute.globalForwardingRules.get compute.networks.create compute.networks.get compute.networks.list compute.networks.updatePolicy compute.networks.use compute.routers.create compute.routers.get compute.routers.list compute.routers.update compute.routes.list compute.subnetworks.create compute.subnetworks.get compute.subnetworks.list compute.subnetworks.use compute.subnetworks.useExternalIp Example 2.2. 
Required permissions for creating load balancer resources compute.backendServices.create compute.backendServices.get compute.backendServices.list compute.backendServices.update compute.backendServices.use compute.regionBackendServices.create compute.regionBackendServices.get compute.regionBackendServices.list compute.regionBackendServices.update compute.regionBackendServices.use compute.targetPools.addInstance compute.targetPools.create compute.targetPools.get compute.targetPools.list compute.targetPools.removeInstance compute.targetPools.use compute.targetTcpProxies.create compute.targetTcpProxies.get compute.targetTcpProxies.use Example 2.3. Required permissions for creating DNS resources dns.changes.create dns.changes.get dns.managedZones.create dns.managedZones.get dns.managedZones.list dns.networks.bindPrivateDNSZone dns.resourceRecordSets.create dns.resourceRecordSets.list Example 2.4. Required permissions for creating Service Account resources iam.serviceAccountKeys.create iam.serviceAccountKeys.delete iam.serviceAccountKeys.get iam.serviceAccountKeys.list iam.serviceAccounts.actAs iam.serviceAccounts.create iam.serviceAccounts.delete iam.serviceAccounts.get iam.serviceAccounts.list resourcemanager.projects.get resourcemanager.projects.getIamPolicy resourcemanager.projects.setIamPolicy Example 2.5. Required permissions for creating compute resources compute.disks.create compute.disks.get compute.disks.list compute.disks.setLabels compute.instanceGroups.create compute.instanceGroups.delete compute.instanceGroups.get compute.instanceGroups.list compute.instanceGroups.update compute.instanceGroups.use compute.instances.create compute.instances.delete compute.instances.get compute.instances.list compute.instances.setLabels compute.instances.setMetadata compute.instances.setServiceAccount compute.instances.setTags compute.instances.use compute.machineTypes.get compute.machineTypes.list Example 2.6. Required for creating storage resources storage.buckets.create storage.buckets.delete storage.buckets.get storage.buckets.list storage.objects.create storage.objects.delete storage.objects.get storage.objects.list Example 2.7. Required permissions for creating health check resources compute.healthChecks.create compute.healthChecks.get compute.healthChecks.list compute.healthChecks.useReadOnly compute.httpHealthChecks.create compute.httpHealthChecks.get compute.httpHealthChecks.list compute.httpHealthChecks.useReadOnly compute.regionHealthChecks.create compute.regionHealthChecks.get compute.regionHealthChecks.useReadOnly Example 2.8. Required permissions to get GCP zone and region related information compute.globalOperations.get compute.regionOperations.get compute.regions.get compute.regions.list compute.zoneOperations.get compute.zones.get compute.zones.list Example 2.9. Required permissions for checking services and quotas monitoring.timeSeries.list serviceusage.quotas.get serviceusage.services.list Example 2.10. Required IAM permissions for installation iam.roles.create iam.roles.get iam.roles.update Example 2.11. Required permissions when authenticating without a service account key iam.serviceAccounts.signBlob Example 2.12. Optional Images permissions for installation compute.images.list Example 2.13. Optional permission for running gather bootstrap compute.instances.getSerialPortOutput Example 2.14. 
Required permissions for deleting network resources compute.addresses.delete compute.addresses.deleteInternal compute.addresses.list compute.addresses.setLabels compute.firewalls.delete compute.firewalls.list compute.forwardingRules.delete compute.forwardingRules.list compute.globalAddresses.delete compute.globalAddresses.list compute.globalForwardingRules.delete compute.globalForwardingRules.list compute.networks.delete compute.networks.list compute.networks.updatePolicy compute.routers.delete compute.routers.list compute.routes.list compute.subnetworks.delete compute.subnetworks.list Example 2.15. Required permissions for deleting load balancer resources compute.backendServices.delete compute.backendServices.list compute.regionBackendServices.delete compute.regionBackendServices.list compute.targetPools.delete compute.targetPools.list compute.targetTcpProxies.delete compute.targetTcpProxies.list Example 2.16. Required permissions for deleting DNS resources dns.changes.create dns.managedZones.delete dns.managedZones.get dns.managedZones.list dns.resourceRecordSets.delete dns.resourceRecordSets.list Example 2.17. Required permissions for deleting Service Account resources iam.serviceAccounts.delete iam.serviceAccounts.get iam.serviceAccounts.list resourcemanager.projects.getIamPolicy resourcemanager.projects.setIamPolicy Example 2.18. Required permissions for deleting compute resources compute.disks.delete compute.disks.list compute.instanceGroups.delete compute.instanceGroups.list compute.instances.delete compute.instances.list compute.instances.stop compute.machineTypes.list Example 2.19. Required for deleting storage resources storage.buckets.delete storage.buckets.getIamPolicy storage.buckets.list storage.objects.delete storage.objects.list Example 2.20. Required permissions for deleting health check resources compute.healthChecks.delete compute.healthChecks.list compute.httpHealthChecks.delete compute.httpHealthChecks.list compute.regionHealthChecks.delete compute.regionHealthChecks.list Example 2.21. Required Images permissions for deletion compute.images.list 2.5.3. Required GCP permissions for shared VPC installations When you are installing a cluster to a shared VPC , you must configure the service account for both the host project and the service project. If you are not installing to a shared VPC, you can skip this section. You must apply the minimum roles required for a standard installation as listed above, to the service project. Important You can use granular permissions for a Cloud Credential Operator that operates in either manual or mint credentials mode. You cannot use granular permissions in passthrough credentials mode. Ensure that the host project applies one of the following configurations to the service account: Example 2.22. Required permissions for creating firewalls in the host project projects/<host-project>/roles/dns.networks.bindPrivateDNSZone roles/compute.networkAdmin roles/compute.securityAdmin Example 2.23. Required minimal permissions projects/<host-project>/roles/dns.networks.bindPrivateDNSZone roles/compute.networkUser 2.6. 
Supported GCP regions You can deploy an OpenShift Container Platform cluster to the following Google Cloud Platform (GCP) regions: africa-south1 (Johannesburg, South Africa) asia-east1 (Changhua County, Taiwan) asia-east2 (Hong Kong) asia-northeast1 (Tokyo, Japan) asia-northeast2 (Osaka, Japan) asia-northeast3 (Seoul, South Korea) asia-south1 (Mumbai, India) asia-south2 (Delhi, India) asia-southeast1 (Jurong West, Singapore) asia-southeast2 (Jakarta, Indonesia) australia-southeast1 (Sydney, Australia) australia-southeast2 (Melbourne, Australia) europe-central2 (Warsaw, Poland) europe-north1 (Hamina, Finland) europe-southwest1 (Madrid, Spain) europe-west1 (St. Ghislain, Belgium) europe-west2 (London, England, UK) europe-west3 (Frankfurt, Germany) europe-west4 (Eemshaven, Netherlands) europe-west6 (Zurich, Switzerland) europe-west8 (Milan, Italy) europe-west9 (Paris, France) europe-west12 (Turin, Italy) me-central1 (Doha, Qatar, Middle East) me-central2 (Dammam, Saudi Arabia, Middle East) me-west1 (Tel Aviv, Israel) northamerica-northeast1 (Montreal, Quebec, Canada) northamerica-northeast2 (Toronto, Ontario, Canada) southamerica-east1 (Sao Paulo, Brazil) southamerica-west1 (Santiago, Chile) us-central1 (Council Bluffs, Iowa, USA) us-east1 (Moncks Corner, South Carolina, USA) us-east4 (Ashburn, Northern Virginia, USA) us-east5 (Columbus, Ohio) us-south1 (Dallas, Texas) us-west1 (The Dalles, Oregon, USA) us-west2 (Los Angeles, California, USA) us-west3 (Salt Lake City, Utah, USA) us-west4 (Las Vegas, Nevada, USA) Note To determine which machine type instances are available by region and zone, see the Google documentation . 2.7. steps Install an OpenShift Container Platform cluster on GCP. You can install a customized cluster or quickly install a cluster with default options.
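The project preparation described in Sections 2.2 and 2.5 can also be performed with the gcloud CLI. The following is a rough sketch rather than an official procedure; the project ID my-ocp-project , the service account name ocp-installer , and the key file name are assumptions, and granting the Owner role is only the simplest of the options discussed above.

# Enable the required API services (Table 2.1)
gcloud services enable compute.googleapis.com cloudresourcemanager.googleapis.com dns.googleapis.com iamcredentials.googleapis.com iam.googleapis.com serviceusage.googleapis.com --project my-ocp-project

# Create the service account for the installation program
gcloud iam service-accounts create ocp-installer --display-name="OpenShift installer" --project my-ocp-project

# Grant permissions; the Owner role is the simplest option, but granular roles are preferred by most security policies
gcloud projects add-iam-policy-binding my-ocp-project --member="serviceAccount:ocp-installer@my-ocp-project.iam.gserviceaccount.com" --role="roles/owner"

# Create a service account key in JSON format for the installation program
gcloud iam service-accounts keys create ocp-installer-key.json --iam-account="ocp-installer@my-ocp-project.iam.gserviceaccount.com"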
null
https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.16/html/installing_on_gcp/installing-gcp-account
Chapter 3. Red Hat build of OpenJDK features
Chapter 3. Red Hat build of OpenJDK features The latest Red Hat build of OpenJDK 11 release might include new features. Additionally, the latest release might enhance, deprecate, or remove features that originated from Red Hat build of OpenJDK 11 releases. Note For all the other changes and security fixes, see OpenJDK 11.0.22 Released . Red Hat build of OpenJDK new features and enhancements Review the following release notes to understand new features and feature enhancements that Red Hat build of OpenJDK 11.0.22 provides: New JFR event jdk.SecurityProviderService Calls to the java.security.Provider.getService(String type, String algorithm) method now trigger a new JFR event, jdk.SecurityProviderService . The jdk.SecurityProviderService event contains the following three fields: Type: The type of service Algorithm: The algorithm name Provider: The security provider The jdk.SecurityProviderService event is disabled by default. You can enable this event by using the standard JFR configuration files and options. See JDK-8254711 (JDK Bug System) . Increased default value of jdk.jar.maxSignatureFileSize system property Red Hat build of OpenJDK 11.0.20 introduced a jdk.jar.maxSignatureFileSize system property for configuring the maximum number of bytes that are allowed for the signature-related files in a Java archive (JAR) file ( JDK-8300596 ). By default, the jdk.jar.maxSignatureFileSize property was set to 8000000 bytes (8 MB), which was too small for some JAR files, such as the Mend (formerly WhiteSource) Unified Agent JAR file. Red Hat build of OpenJDK 11.0.22 increases the default value of the jdk.jar.maxSignatureFileSize property to 16000000 bytes (16 MB). See JDK-8312489 (JDK Bug System) Telia Root CA v2 certificate added In Red Hat build of OpenJDK 11.0.22, the cacerts truststore includes the Telia Root certificate authority (CA) v2 certificate: Name: Telia Root CA v2 Alias name: teliarootcav2 Distinguished name: CN=Telia Root CA v2, O=Telia Finland Oyj, C=FI See JDK-8317373 (JDK Bug System) . Let's Encrypt ISRG Root X2 CA certificate added In Red Hat build of OpenJDK 11.0.22, the cacerts truststore includes the Internet Security Research Group (ISRG) Root X2 CA certificate from Let's Encrypt: Name: Let's Encrypt Alias name: letsencryptisrgx2 Distinguished name: CN=ISRG Root X2, O=Internet Security Research Group, C=US See JDK-8317374 (JDK Bug System) . Digicert, Inc. root certificates added In Red Hat build of OpenJDK 11.0.22, the cacerts truststore includes four Digicert, Inc. root certificates: Certificate 1 Name: DigiCert, Inc. Alias name: digicertcseccrootg5 Distinguished name: CN=DigiCert CS ECC P384 Root G5, O="DigiCert, Inc.", C=US Certificate 2 Name: DigiCert, Inc. Alias name: digicertcsrsarootg5 Distinguished name: CN=DigiCert CS RSA4096 Root G5, O="DigiCert, Inc.", C=US Certificate 3 Name: DigiCert, Inc. Alias name: digicerttlseccrootg5 Distinguished name: CN=DigiCert TLS ECC P384 Root G5, O="DigiCert, Inc.", C=US Certificate 4 Name: DigiCert, Inc. Alias name: digicerttlsrsarootg5 Distinguished name: CN=DigiCert TLS RSA4096 Root G5, O="DigiCert, Inc.", C=US See JDK-8318759 (JDK Bug System) . 
eMudhra Technologies Limited root certificates added In Red Hat build of OpenJDK 11.0.22, the cacerts truststore includes three eMudhra Technologies Limited root certificates: Certificate 1 Name: eMudhra Technologies Limited Alias name: emsignrootcag1 Distinguished name: CN=emSign Root CA - G1, O=eMudhra Technologies Limited, OU=emSign PKI, C=IN Certificate 2 Name: eMudhra Technologies Limited Alias name: emsigneccrootcag3 Distinguished name: CN=emSign ECC Root CA - G3, O=eMudhra Technologies Limited, OU=emSign PKI, C=IN Certificate 3 Name: eMudhra Technologies Limited Alias name: emsignrootcag2 Distinguished name: CN=emSign Root CA - G2, O=eMudhra Technologies Limited, OU=emSign PKI, C=IN See JDK-8319187 (JDK Bug System) .
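The items above can be checked from the command line. The sketch below is illustrative only: the alias names come from the release notes, changeit is assumed to be the default cacerts password, and the 32000000 value and app.jar are placeholders for a JAR that still exceeds the 16 MB default.

# Confirm that the newly added root CAs are present in the JDK 11.0.22 cacerts truststore
keytool -list -cacerts -storepass changeit -alias teliarootcav2
keytool -list -cacerts -storepass changeit -alias letsencryptisrgx2

# Raise the signature-file limit beyond the new 16 MB default for one application run
java -Djdk.jar.maxSignatureFileSize=32000000 -jar app.jar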
null
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/11/html/release_notes_for_red_hat_build_of_openjdk_11.0.22/rn-openjdk11022-features_openjdk
Chapter 22. Network [operator.openshift.io/v1]
Chapter 22. Network [operator.openshift.io/v1] Description Network describes the cluster's desired network configuration. It is consumed by the cluster-network-operator. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 22.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object NetworkSpec is the top-level network configuration object. status object NetworkStatus is detailed operator status, which is distilled up to the Network clusteroperator object. 22.1.1. .spec Description NetworkSpec is the top-level network configuration object. Type object Property Type Description additionalNetworks array additionalNetworks is a list of extra networks to make available to pods when multiple networks are enabled. additionalNetworks[] object AdditionalNetworkDefinition configures an extra network that is available but not created by default. Instead, pods must request them by name. type must be specified, along with exactly one "Config" that matches the type. clusterNetwork array clusterNetwork is the IP address pool to use for pod IPs. Some network providers support multiple ClusterNetworks. Others only support one. This is equivalent to the cluster-cidr. clusterNetwork[] object ClusterNetworkEntry is a subnet from which to allocate PodIPs. A network of size HostPrefix (in CIDR notation) will be allocated when nodes join the cluster. If the HostPrefix field is not used by the plugin, it can be left unset. Not all network providers support multiple ClusterNetworks defaultNetwork object defaultNetwork is the "default" network that all pods will receive deployKubeProxy boolean deployKubeProxy specifies whether or not a standalone kube-proxy should be deployed by the operator. Some network providers include kube-proxy or similar functionality. If unset, the plugin will attempt to select the correct value, which is false when ovn-kubernetes is used and true otherwise. disableMultiNetwork boolean disableMultiNetwork specifies whether or not multiple pod network support should be disabled. If unset, this property defaults to 'false' and multiple network support is enabled. disableNetworkDiagnostics boolean disableNetworkDiagnostics specifies whether or not PodNetworkConnectivityCheck CRs from a test pod to every node, apiserver and LB should be disabled or not. If unset, this property defaults to 'false' and network diagnostics is enabled. Setting this to 'true' would reduce the additional load of the pods performing the checks. exportNetworkFlows object exportNetworkFlows enables and configures the export of network flow metadata from the pod network by using protocols NetFlow, SFlow or IPFIX. Currently only supported on OVN-Kubernetes plugin. 
If unset, flows will not be exported to any collector. kubeProxyConfig object kubeProxyConfig lets us configure desired proxy configuration, if deployKubeProxy is true. If not specified, sensible defaults will be chosen by OpenShift directly. logLevel string logLevel is an intent based logging for an overall component. It does not give fine grained control, but it is a simple way to manage coarse grained logging choices that operators have to interpret for their operands. Valid values are: "Normal", "Debug", "Trace", "TraceAll". Defaults to "Normal". managementState string managementState indicates whether and how the operator should manage the component migration object migration enables and configures cluster network migration, for network changes that cannot be made instantly. observedConfig `` observedConfig holds a sparse config that controller has observed from the cluster state. It exists in spec because it is an input to the level for the operator operatorLogLevel string operatorLogLevel is an intent based logging for the operator itself. It does not give fine grained control, but it is a simple way to manage coarse grained logging choices that operators have to interpret for themselves. Valid values are: "Normal", "Debug", "Trace", "TraceAll". Defaults to "Normal". serviceNetwork array (string) serviceNetwork is the ip address pool to use for Service IPs Currently, all existing network providers only support a single value here, but this is an array to allow for growth. unsupportedConfigOverrides `` unsupportedConfigOverrides overrides the final configuration that was computed by the operator. Red Hat does not support the use of this field. Misuse of this field could lead to unexpected behavior or conflict with other configuration options. Seek guidance from the Red Hat support before using this field. Use of this property blocks cluster upgrades, it must be removed before upgrading your cluster. useMultiNetworkPolicy boolean useMultiNetworkPolicy enables a controller which allows for MultiNetworkPolicy objects to be used on additional networks as created by Multus CNI. MultiNetworkPolicy are similar to NetworkPolicy objects, but NetworkPolicy objects only apply to the primary interface. With MultiNetworkPolicy, you can control the traffic that a pod can receive over the secondary interfaces. If unset, this property defaults to 'false' and MultiNetworkPolicy objects are ignored. If 'disableMultiNetwork' is 'true' then the value of this field is ignored. 22.1.2. .spec.additionalNetworks Description additionalNetworks is a list of extra networks to make available to pods when multiple networks are enabled. Type array 22.1.3. .spec.additionalNetworks[] Description AdditionalNetworkDefinition configures an extra network that is available but not created by default. Instead, pods must request them by name. type must be specified, along with exactly one "Config" that matches the type. Type object Required name Property Type Description name string name is the name of the network. This will be populated in the resulting CRD This must be unique. namespace string namespace is the namespace of the network. This will be populated in the resulting CRD If not given the network will be created in the default namespace. 
rawCNIConfig string rawCNIConfig is the raw CNI configuration json to create in the NetworkAttachmentDefinition CRD simpleMacvlanConfig object SimpleMacvlanConfig configures the macvlan interface in case of type:NetworkTypeSimpleMacvlan type string type is the type of network The supported values are NetworkTypeRaw, NetworkTypeSimpleMacvlan 22.1.4. .spec.additionalNetworks[].simpleMacvlanConfig Description SimpleMacvlanConfig configures the macvlan interface in case of type:NetworkTypeSimpleMacvlan Type object Property Type Description ipamConfig object IPAMConfig configures IPAM module will be used for IP Address Management (IPAM). master string master is the host interface to create the macvlan interface from. If not specified, it will be default route interface mode string mode is the macvlan mode: bridge, private, vepa, passthru. The default is bridge mtu integer mtu is the mtu to use for the macvlan interface. if unset, host's kernel will select the value. 22.1.5. .spec.additionalNetworks[].simpleMacvlanConfig.ipamConfig Description IPAMConfig configures IPAM module will be used for IP Address Management (IPAM). Type object Property Type Description staticIPAMConfig object StaticIPAMConfig configures the static IP address in case of type:IPAMTypeStatic type string Type is the type of IPAM module will be used for IP Address Management(IPAM). The supported values are IPAMTypeDHCP, IPAMTypeStatic 22.1.6. .spec.additionalNetworks[].simpleMacvlanConfig.ipamConfig.staticIPAMConfig Description StaticIPAMConfig configures the static IP address in case of type:IPAMTypeStatic Type object Property Type Description addresses array Addresses configures IP address for the interface addresses[] object StaticIPAMAddresses provides IP address and Gateway for static IPAM addresses dns object DNS configures DNS for the interface routes array Routes configures IP routes for the interface routes[] object StaticIPAMRoutes provides Destination/Gateway pairs for static IPAM routes 22.1.7. .spec.additionalNetworks[].simpleMacvlanConfig.ipamConfig.staticIPAMConfig.addresses Description Addresses configures IP address for the interface Type array 22.1.8. .spec.additionalNetworks[].simpleMacvlanConfig.ipamConfig.staticIPAMConfig.addresses[] Description StaticIPAMAddresses provides IP address and Gateway for static IPAM addresses Type object Property Type Description address string Address is the IP address in CIDR format gateway string Gateway is IP inside of subnet to designate as the gateway 22.1.9. .spec.additionalNetworks[].simpleMacvlanConfig.ipamConfig.staticIPAMConfig.dns Description DNS configures DNS for the interface Type object Property Type Description domain string Domain configures the domainname the local domain used for short hostname lookups nameservers array (string) Nameservers points DNS servers for IP lookup search array (string) Search configures priority ordered search domains for short hostname lookups 22.1.10. .spec.additionalNetworks[].simpleMacvlanConfig.ipamConfig.staticIPAMConfig.routes Description Routes configures IP routes for the interface Type array 22.1.11. .spec.additionalNetworks[].simpleMacvlanConfig.ipamConfig.staticIPAMConfig.routes[] Description StaticIPAMRoutes provides Destination/Gateway pairs for static IPAM routes Type object Property Type Description destination string Destination points the IP route destination gateway string Gateway is the route's -hop IP address If unset, a default gateway is assumed (as determined by the CNI plugin). 22.1.12. 
.spec.clusterNetwork Description clusterNetwork is the IP address pool to use for pod IPs. Some network providers support multiple ClusterNetworks. Others only support one. This is equivalent to the cluster-cidr. Type array 22.1.13. .spec.clusterNetwork[] Description ClusterNetworkEntry is a subnet from which to allocate PodIPs. A network of size HostPrefix (in CIDR notation) will be allocated when nodes join the cluster. If the HostPrefix field is not used by the plugin, it can be left unset. Not all network providers support multiple ClusterNetworks Type object Property Type Description cidr string hostPrefix integer 22.1.14. .spec.defaultNetwork Description defaultNetwork is the "default" network that all pods will receive Type object Property Type Description openshiftSDNConfig object openShiftSDNConfig was previously used to configure the openshift-sdn plugin. DEPRECATED: OpenShift SDN is no longer supported. ovnKubernetesConfig object ovnKubernetesConfig configures the ovn-kubernetes plugin. type string type is the type of network All NetworkTypes are supported except for NetworkTypeRaw 22.1.15. .spec.defaultNetwork.openshiftSDNConfig Description openShiftSDNConfig was previously used to configure the openshift-sdn plugin. DEPRECATED: OpenShift SDN is no longer supported. Type object Property Type Description enableUnidling boolean enableUnidling controls whether or not the service proxy will support idling and unidling of services. By default, unidling is enabled. mode string mode is one of "Multitenant", "Subnet", or "NetworkPolicy" mtu integer mtu is the mtu to use for the tunnel interface. Defaults to 1450 if unset. This must be 50 bytes smaller than the machine's uplink. useExternalOpenvswitch boolean useExternalOpenvswitch used to control whether the operator would deploy an OVS DaemonSet itself or expect someone else to start OVS. As of 4.6, OVS is always run as a system service, and this flag is ignored. vxlanPort integer vxlanPort is the port to use for all vxlan packets. The default is 4789. 22.1.16. .spec.defaultNetwork.ovnKubernetesConfig Description ovnKubernetesConfig configures the ovn-kubernetes plugin. Type object Property Type Description egressIPConfig object egressIPConfig holds the configuration for EgressIP options. gatewayConfig object gatewayConfig holds the configuration for node gateway options. genevePort integer geneve port is the UDP port to be used by geneve encapsulation. Default is 6081 hybridOverlayConfig object HybridOverlayConfig configures an additional overlay network for peers that are not using OVN. ipsecConfig object ipsecConfig enables and configures IPsec for pods on the pod network within the cluster. ipv4 object ipv4 allows users to configure IP settings for IPv4 connections. When omitted, this means no opinion and the default configuration is used. Check individual fields within ipv4 for details of default values. ipv6 object ipv6 allows users to configure IP settings for IPv6 connections. When omitted, this means no opinion and the default configuration is used. Check individual fields within ipv6 for details of default values. mtu integer mtu is the MTU to use for the tunnel interface. This must be 100 bytes smaller than the uplink mtu. Default is 1400 policyAuditConfig object policyAuditConfig is the configuration for network policy audit events. If unset, reported defaults are used.
v4InternalSubnet string v4InternalSubnet is a v4 subnet used internally by ovn-kubernetes in case the default one is being already used by something else. It must not overlap with any other subnet being used by OpenShift or by the node network. The size of the subnet must be larger than the number of nodes. The value cannot be changed after installation. Default is 100.64.0.0/16 v6InternalSubnet string v6InternalSubnet is a v6 subnet used internally by ovn-kubernetes in case the default one is being already used by something else. It must not overlap with any other subnet being used by OpenShift or by the node network. The size of the subnet must be larger than the number of nodes. The value cannot be changed after installation. Default is fd98::/48 22.1.17. .spec.defaultNetwork.ovnKubernetesConfig.egressIPConfig Description egressIPConfig holds the configuration for EgressIP options. Type object Property Type Description reachabilityTotalTimeoutSeconds integer reachabilityTotalTimeout configures the EgressIP node reachability check total timeout in seconds. If the EgressIP node cannot be reached within this timeout, the node is declared down. Setting a large value may cause the EgressIP feature to react slowly to node changes. In particular, it may react slowly for EgressIP nodes that really have a genuine problem and are unreachable. When omitted, this means the user has no opinion and the platform is left to choose a reasonable default, which is subject to change over time. The current default is 1 second. A value of 0 disables the EgressIP node's reachability check. 22.1.18. .spec.defaultNetwork.ovnKubernetesConfig.gatewayConfig Description gatewayConfig holds the configuration for node gateway options. Type object Property Type Description ipForwarding string IPForwarding controls IP forwarding for all traffic on OVN-Kubernetes managed interfaces (such as br-ex). By default this is set to Restricted, and Kubernetes related traffic is still forwarded appropriately, but other IP traffic will not be routed by the OCP node. If there is a desire to allow the host to forward traffic across OVN-Kubernetes managed interfaces, then set this field to "Global". The supported values are "Restricted" and "Global". ipv4 object ipv4 allows users to configure IP settings for IPv4 connections. When omitted, this means no opinion and the default configuration is used. Check individual members fields within ipv4 for details of default values. ipv6 object ipv6 allows users to configure IP settings for IPv6 connections. When omitted, this means no opinion and the default configuration is used. Check individual members fields within ipv6 for details of default values. routingViaHost boolean RoutingViaHost allows pod egress traffic to exit via the ovn-k8s-mp0 management port into the host before sending it out. If this is not set, traffic will always egress directly from OVN to outside without touching the host stack. Setting this to true means hardware offload will not be supported. Default is false if GatewayConfig is specified. 22.1.19. .spec.defaultNetwork.ovnKubernetesConfig.gatewayConfig.ipv4 Description ipv4 allows users to configure IP settings for IPv4 connections. When omitted, this means no opinion and the default configuration is used. Check individual members fields within ipv4 for details of default values. 
Type object Property Type Description internalMasqueradeSubnet string internalMasqueradeSubnet contains the masquerade addresses in IPV4 CIDR format used internally by ovn-kubernetes to enable host to service traffic. Each host in the cluster is configured with these addresses, as well as the shared gateway bridge interface. The values can be changed after installation. The subnet chosen should not overlap with other networks specified for OVN-Kubernetes as well as other networks used on the host. Additionally the subnet must be large enough to accommodate 6 IPs (maximum prefix length /29). When omitted, this means no opinion and the platform is left to choose a reasonable default which is subject to change over time. The current default subnet is 169.254.169.0/29 The value must be in proper IPV4 CIDR format 22.1.20. .spec.defaultNetwork.ovnKubernetesConfig.gatewayConfig.ipv6 Description ipv6 allows users to configure IP settings for IPv6 connections. When omitted, this means no opinion and the default configuration is used. Check individual members fields within ipv6 for details of default values. Type object Property Type Description internalMasqueradeSubnet string internalMasqueradeSubnet contains the masquerade addresses in IPV6 CIDR format used internally by ovn-kubernetes to enable host to service traffic. Each host in the cluster is configured with these addresses, as well as the shared gateway bridge interface. The values can be changed after installation. The subnet chosen should not overlap with other networks specified for OVN-Kubernetes as well as other networks used on the host. Additionally the subnet must be large enough to accommodate 6 IPs (maximum prefix length /125). When omitted, this means no opinion and the platform is left to choose a reasonable default which is subject to change over time. The current default subnet is fd69::/125 Note that IPV6 dual addresses are not permitted 22.1.21. .spec.defaultNetwork.ovnKubernetesConfig.hybridOverlayConfig Description HybridOverlayConfig configures an additional overlay network for peers that are not using OVN. Type object Property Type Description hybridClusterNetwork array HybridClusterNetwork defines a network space given to nodes on an additional overlay network. hybridClusterNetwork[] object ClusterNetworkEntry is a subnet from which to allocate PodIPs. A network of size HostPrefix (in CIDR notation) will be allocated when nodes join the cluster. If the HostPrefix field is not used by the plugin, it can be left unset. Not all network providers support multiple ClusterNetworks hybridOverlayVXLANPort integer HybridOverlayVXLANPort defines the VXLAN port number to be used by the additional overlay network. Default is 4789 22.1.22. .spec.defaultNetwork.ovnKubernetesConfig.hybridOverlayConfig.hybridClusterNetwork Description HybridClusterNetwork defines a network space given to nodes on an additional overlay network. Type array 22.1.23. .spec.defaultNetwork.ovnKubernetesConfig.hybridOverlayConfig.hybridClusterNetwork[] Description ClusterNetworkEntry is a subnet from which to allocate PodIPs. A network of size HostPrefix (in CIDR notation) will be allocated when nodes join the cluster. If the HostPrefix field is not used by the plugin, it can be left unset. Not all network providers support multiple ClusterNetworks Type object Property Type Description cidr string hostPrefix integer 22.1.24. 
.spec.defaultNetwork.ovnKubernetesConfig.ipsecConfig Description ipsecConfig enables and configures IPsec for pods on the pod network within the cluster. Type object Property Type Description mode string mode defines the behaviour of the ipsec configuration within the platform. Valid values are Disabled , External and Full . When 'Disabled', ipsec will not be enabled at the node level. When 'External', ipsec is enabled on the node level but requires the user to configure the secure communication parameters. This mode is for external secure communications and the configuration can be done using the k8s-nmstate operator. When 'Full', ipsec is configured on the node level and inter-pod secure communication within the cluster is configured. Note with Full , if ipsec is desired for communication with external (to the cluster) entities (such as storage arrays), this is left to the user to configure. 22.1.25. .spec.defaultNetwork.ovnKubernetesConfig.ipv4 Description ipv4 allows users to configure IP settings for IPv4 connections. When omitted, this means no opinion and the default configuration is used. Check individual fields within ipv4 for details of default values. Type object Property Type Description internalJoinSubnet string internalJoinSubnet is a v4 subnet used internally by ovn-kubernetes in case the default one is being already used by something else. It must not overlap with any other subnet being used by OpenShift or by the node network. The size of the subnet must be larger than the number of nodes. The value cannot be changed after installation. The current default value is 100.64.0.0/16 The subnet must be large enough to accommodate one IP per node in your cluster The value must be in proper IPV4 CIDR format internalTransitSwitchSubnet string internalTransitSwitchSubnet is a v4 subnet in IPV4 CIDR format used internally by OVN-Kubernetes for the distributed transit switch in the OVN Interconnect architecture that connects the cluster routers on each node together to enable east west traffic. The subnet chosen should not overlap with other networks specified for OVN-Kubernetes as well as other networks used on the host. The value cannot be changed after installation. When omitted, this means no opinion and the platform is left to choose a reasonable default which is subject to change over time. The current default subnet is 100.88.0.0/16 The subnet must be large enough to accommodate one IP per node in your cluster The value must be in proper IPV4 CIDR format 22.1.26. .spec.defaultNetwork.ovnKubernetesConfig.ipv6 Description ipv6 allows users to configure IP settings for IPv6 connections. When omitted, this means no opinion and the default configuration is used. Check individual fields within ipv6 for details of default values. Type object Property Type Description internalJoinSubnet string internalJoinSubnet is a v6 subnet used internally by ovn-kubernetes in case the default one is being already used by something else. It must not overlap with any other subnet being used by OpenShift or by the node network. The size of the subnet must be larger than the number of nodes. The value cannot be changed after installation.
The subnet must be large enough to accommodate one IP per node in your cluster The current default value is fd98::/48 The value must be in proper IPV6 CIDR format Note that IPV6 dual addresses are not permitted internalTransitSwitchSubnet string internalTransitSwitchSubnet is a v6 subnet in IPV6 CIDR format used internally by OVN-Kubernetes for the distributed transit switch in the OVN Interconnect architecture that connects the cluster routers on each node together to enable east west traffic. The subnet chosen should not overlap with other networks specified for OVN-Kubernetes as well as other networks used on the host. The value cannot be changed after installation. When omitted, this means no opinion and the platform is left to choose a reasonable default which is subject to change over time. The subnet must be large enough to accommodate one IP per node in your cluster The current default subnet is fd97::/64 The value must be in proper IPV6 CIDR format Note that IPV6 dual addresses are not permitted 22.1.27. .spec.defaultNetwork.ovnKubernetesConfig.policyAuditConfig Description policyAuditConfig is the configuration for network policy audit events. If unset, reported defaults are used. Type object Property Type Description destination string destination is the location for policy log messages. Regardless of this config, persistent logs will always be dumped to the host at /var/log/ovn/. Additionally, syslog output may be configured as follows. Valid values are: - "libc" to use the libc syslog() function of the host node's journald process - "udp:host:port" for sending syslog over UDP - "unix:file" for using the UNIX domain socket directly - "null" to discard all messages logged to syslog The default is "null" maxFileSize integer maxFileSize is the max size an ACL_audit log file is allowed to reach before rotation occurs Units are in MB and the Default is 50MB maxLogFiles integer maxLogFiles specifies the maximum number of ACL_audit log files that can be present. rateLimit integer rateLimit is the approximate maximum number of messages to generate per-second per-node. If unset the default of 20 msg/sec is used. syslogFacility string syslogFacility is the RFC5424 facility for generated messages, e.g. "kern". Default is "local0" 22.1.28. .spec.exportNetworkFlows Description exportNetworkFlows enables and configures the export of network flow metadata from the pod network by using protocols NetFlow, SFlow or IPFIX. Currently only supported on OVN-Kubernetes plugin. If unset, flows will not be exported to any collector. Type object Property Type Description ipfix object ipfix defines IPFIX configuration. netFlow object netFlow defines the NetFlow configuration. sFlow object sFlow defines the SFlow configuration. 22.1.29. .spec.exportNetworkFlows.ipfix Description ipfix defines IPFIX configuration. Type object Property Type Description collectors array (string) ipfixCollectors is a list of strings formatted as ip:port with a maximum of ten items 22.1.30. .spec.exportNetworkFlows.netFlow Description netFlow defines the NetFlow configuration. Type object Property Type Description collectors array (string) netFlow defines the NetFlow collectors that will consume the flow data exported from OVS. It is a list of strings formatted as ip:port with a maximum of ten items 22.1.31. .spec.exportNetworkFlows.sFlow Description sFlow defines the SFlow configuration.
Type object Property Type Description collectors array (string) sFlowCollectors is list of strings formatted as ip:port with a maximum of ten items 22.1.32. .spec.kubeProxyConfig Description kubeProxyConfig lets us configure desired proxy configuration, if deployKubeProxy is true. If not specified, sensible defaults will be chosen by OpenShift directly. Type object Property Type Description bindAddress string The address to "bind" on Defaults to 0.0.0.0 iptablesSyncPeriod string An internal kube-proxy parameter. In older releases of OCP, this sometimes needed to be adjusted in large clusters for performance reasons, but this is no longer necessary, and there is no reason to change this from the default value. Default: 30s proxyArguments object Any additional arguments to pass to the kubeproxy process proxyArguments{} array (string) ProxyArgumentList is a list of arguments to pass to the kubeproxy process 22.1.33. .spec.kubeProxyConfig.proxyArguments Description Any additional arguments to pass to the kubeproxy process Type object 22.1.34. .spec.migration Description migration enables and configures cluster network migration, for network changes that cannot be made instantly. Type object Property Type Description features object features was previously used to configure which network plugin features would be migrated in a network type migration. DEPRECATED: network type migration is no longer supported, and setting this to a non-empty value will result in the network operator rejecting the configuration. mode string mode indicates the mode of network type migration. DEPRECATED: network type migration is no longer supported, and setting this to a non-empty value will result in the network operator rejecting the configuration. mtu object mtu contains the MTU migration configuration. Set this to allow changing the MTU values for the default network. If unset, the operation of changing the MTU for the default network will be rejected. networkType string networkType was previously used when changing the default network type. DEPRECATED: network type migration is no longer supported, and setting this to a non-empty value will result in the network operator rejecting the configuration. 22.1.35. .spec.migration.features Description features was previously used to configure which network plugin features would be migrated in a network type migration. DEPRECATED: network type migration is no longer supported, and setting this to a non-empty value will result in the network operator rejecting the configuration. Type object Property Type Description egressFirewall boolean egressFirewall specified whether or not the Egress Firewall configuration was migrated. DEPRECATED: network type migration is no longer supported. egressIP boolean egressIP specified whether or not the Egress IP configuration was migrated. DEPRECATED: network type migration is no longer supported. multicast boolean multicast specified whether or not the multicast configuration was migrated. DEPRECATED: network type migration is no longer supported. 22.1.36. .spec.migration.mtu Description mtu contains the MTU migration configuration. Set this to allow changing the MTU values for the default network. If unset, the operation of changing the MTU for the default network will be rejected. Type object Property Type Description machine object machine contains MTU migration configuration for the machine's uplink. Needs to be migrated along with the default network MTU unless the current uplink MTU already accommodates the default network MTU. 
network object network contains information about MTU migration for the default network. Migrations are only allowed to MTU values lower than the machine's uplink MTU by the minimum appropriate offset. 22.1.37. .spec.migration.mtu.machine Description machine contains MTU migration configuration for the machine's uplink. Needs to be migrated along with the default network MTU unless the current uplink MTU already accommodates the default network MTU. Type object Property Type Description from integer from is the MTU to migrate from. to integer to is the MTU to migrate to. 22.1.38. .spec.migration.mtu.network Description network contains information about MTU migration for the default network. Migrations are only allowed to MTU values lower than the machine's uplink MTU by the minimum appropriate offset. Type object Property Type Description from integer from is the MTU to migrate from. to integer to is the MTU to migrate to. 22.1.39. .status Description NetworkStatus is detailed operator status, which is distilled up to the Network clusteroperator object. Type object Property Type Description conditions array conditions is a list of conditions and their status conditions[] object OperatorCondition is just the standard condition fields. generations array generations are used to determine when an item needs to be reconciled or has changed in a way that needs a reaction. generations[] object GenerationStatus keeps track of the generation for a given resource so that decisions about forced updates can be made. observedGeneration integer observedGeneration is the last generation change you've dealt with readyReplicas integer readyReplicas indicates how many replicas are ready and at the desired state version string version is the level this availability applies to 22.1.40. .status.conditions Description conditions is a list of conditions and their status Type array 22.1.41. .status.conditions[] Description OperatorCondition is just the standard condition fields. Type object Required type Property Type Description lastTransitionTime string message string reason string status string type string 22.1.42. .status.generations Description generations are used to determine when an item needs to be reconciled or has changed in a way that needs a reaction. Type array 22.1.43. .status.generations[] Description GenerationStatus keeps track of the generation for a given resource so that decisions about forced updates can be made. Type object Property Type Description group string group is the group of the thing you're tracking hash string hash is an optional field set for resources without generation that are content sensitive like secrets and configmaps lastGeneration integer lastGeneration is the last generation of the workload controller involved name string name is the name of the thing you're tracking namespace string namespace is where the thing you're tracking is resource string resource is the resource type of the thing you're tracking 22.2. API endpoints The following API endpoints are available: /apis/operator.openshift.io/v1/networks DELETE : delete collection of Network GET : list objects of kind Network POST : create a Network /apis/operator.openshift.io/v1/networks/{name} DELETE : delete a Network GET : read the specified Network PATCH : partially update the specified Network PUT : replace the specified Network 22.2.1. /apis/operator.openshift.io/v1/networks HTTP method DELETE Description delete collection of Network Table 22.1. 
HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind Network Table 22.2. HTTP responses HTTP code Reponse body 200 - OK NetworkList schema 401 - Unauthorized Empty HTTP method POST Description create a Network Table 22.3. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 22.4. Body parameters Parameter Type Description body Network schema Table 22.5. HTTP responses HTTP code Reponse body 200 - OK Network schema 201 - Created Network schema 202 - Accepted Network schema 401 - Unauthorized Empty 22.2.2. /apis/operator.openshift.io/v1/networks/{name} Table 22.6. Global path parameters Parameter Type Description name string name of the Network HTTP method DELETE Description delete a Network Table 22.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 22.8. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified Network Table 22.9. HTTP responses HTTP code Reponse body 200 - OK Network schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified Network Table 22.10. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. 
- Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 22.11. HTTP responses HTTP code Reponse body 200 - OK Network schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified Network Table 22.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 22.13. Body parameters Parameter Type Description body Network schema Table 22.14. HTTP responses HTTP code Reponse body 200 - OK Network schema 201 - Created Network schema 401 - Unauthorized Empty
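Because the operand is the single cluster-scoped Network object named cluster, most of the fields documented above are changed by patching that object. The commands below are a hedged sketch; the IPFIX collector address and the rateLimit value are placeholders, not defaults.

# Inspect the current network operator configuration
oc get networks.operator.openshift.io cluster -o yaml

# Example: export flow metadata to an IPFIX collector (address is a placeholder)
oc patch networks.operator.openshift.io cluster --type=merge -p '{"spec":{"exportNetworkFlows":{"ipfix":{"collectors":["192.0.2.10:2055"]}}}}'

# Example: adjust the OVN-Kubernetes network policy audit rate limit
oc patch networks.operator.openshift.io cluster --type=merge -p '{"spec":{"defaultNetwork":{"ovnKubernetesConfig":{"policyAuditConfig":{"rateLimit":40}}}}}'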
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/operator_apis/network-operator-openshift-io-v1
5.3. The IdM Command-Line Utilities
5.3. The IdM Command-Line Utilities The basic command-line script for IdM is named ipa . The ipa script is a parent script for a number of subcommands. These subcommands are then used to manage IdM. For example, the ipa user-add command adds a new user: Command-line management has certain benefits over management in UI; for example, the command-line utilities allow management tasks to be automated and performed repeatedly in a consistent way without manual intervention. Additionally, while most management operations are available both from the command line and in the web UI, some tasks can only be performed from the command line. Note This section only provides a general overview of the ipa subcommands. More information is available in the other sections dedicated to specific areas of managing IdM. For example, for information about managing user entries using the ipa subcommands, see Chapter 11, Managing User Accounts . 5.3.1. Getting Help for ipa Commands The ipa script can display help about a particular set of subcommands: a topic . To display the list of available topics, use the ipa help topics command: To display help for a particular topic, use the ipa help topic_name command. For example, to display information about the automember topic: The ipa script can also display a list of available ipa commands. To do this, use the ipa help commands command: For detailed help on the individual ipa commands, add the --help option to a command. For example: For more information about the ipa utility, see the ipa (1) man page. 5.3.2. Setting a List of Values IdM stores entry attributes in lists. For example: Any update to a list of attributes overwrites the list. For example, an attempt to add a single attribute by only specifying this attribute replaces the whole previously-defined list with the single new attribute. Therefore, when changing a list of attributes, you must specify the whole updated list. IdM supports the following methods of supplying a list of attributes: Using the same command-line argument multiple times within the same command invocation. For example: Enclosing the list in curly braces, which allows the shell to do the expansion. For example: 5.3.3. Using Special Characters When passing command-line arguments in ipa commands that include special characters, such as angle brackets (< and >), ampersand (&), asterisk (*), or vertical bar (|), you must escape these characters by using a backslash (\). For example, to escape an asterisk (*): Commands containing unescaped special characters do not work as expected because the shell cannot properly parse such characters. 5.3.4. Searching IdM Entries Listing IdM Entries Use the ipa *-find commands to search for a particular type of IdM entries. For example: To list all users: To list user groups whose specified attributes contain keyword : To configure the attributes IdM searches for users and user groups, see Section 13.5, "Setting Search Attributes for Users and User Groups" . When searching user groups, you can also limit the search results to groups that contain a particular user: You can also search for groups that do not contain a particular user: Showing Details for a Particular Entry Use the ipa *-show command to display details about a particular IdM entry. For example: 5.3.4.1. Adjusting the Search Size and Time Limit Some search results, such as viewing lists of users, can return a very large number of entries. 
By tuning these search operations, you can improve overall server performance when running the ipa *-find commands, such as ipa user-find , and when displaying corresponding lists in the web UI. The search size limit: Defines the maximum number of entries returned for a request sent to the server from a client, the IdM command-line tools, or the IdM web UI. Default value: 100 entries. The search time limit: Defines the maximum time that the server waits for searches to run. Once the search reaches this limit, the server stops the search and returns the entries that discovered in that time. Default value: 2 seconds. If you set the values to -1 , IdM will not apply any limits when searching. Important Setting search size or time limits too high can negatively affect server performance. Web UI: Adjusting the Search Size and Time Limit To adjust the limits globally for all queries: Select IPA Server Configuration . Set the required values in the Search Options area. Click Save at the top of the page. Command Line: Adjusting the Search Size and Time Limit To adjust the limits globally for all queries, use the ipa config-mod command and add the --searchrecordslimit and --searchtimelimit options. For example: From the command line, you can also adjust the limits only for a specific query. To do this, add the --sizelimit or --timelimit options to the command. For example: Important Note that adjusting the size or time limits using the ipa config-mod command with the --searchrecordslimit or the --searchtimelimit options affects the number of entries returned by ipa commands, such as ipa user-find . In addition to these limits, the settings configured at the Directory Server level are also taken into account and may impose stricter limits. For more information on Directory Server limits, see the Red Hat Directory Server Administration Guide .
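Before changing the limits, it can help to read back what the server currently enforces; the grep pattern below is only a convenience and assumes the default English output labels of ipa config-show.

# Show the current global search size and time limits
ipa config-show | grep -i 'search.*limit'

# Raise both limits globally, then confirm the change
ipa config-mod --searchrecordslimit=500 --searchtimelimit=5
ipa config-show | grep -i 'search.*limit'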
[ "ipa user-add user_name", "ipa help topics automember Auto Membership Rule. automount Automount caacl Manage CA ACL rules.", "ipa help automember Auto Membership Rule. Bring clarity to the membership of hosts and users by configuring inclusive or exclusive regex patterns, you can automatically assign a new entries into a group or hostgroup based upon attribute information. EXAMPLES: Add the initial group or hostgroup: ipa hostgroup-add --desc=\"Web Servers\" webservers ipa group-add --desc=\"Developers\" devel", "ipa help commands automember-add Add an automember rule. automember-add-condition Add conditions to an automember rule.", "ipa automember-add --help Usage: ipa [global-options] automember-add AUTOMEMBER-RULE [options] Add an automember rule. Options: -h, --help show this help message and exit --desc=STR A description of this auto member rule", "ipaUserSearchFields: uid,givenname,sn,telephonenumber,ou,title", "ipa permission-add --permissions=read --permissions=write --permissions=delete", "ipa permission-add --permissions={read,write,delete}", "ipa certprofile-show certificate_profile --out= exported\\*profile.cfg", "ipa user-find --------------- 4 users matched ---------------", "ipa group-find keyword ---------------- 2 groups matched ----------------", "ipa group-find --user= user_name", "ipa group-find --no-user= user_name", "ipa host-show server.example.com Host name: server.example.com Principal name: host/[email protected]", "ipa config-mod --searchrecordslimit=500 --searchtimelimit=5", "ipa user-find --sizelimit=200 --timelimit=120" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/linux_domain_identity_authentication_and_policy_guide/managing-idm-cli
Chapter 4. Knative CLI for use with OpenShift Serverless
Chapter 4. Knative CLI for use with OpenShift Serverless The Knative ( kn ) CLI enables simple interaction with Knative components on OpenShift Container Platform. 4.1. Key features The Knative ( kn ) CLI is designed to make serverless computing tasks simple and concise. Key features of the Knative CLI include: Deploy serverless applications from the command line. Manage features of Knative Serving, such as services, revisions, and traffic-splitting. Create and manage Knative Eventing components, such as event sources and triggers. Create sink bindings to connect existing Kubernetes applications and Knative services. Extend the Knative CLI with flexible plugin architecture, similar to the kubectl CLI. Configure autoscaling parameters for Knative services. Scripted usage, such as waiting for the results of an operation, or deploying custom rollout and rollback strategies. 4.2. Installing the Knative CLI See Installing the Knative CLI .
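The features listed above map onto short kn invocations. The sketch below is illustrative only: the service name, container image, revision name, and event type are placeholders rather than values taken from the OpenShift Serverless documentation.

# Deploy a serverless application from the command line
kn service create hello --image gcr.io/knative-samples/helloworld-go

# Split traffic between the latest revision and an earlier one (revision name is an example)
kn service update hello --traffic @latest=90 --traffic hello-00001=10

# Create a trigger that routes matching events to the service
kn trigger create hello-trigger --broker default --filter type=dev.example.event --sink ksvc:hello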
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/cli_tools/kn-cli-tools
Chapter 2. Accessing the Fuse Console
Chapter 2. Accessing the Fuse Console Follow these steps to access the Fuse Console for Red Hat JBoss Enterprise Application Platform. Prerequisite You must install Fuse on the JBoss EAP container. For step-by-step instructions, see Installing on JBoss EAP . Procedure To access the Fuse Console for a standalone JBoss EAP distribution: Start Red Hat Fuse standalone with the following command: On Linux/Mac OS: ./bin/standalone.sh On Windows: ./bin/standalone.bat In a web browser, enter the URL to connect to the Fuse Console. For example: http://localhost:8080/hawtio In the login page, enter your user name and password and then click Log In . By default, the Fuse Console shows the Home page. The left navigation tabs indicate the running plugins. Note If the main Fuse Console page takes a long time to display in the browser, you might need to reduce the number and the size of the log files. You can use the periodic-size-rotating-file-handler to rotate the file when it reaches a maximum size (rotate-size) and maintains a number of files (max-backup-index). For details on how to use this handler, see the Red Hat JBoss Enterprise Application Platform product documentation.
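The log handler mentioned in the note can be added with the JBoss EAP management CLI. This is a sketch under assumed values: the handler name, log file name, 10m rotate size, and backup count are examples, not recommendations from this guide.

# Connect to the running standalone server with the management CLI
./bin/jboss-cli.sh --connect

# Inside the CLI: add a periodic-size-rotating-file-handler (names and sizes are example values)
/subsystem=logging/periodic-size-rotating-file-handler=FUSE_LOG:add(file={relative-to=jboss.server.log.dir,path=fuse.log},suffix=".yyyy-MM-dd",rotate-size=10m,max-backup-index=5)

# Inside the CLI: attach the handler to the root logger so it takes effect
/subsystem=logging/root-logger=ROOT:add-handler(name=FUSE_LOG)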
null
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/managing_fuse_on_jboss_eap_standalone/fuse-console-access-eap
15.5.6. File Transfer Options
15.5.6. File Transfer Options The following lists directives which affect file transfers. download_enable - When enabled, file downloads are permitted. The default value is YES . chown_uploads - When enabled, all files uploaded by anonymous users are owned by the user specified in the chown_username directive. The default value is NO . chown_username - Specifies the ownership of anonymously uploaded files if the chown_uploads directive is enabled. The default value is root . write_enable - When enabled, FTP commands which can change the file system are allowed, such as DELE , RNFR , and STOR . The default value is YES .
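Put together in /etc/vsftpd/vsftpd.conf, the directives look like the excerpt below. The values restate the defaults described above, except for the upload ownership pair, which shows handing anonymous uploads to a hypothetical local account instead of root.

# /etc/vsftpd/vsftpd.conf (excerpt)
# Permit downloads and write commands such as DELE, RNFR, and STOR (both default to YES)
download_enable=YES
write_enable=YES
# Give anonymously uploaded files to a local account; chown_uploads defaults to NO
# and 'ftpowner' is a hypothetical user (the default chown_username is root)
chown_uploads=YES
chown_username=ftpowner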
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s2-ftp-vsftpd-conf-opt-file
7.7. Remote Event Listeners (Hot Rod)
7.7. Remote Event Listeners (Hot Rod) Event listeners allow Red Hat JBoss Data Grid Hot Rod servers to notify remote clients of events such as CacheEntryCreated , CacheEntryModified , CacheEntryExpired and CacheEntryRemoved . Clients can choose whether or not to listen to these events to avoid flooding connected clients. This assumes that clients maintain persistent connections to the servers. Client listeners for remote events can be added similarly to clustered listeners in library mode. The following example demonstrates a remote client listener that prints out each event it receives. Example 7.7. Event Print Listener ClientCacheEntryCreatedEvent and ClientCacheEntryModifiedEvent instances provide information on the key and version of the entry. This version can be used to invoke conditional operations on the server, such as replaceWithVersion or removeWithVersion . ClientCacheEntryExpiredEvent events are sent when either a get() is called on an expired entry, or when the expiration reaper detects that an entry has expired. Once the entry has expired the cache will nullify the entry, and adjust its size appropriately; however, the event will only be generated in the two scenarios listed. ClientCacheEntryRemovedEvent events are only sent when the remove operation succeeds. If a remove operation is invoked and no entry is found or there are no entries to remove, no event is generated. If users require remove events regardless of whether or not they are successful, customized event logic can be created. All client cache entry created, modified, and removed events provide a boolean isCommandRetried() method that will return true if the write command that caused it has to be retried due to a topology change. This indicates that the event has been duplicated or that another event was dropped and replaced, such as where a Modified event replaced a Created event. Important If the expected workload favors writes over reads, it will be necessary to filter the events sent to prevent a large amount of excessive traffic being generated which may cause issues on either the client or the network. For more details on filtering events refer to Section 7.7.3, "Filtering Remote Events" . Important Remote event listeners are available for the Hot Rod Java client only. 7.7.1. Adding and Removing Event Listeners Registering an Event Listener with the Server The following example registers the Event Print Listener with the server. See Example 7.7, "Event Print Listener" . Example 7.8. Adding an Event Listener Removing a Client Event Listener A client event listener can be removed as follows Example 7.9. Removing an Event Listener 7.7.2. Remote Event Client Listener Example The following procedure demonstrates the steps required to configure a remote client listener to interact with the remote cache via Hot Rod. Procedure 7.2. Configuring Remote Event Listeners Download the Red Hat JBoss Data Grid Server distribution from the Red Hat Customer Portal The latest Red Hat JBoss Data Grid distribution includes the Hot Rod server with which the client will communicate. Start the server Start the JBoss Data Grid server by using the following command from the root of the server. Write an application to interact with the Hot Rod server Maven users Create an application with the following dependency, changing the version to 6.3.0-Final-redhat-1 or better. Non-Maven users, adjust according to your chosen build tool or download the distribution containing all JBoss Data Grid jars.
Write the client application The following demonstrates a simple remote event listener that logs all events received. Use the remote event listener to execute operations against the remote cache The following example demonstrates a simple main java class, which adds the remote event listener and executes some operations against the remote cache. Result Once executed, the console output should appear similar to the following: The output indicates that by default, events come with the key and the internal data version associated with current value. The actual value is not sent back to the client for performance reasons. Receiving remote events has a performance impact, which is increased with cache size, as more operations are executed. To avoid inundating Hot Rod clients, filter remote events on the server side, or customize the event contents. Report a bug 7.7.3. Filtering Remote Events To prevent clients being inundated with events, Red Hat JBoss Data Grid Hot Rod remote events can be filtered by providing key/value filter factories that create instances that filter which events are sent to clients, and how these filters can act on client provided information. Sending events to remote clients has a performance cost, which increases with the number of clients with registered remote listeners. The performance impact also increases with the number of modifications that are executed against the cache. The performance cost can be reduced by filtering the events being sent on the server side. Custom code can be used to exclude certain events from being broadcast to the remote clients to improve performance. Filtering can be based on either key or value information, or based on cache entry metadata. To enable filtering, a cache event filter factory that produces filter instances must be created. The following is a sample implementation that filters key "2" out of the events sent to clients. Example 7.10. KeyValueFilter In order to register a listener with this key value filter factory, the factory must be given a unique name, and the Hot Rod server must be plugged with the name and the cache event filter factory instance. Report a bug 7.7.3.1. Custom Filters for Remote Events Custom filters can improve performance by excluding certain event information from being broadcast to the remote clients. To plug the JBoss Data Grid Server with a custom filter use the following procedure: Procedure 7.3. Using a Custom Filter Create a JAR file with the filter implementation within it. Each factory must have a name assigned to it via the org.infinispan.filter.NamedFactory annotation. The example uses a KeyValueFilterFactory . Create a META-INF/services/org.infinispan.notifications.cachelistener.filter. CacheEventFilterFactory file within the JAR file, and within it write the fully qualified class name of the filter class implementation. Deploy the JAR file in the JBoss Data Grid Server by performing any of the following options: Procedure 7.4. Option 1: Deploy the JAR through the deployment scanner. Copy the JAR to the USDJDG_HOME/standalone/deployments/ directory. The deployment scanner actively monitors this directory and will deploy the newly placed file. Procedure 7.5. Option 2: Deploy the JAR through the CLI Connect to the desired instance with the CLI: Once connected execute the deploy command: Procedure 7.6. 
Option 3: Deploy the JAR as a custom module Connect to the JDG server by running the below command: The jar containing the Custom Filter must be defined as a module for the Server; to add this substitute the desired name of the module and the .jar name in the below command, adding additional dependencies as necessary for the Custom Filter: In a different window add the newly added module as a dependency to the org.infinispan module by editing USDJDG_HOME/modules/system/layers/base/org/infinispan/main/module.xml . In this file add the following entry: Restart the JDG server. Once the server is plugged with the filter, add a remote client listener that will use the filter. The following example extends the EventLogListener implementation provided in Remote Event Client Listener Example (See Section 7.7.2, "Remote Event Client Listener Example" ), and overrides the @ClientListener annotation to indicate the filter factory to use with the listener. Example 7.11. Add Filter Factory to the Listener The listener can now be added via the RemoteCacheAPI. The following example demonstrates this, and executes some operations against the remote cache. Example 7.12. Register the Listener with the Server The system output shows that the client receives events for all keys except those that have been filtered. Result The following demonstrates the resulting system output from the provided example. Important Filter instances must be marshallable when they are deployed in a cluster in order for filtering to occur where the event is generated, even if the event is generated in a different node to where the listener is registered. To make them marshallable, either make them extend Serializable, Externalizable, or provide a custom Externalizer. Report a bug 7.7.3.2. Enhanced Filter Factories When adding client listeners, users can provide parameters to the filter factory in order to generate different filter instances with different behaviors from a single filter factory based on client-side information. The following configuration demonstrates how to enhance the filter factory so that it can filter dynamically based on the key provided when adding the listener, rather than filtering on a statically given key. Example 7.13. Configuring an Enhanced Filter Factory The filter can now filter by "3" instead of "2": Example 7.14. Running an Enhanced Filter Factory Result The provided example results in the following output: The amount of information sent to clients can be further reduced or increased by customizing remote events. Report a bug 7.7.4. Customizing Remote Events In Red Hat JBoss Data Grid, Hot Rod remote events can be customized to contain the information required to be sent to a client. By default, events contain only a basic set of information, such as a key and type of event, in order to avoid overloading the client, and to reduce the cost of sending them. The information included in these events can be customized to contain more information, such as values, or contain even less information. Customization is done via CacheEventConverter instances, which are created by implementing a CacheEventConverterFactory class. Each factory must have a name associated to it via the @NamedFactory annotation. To plug the JBoss Data Grid Server with an event converter use the following procedure: Procedure 7.7. Using a Converter Create a JAR file with the converter implementation within it. Each factory must have a name assigned to it via the org.infinispan.filter.NamedFactory annotation. 
Create a META-INF/services/org.infinispan.notifications.cachelistener.filter.CacheEventConverterFactory file within the JAR file and within it, write the fully qualified class name of the converter class implementation. Deploy the JAR file in the JBoss Data Grid Server by performing any of the following options: Procedure 7.8. Option 1: Deploy the JAR through the deployment scanner. Copy the JAR to the USDJDG_HOME/standalone/deployments/ directory. The deployment scanner actively monitors this directory and will deploy the newly placed file. Procedure 7.9. Option 2: Deploy the JAR through the CLI Connect to the desired instance with the CLI: Once connected execute the deploy command: Procedure 7.10. Option 3: Deploy the JAR as a custom module Connect to the JDG server by running the below command: The jar containing the Custom Converter must be defined as a module for the Server; to add this substitute the desired name of the module and the .jar name in the below command, adding additional dependencies as necessary for the Custom Converter: In a different window add the newly added module as a dependency to the org.infinispan module by editing USDJDG_HOME/modules/system/layers/base/org/infinispan/main/module.xml . In this file add the following entry: Restart the JDG server. Converters can also act on client provided information, allowing converter instances to customize events based on the information provided when the listener was added. The API allows converter parameters to be passed in when the listener is added. Report a bug 7.7.4.1. Adding a Converter When a listener is added, the name of a converter factory can be provided to use with the listener. When the listener is added, the server looks up the factory and invokes the getConverter method to get a org.infinispan.filter.Converter class instance to customize events server side. The following example demonstrates sending custom events containing value information to remote clients for a cache of Integers and Strings. The converter generates a new custom event, which includes the value as well as the key in the event. The custom event has a bigger event payload compared with default events, however if combined with filtering, it can reduce bandwidth cost. Example 7.15. Sending Custom Events Report a bug 7.7.4.2. Lightweight Events Other converter implementations are able to send back events that contain no key or event type information, resulting in extremely lightweight events at the expense of having rich information provided by the event. In order to plug the server with this converter, deploy the converter factory and associated converter class within a JAR file including a service definition inside the META-INF/services/org.infinispan.notifications.cachelistener.filter.CacheEventConverterFactory file as follows: The client listener must then be linked with the converter factory by adding the factory name to the @ClientListener annotation. Report a bug 7.7.4.3. Dynamic Converter Instances Dynamic converter instances convert based on parameters provided when the listener is registered. Converters use the parameters received by the converter factories to enable this option. For example: Example 7.16. Dynamic Converter The dynamic parameters required to do the conversion are provided when the listener is registered: Report a bug 7.7.4.4. Adding a Remote Client Listener for Custom Events Implementing a listener for custom events is slightly different to other remote events, as they involve non-default events. 
The same annotations are used as in other remote client listener implementations, but the callbacks receive instances of ClientCacheEntryCustomEvent<T> , where T is the type of custom event we are sending from the server. For example: Example 7.17. Custom Event Listener Implementation To use the remote event listener to execute operations against the remote cache, write a simple main Java class, which adds the remote event listener and executes some operations against the remote cache. For example: Example 7.18. Execute Operations against the Remote Cache Result Once executed, the console output should appear similar to the following: Important Converter instances must be marshallable when they are deployed in a cluster in order for conversion to occur where the event is generated, even if the event is generated in a different node to where the listener is registered. To make them marshallable, either make them extend Serializable, Externalizable, or provide a custom Externalizer for them. Both client and server need to be aware of any custom event type and be able to marshall it in order to facilitate both server and client writing against type safe APIs. On the client side, this is done by an optional marshaller configurable via the RemoteCacheManager. On the server side, this is done by a marshaller added to the Hot Rod server configuration. Report a bug 7.7.5. Event Marshalling When filtering or customizing events, the KeyValueFilter and Converter instances must be marshallable. As the client listener is installed in a cluster, the filter and/or converter instances are sent to other nodes in the cluster in order for filtering and conversion to occur where the event originates, improving efficiency. These classes can be made marshallable by having them extend Serializable or by providing and registering a custom Externalizer. To deploy a Marshaller instance server-side, use a similar method to that used for filtering and customized events. Procedure 7.11. Deploying a Marshaller Create a JAR file with the converter implementation within it. Each factory must have a name assigned to it via the org.infinispan.filter.NamedFactory annotation. Create a META-INF/services/org.infinispan.commons.marshall.Marshaller file within the JAR file and within it, write the fully qualified class name of the marshaller class implementation Deploy the JAR file in the JBoss Data Grid Server by performing any of the following options: Procedure 7.12. Option 1: Deploy the JAR through the deployment scanner. Copy the JAR to the USDJDG_HOME/standalone/deployments/ directory. The deployment scanner actively monitors this directory and will deploy the newly placed file. Procedure 7.13. Option 2: Deploy the JAR through the CLI Connect to the desired instance with the CLI: Once connected execute the deploy command: Procedure 7.14. Option 3: Deploy the JAR as a custom module Connect to the JDG server by running the below command: The jar containing the Custom Marshaller must be defined as a module for the Server; to add this substitute the desired name of the module and the .jar name in the below command, adding additional dependencies as necessary for the Custom Marshaller: In a different window add the newly added module as a dependency to the org.infinispan module by editing USDJDG_HOME/modules/system/layers/base/org/infinispan/main/module.xml . In this file add the following entry: Restart the JDG server. 
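Regardless of how the marshaller JAR is packaged and deployed, the filter, converter, and custom event classes themselves must remain marshallable. The following sketch is not taken from the product documentation; it shows the hypothetical ValueAddedEvent class from the earlier converter example rewritten to implement java.io.Externalizable instead of Serializable, with the field names assumed from that example:

import java.io.Externalizable;
import java.io.IOException;
import java.io.ObjectInput;
import java.io.ObjectOutput;

// Hypothetical event class from the converter example, made marshallable with
// java.io.Externalizable; fields are not final because readExternal() must be
// able to populate them after the no-argument constructor runs.
public class ValueAddedEvent implements Externalizable {

    private Integer key;
    private String value;

    // Externalizable requires a public no-argument constructor.
    public ValueAddedEvent() {
    }

    public ValueAddedEvent(Integer key, String value) {
        this.key = key;
        this.value = value;
    }

    public Integer getKey() {
        return key;
    }

    public String getValue() {
        return value;
    }

    @Override
    public void writeExternal(ObjectOutput out) throws IOException {
        out.writeObject(key);
        out.writeObject(value);
    }

    @Override
    public void readExternal(ObjectInput in) throws IOException, ClassNotFoundException {
        key = (Integer) in.readObject();
        value = (String) in.readObject();
    }
}

The same approach applies to CacheEventFilter and CacheEventConverter implementations; alternatively, a custom Externalizer can be registered when tighter control over the wire format is needed.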
The Marshaller can be deployed either in a separate jar, or in the same jar as the CacheEventConverter, and/or CacheEventFilter instances. Note Only the deployment of a single Marshaller instance is supported. If multiple marshaller instances are deployed, warning messages will be displayed as a reminder indicating which marshaller instance will be used. Report a bug 7.7.6. Remote Event Clustering and Failover When a client adds a remote listener, it is installed in a single node in the cluster, which is in charge of sending events back to the client for all affected operations that occur cluster-wide. In a clustered environment, when the node containing the listener goes down, the Hot Rod client implementation transparently fails over the client listener registration to a different node. This may result in a gap in event consumption, which can be solved using one of the following solutions. State Delivery The @ClientListener annotation has an optional includeCurrentState parameter, which when enabled, has the server send CacheEntryCreatedEvent event instances for all existing cache entries to the client. As this behavior is driven by the client it detects when the node where the listener is registered goes offline and automatically registers the listener on another node in the cluster. By enabling includeCurrentState clients may recompute their state or computation in the event the Hot Rod client transparently fails over registered listeners. The performance of the includeCurrentState parameter is impacted by the cache size, and therefore it is disabled by default. @ClientCacheFailover Rather than relying on receiving state, users can define a method with the @ClientCacheFailover annotation, which receives ClientCacheFailoverEvent parameter inside the client listener implementation. If the node where a Hot Rod client has registered a client listener fails, the Hot Rod client detects it transparently, and fails over all listeners registered in the node that failed to another node. During this failover, the client may miss some events. To avoid this, the includeCurrentState parameter can be set to true. With this enabled a client is able to clear its data, receive all of the CacheEntryCreatedEvent instances, and cache these events with all keys. Alternatively, Hot Rod clients can be made aware of failover events by adding a callback handler. This callback method is an efficient solution to handling cluster topology changes affecting client listeners, and allows the client listener to determine how to behave on a failover. Near Caching takes this approach and clears the near cache upon receiving a ClientCacheFailoverEvent . Example 7.19. @ClientCacheFailover Note The ClientCacheFailoverEvent is only thrown when the node that has the client listener installed fails. Report a bug
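Putting the state delivery and failover callbacks together, a client listener can maintain a simple local view of the keys it has seen and reset that view whenever its registration fails over to another node. The following sketch is not taken from the product documentation; it tracks keys generically, and it enables includeCurrentState so that ClientCacheEntryCreatedEvent instances for existing entries are replayed after registration and after failover:

import java.util.Collections;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

import org.infinispan.client.hotrod.annotation.ClientCacheEntryCreated;
import org.infinispan.client.hotrod.annotation.ClientCacheEntryModified;
import org.infinispan.client.hotrod.annotation.ClientCacheEntryRemoved;
import org.infinispan.client.hotrod.annotation.ClientCacheFailover;
import org.infinispan.client.hotrod.annotation.ClientListener;
import org.infinispan.client.hotrod.event.ClientCacheEntryCreatedEvent;
import org.infinispan.client.hotrod.event.ClientCacheEntryModifiedEvent;
import org.infinispan.client.hotrod.event.ClientCacheEntryRemovedEvent;
import org.infinispan.client.hotrod.event.ClientCacheFailoverEvent;

// includeCurrentState asks the server to replay existing entries as created
// events when the listener is registered and after a transparent failover.
@ClientListener(includeCurrentState = true)
public class KeyTrackingListener {

    private final Set<Object> knownKeys =
            Collections.newSetFromMap(new ConcurrentHashMap<Object, Boolean>());

    @ClientCacheEntryCreated
    public void created(ClientCacheEntryCreatedEvent event) {
        knownKeys.add(event.getKey());
    }

    @ClientCacheEntryModified
    public void modified(ClientCacheEntryModifiedEvent event) {
        knownKeys.add(event.getKey());
    }

    @ClientCacheEntryRemoved
    public void removed(ClientCacheEntryRemovedEvent event) {
        knownKeys.remove(event.getKey());
    }

    @ClientCacheFailover
    public void failover(ClientCacheFailoverEvent event) {
        // The registration moved to another node; discard possibly stale state
        // and let the replayed created events rebuild it.
        knownKeys.clear();
    }

    public Set<Object> keys() {
        return knownKeys;
    }
}

The instance is registered with cache.addClientListener(new KeyTrackingListener()) as in the earlier examples; no extra handling is needed for topology changes because the Hot Rod client re-registers the listener transparently.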
[ "import org.infinispan.client.hotrod.annotation.*; import org.infinispan.client.hotrod.event.*; @ClientListener public class EventLogListener { @ClientCacheEntryCreated public void handleCreatedEvent(ClientCacheEntryCreatedEvent e) { System.out.println(e); } @ClientCacheEntryModified public void handleModifiedEvent(ClientCacheEntryModifiedEvent e) { System.out.println(e); } @ClientCacheEntryExpired public void handleExpiredEvent(ClientCacheEntryExpiredEvent e) { System.out.println(e); } @ClientCacheEntryRemoved public void handleRemovedEvent(ClientCacheEntryRemovedEvent e) { System.out.println(e); } }", "RemoteCache<Integer, String> cache = rcm.getCache(); cache.addClientListener(new EventLogListener());", "EventLogListener listener = cache.removeClientListener(listener);", "./bin/standalone.sh", "<properties> <infinispan.version>6.3.0-Final-redhat-1</infinispan.version> </properties> [...] <dependency> <groupId>org.infinispan</groupId> <artifactId>infinispan-remote</artifactId> <version>USD{infinispan.version}</version> </dependency>", "import org.infinispan.client.hotrod.annotation.*; import org.infinispan.client.hotrod.event.*; @ClientListener public class EventLogListener { @ClientCacheEntryCreated @ClientCacheEntryModified @ClientCacheEntryRemoved public void handleRemoteEvent(ClientEvent event) { System.out.println(event); } }", "import org.infinispan.client.hotrod.*; RemoteCacheManager rcm = new RemoteCacheManager(); RemoteCache<Integer, String> cache = rcm.getCache(); EventLogListener listener = new EventLogListener(); try { cache.addClientListener(listener); cache.put(1, \"one\"); cache.put(1, \"new-one\"); cache.remove(1); } finally { cache.removeClientListener(listener); }", "ClientCacheEntryCreatedEvent(key=1,dataVersion=1) ClientCacheEntryModifiedEvent(key=1,dataVersion=2) ClientCacheEntryRemovedEvent(key=1)", "package sample; import java.io.Serializable; import org.infinispan.notifications.cachelistener.filter.*; import org.infinispan.metadata.*; @NamedFactory(name = \"basic-filter-factory\") public class BasicKeyValueFilterFactory implements CacheEventFilterFactory { @Override public CacheEventFilter<Integer, String> getKeyValueFilter(final Object[] params) { return new BasicKeyValueFilter(); } static class BasicKeyValueFilter implements CacheEventFilter<Integer, String>, Serializable { @Override public boolean accept(Integer key, String oldValue, Metadata oldMetadata, String newValue, Metadata newMetadata, EventType eventType) { return !\"2\".equals(key); } } }", "[USDJDG_HOME] USD bin/cli.sh --connect=USDIP:USDPORT", "deploy /path/to/artifact.jar", "[USDJDG_HOME] USD bin/cli.sh --connect=USDIP:USDPORT", "module add --name=USDMODULE-NAME --resources=USDJAR-NAME.jar --dependencies=org.infinispan", "<dependencies> [...] 
<module name=\"USDMODULE-NAME\"> </dependencies>", "@org.infinispan.client.hotrod.annotation.ClientListener(filterFactoryName = \"basic-filter-factory\") public class BasicFilteredEventLogListener extends EventLogListener {}", "import org.infinispan.client.hotrod.*; RemoteCacheManager rcm = new RemoteCacheManager(); RemoteCache<Integer, String> cache = rcm.getCache(); BasicFilteredEventLogListener listener = new BasicFilteredEventLogListener(); try { cache.addClientListener(listener); cache.putIfAbsent(1, \"one\"); cache.replace(1, \"new-one\"); cache.putIfAbsent(2, \"two\"); cache.replace(2, \"new-two\"); cache.putIfAbsent(3, \"three\"); cache.replace(3, \"new-three\"); cache.remove(1); cache.remove(2); cache.remove(3); } finally { cache.removeClientListener(listener); }", "ClientCacheEntryCreatedEvent(key=1,dataVersion=1) ClientCacheEntryModifiedEvent(key=1,dataVersion=2) ClientCacheEntryCreatedEvent(key=3,dataVersion=5) ClientCacheEntryModifiedEvent(key=3,dataVersion=6) ClientCacheEntryRemovedEvent(key=1) ClientCacheEntryRemovedEvent(key=3)", "package sample; import java.io.Serializable; import org.infinispan.notifications.cachelistener.filter.*; import org.infinispan.metadata.*; @NamedFactory(name = \"basic-filter-factory\") public class BasicKeyValueFilterFactory implements CacheEventFilterFactory { @Override public CacheEventFilter<Integer, String> getKeyValueFilter(final Object[] params) { return new BasicKeyValueFilter(params); } static class BasicKeyValueFilter implements CacheEventFilter<Integer, String>, Serializable { private final Object[] params; public BasicKeyValueFilter(Object[] params) { this.params = params; } @Override public boolean accept(Integer key, String oldValue, Metadata oldMetadata, String newValue, Metadata newMetadata, EventType eventType) { return !params[0].equals(key); } } }", "import org.infinispan.client.hotrod.*; RemoteCacheManager rcm = new RemoteCacheManager(); RemoteCache<Integer, String> cache = rcm.getCache(); BasicFilteredEventLogListener listener = new BasicFilteredEventLogListener(); try { cache.addClientListener(listener, new Object[]{3}, null); // <- Filter parameter passed cache.putIfAbsent(1, \"one\"); cache.replace(1, \"new-one\"); cache.putIfAbsent(2, \"two\"); cache.replace(2, \"new-two\"); cache.putIfAbsent(3, \"three\"); cache.replace(3, \"new-three\"); cache.remove(1); cache.remove(2); cache.remove(3); } finally { cache.removeClientListener(listener); }", "ClientCacheEntryCreatedEvent(key=1,dataVersion=1) ClientCacheEntryModifiedEvent(key=1,dataVersion=2) ClientCacheEntryCreatedEvent(key=2,dataVersion=3) ClientCacheEntryModifiedEvent(key=2,dataVersion=4) ClientCacheEntryRemovedEvent(key=1) ClientCacheEntryRemovedEvent(key=2)", "[USDJDG_HOME] USD bin/cli.sh --connect=USDIP:USDPORT", "deploy /path/to/artifact.jar", "[USDJDG_HOME] USD bin/cli.sh --connect=USDIP:USDPORT", "module add --name=USDMODULE-NAME --resources=USDJAR-NAME.jar --dependencies=org.infinispan", "<dependencies> [...] <module name=\"USDMODULE-NAME\"> </dependencies>", "import org.infinispan.notifications.cachelistener.filter.*; @NamedFactory(name = \"value-added-converter-factory\") class ValueAddedConverterFactory implements CacheEventConverterFactory { // The following types correspond to the Key, Value, and the returned Event, respectively. 
public CacheEventConverter<Integer, String, ValueAddedEvent> getConverter(final Object[] params) { return new ValueAddedConverter(); } static class ValueAddedConverter implements CacheEventConverter<Integer, String, ValueAddedEvent> { public ValueAddedEvent convert(Integer key, String oldValue, Metadata oldMetadata, String newValue, Metadata newMetadata, EventType eventType) { return new ValueAddedEvent(key, value); } } } // Must be Serializable or Externalizable. class ValueAddedEvent implements Serializable { final Integer key; final String value; ValueAddedEvent(Integer key, String value) { this.key = key; this.value = value; } }", "sample.ValueAddedConverterFactor", "@ClientListener(converterFactoryName = \"value-added-converter-factory\") public class CustomEventLogListener { ... }", "import org.infinispan.notifications.cachelistener.filter.CacheEventConverterFactory; import org.infinispan.notifications.cachelistener.filter.CacheEventConverter; class DynamicCacheEventConverterFactory implements CacheEventConverterFactory { // The following types correspond to the Key, Value, and the returned Event, respectively. public CacheEventConverter<Integer, String, CustomEvent> getConverter(final Object[] params) { return new DynamicCacheEventConverter(params); } } // Serializable, Externalizable or marshallable with Infinispan Externalizers needed when running in a cluster class DynamicCacheEventConverter implements CacheEventConverter<Integer, String, CustomEvent>, Serializable { final Object[] params; DynamicCacheEventConverter(Object[] params) { this.params = params; } public CustomEvent convert(Integer key, String oldValue, Metadata metadata, String newValue, Metadata prevMetadata, EventType eventType) { // If the key matches a key given via parameter, only send the key information if (params[0].equals(key)) return new ValueAddedEvent(key, null); return new ValueAddedEvent(key, newValue); } }", "RemoteCache<Integer, String> cache = rcm.getCache(); cache.addClientListener(new EventLogListener(), null, new Object[]{1});", "import org.infinispan.client.hotrod.annotation.*; import org.infinispan.client.hotrod.event.*; @ClientListener(converterFactoryName = \"value-added-converter-factory\") public class CustomEventLogListener { @ClientCacheEntryCreated @ClientCacheEntryModified @ClientCacheEntryRemoved public void handleRemoteEvent(ClientCacheEntryCustomEvent<ValueAddedEvent> event) { System.out.println(event); } }", "import org.infinispan.client.hotrod.*; RemoteCacheManager rcm = new RemoteCacheManager(); RemoteCache<Integer, String> cache = rcm.getCache(); CustomEventLogListener listener = new CustomEventLogListener(); try { cache.addClientListener(listener); cache.put(1, \"one\"); cache.put(1, \"new-one\"); cache.remove(1); } finally { cache.removeClientListener(listener); }", "ClientCacheEntryCustomEvent(eventData=ValueAddedEvent{key=1, value='one'}, eventType=CLIENT_CACHE_ENTRY_CREATED) ClientCacheEntryCustomEvent(eventData=ValueAddedEvent{key=1, value='new-one'}, eventType=CLIENT_CACHE_ENTRY_MODIFIED) ClientCacheEntryCustomEvent(eventData=ValueAddedEvent{key=1, value='null'}, eventType=CLIENT_CACHE_ENTRY_REMOVED", "[USDJDG_HOME] USD bin/cli.sh --connect=USDIP:USDPORT", "deploy /path/to/artifact.jar", "[USDJDG_HOME] USD bin/cli.sh --connect=USDIP:USDPORT", "module add --name=USDMODULE-NAME --resources=USDJAR-NAME.jar --dependencies=org.infinispan", "<dependencies> [...] 
<module name=\"USDMODULE-NAME\"> </dependencies>", "import org.infinispan.client.hotrod.annotation.*; import org.infinispan.client.hotrod.event.*; @ClientListener public class EventLogListener { // @ClientCacheFailover public void handleFailover(ClientCacheFailoverEvent e) { // Deal with client failover, e.g. clear a near cache. } }" ]
https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/developer_guide/sect-Remote_Event_Listeners_Hot_Rod
Chapter 5. Configuring custom certificate authorities
Chapter 5. Configuring custom certificate authorities You can encrypt connections by using custom certificate authorities (CAs) with the MicroShift service. 5.1. How custom certificate authorities work in MicroShift The default API server certificate is issued by an internal MicroShift cluster certificate authority (CA). Clients outside of the cluster cannot verify the API server certificate by default. This certificate can be replaced by a custom server certificate that is issued externally by a custom CA that clients trust. The following steps illustrate the workflow in MicroShift: Copy the certificates and keys to the preferred directory in the host operating system. Ensure that the files are accessible by root only. Update the MicroShift configuration for each custom CA by specifying the certificate names and new fully qualified domain name (FQDN) in the MicroShift /etc/microshift/config.yaml configuration file. Each certificate configuration can contain the following values: The certificate file location is a required value. A single common name containing the API server DNS and IP address or IP address range. Tip In most cases, MicroShift generates a new kubeconfig for your custom CA that includes the IP address or range that you specify. The exception is when wildcards are specified for the IP address. In this case, MicroShift generates a kubeconfig with the public IP address of the server. To use wildcards, you must update the kubeconfig file with your specific details. Multiple Subject Alternative Names (SANs) containing the API server DNS and IP addresses or a wildcard certificate. You can provide additional DNS names for each certificate. After the MicroShift service restarts, you must copy the generated kubeconfig files to the client. Configure additional CAs on the client system. For example, you can update CA bundles in the Red Hat Enterprise Linux (RHEL) truststore. The certificates and keys are read from the specified file location on the host. Testing and validation of configuration is done from the client. External server certificates are not automatically renewed. You must manually rotate your external certificates. Note If any validation fails, the MicroShift service skips the custom configuration and uses the default certificate to start. The priority is to continue the service uninterrupted. MicroShift logs errors when the service starts. Common errors include expired certificates, missing files, or incorrect IP addresses. Important Custom server certificates have to be validated against CA data configured in the trust root of the host operating system. For information, see The system-wide truststore . 5.2. Configuring custom certificate authorities To configure externally generated certificates and domain names using custom certificate authorities (CAs), add them to the MicroShift /etc/microshift/config.yaml configuration file. You must also configure the host operating system trust root. Note Externally generated kubeconfig files are created in the /var/lib/microshift/resources/kubeadmin/<hostname>/kubeconfig directory. If you need to use localhost in addition to externally generated configurations, retain the original kubeconfig file in its default location. The localhost kubeconfig file uses the self-signed certificate authority. Prerequisites The OpenShift CLI ( oc ) is installed. You have access to the cluster as a user with the cluster administration role. The certificate authority has issued the custom certificates. 
A MicroShift /etc/microshift/config.yaml configuration file exists. Procedure Copy the custom certificates you want to add to the trust root of the MicroShift host. Ensure that the certificate and private keys are only accessible to MicroShift. For each custom CA that you need, add an apiServer section called namedCertificates to the /etc/microshift/config.yaml MicroShift configuration file by using the following example: apiServer: namedCertificates: - certPath: ~/certs/api_fqdn_1.crt 1 keyPath: ~/certs/api_fqdn_1.key 2 - certPath: ~/certs/api_fqdn_2.crt keyPath: ~/certs/api_fqdn_2.key names: 3 - api_fqdn_1 - *.apps.external.com 1 Add the full path to the certificate. 2 Add the full path to the certificate key. 3 Optional. Add a list of explicit DNS names. Leading wildcards are allowed. If no names are provided, the implicit names are extracted from the certificates. Restart the MicroShift service to apply the certificates by running the following command: USD systemctl restart microshift Wait a few minutes for the system to restart and apply the custom server certificate. New kubeconfig files are generated in the /var/lib/microshift/resources/kubeadmin/ directory. Copy the kubeconfig files to the client. If you specified wildcards for the IP address, update the kubeconfig to remove the public IP address of the server and replace that IP address with the specific wildcard range you want to use. From the client, use the following steps: Specify the kubeconfig to use by running the following command: USD export KUBECONFIG=~/custom-kubeconfigs/kubeconfig 1 1 Use the location of the copied kubeconfig file as the path. Check that the certificates are applied by using the following command: USD oc --certificate-authority ~/certs/ca.ca get node Example output oc get node NAME STATUS ROLES AGE VERSION dhcp-1-235-195.arm.example.com Ready control-plane,master,worker 76m v1.31.3 Add the new CA file to the USDKUBECONFIG environment variable by running the following command: USD oc config set clusters.microshift.certificate-authority /tmp/certificate-authority-data-new.crt Verify that the new kubeconfig file contains the new CA by running the following command: USD oc config view --flatten Example externally generated kubeconfig file apiVersion: v1 clusters: - cluster: certificate-authority: /tmp/certificate-authority-data-new.crt 1 server: https://api.ci-ln-k0gim2b-76ef8.aws-2.ci.openshift.org:6443 name: ci-ln-k0gim2b-76ef8 contexts: - context: cluster: ci-ln-k0gim2b-76ef8 user: name: current-context: kind: Config preferences: {} 1 The certificate-authority-data section is not present in externally generated kubeconfig files. It is added with the oc config set command used previously. Verify the subject and issuer of your customized API server certificate authority by running the following command: USD curl --cacert /tmp/caCert.pem https://USD{fqdn_name}:6443/healthz -v Example output Important Either replace the certificate-authority-data in the generated kubeconfig file with the new rootCA or add the certificate-authority-data to the trust root of the operating system. Do not use both methods. Configure additional CAs in the trust root of the operating system. For example, in the RHEL Client truststore on the client system. See The system-wide truststore for details. Updating the certificate bundle with the configuration that contains the CA is recommended.
If you do not want to configure your certificate bundles, you can alternately use the oc login localhost:8443 --certificate-authority=/path/to/cert.crt command, but this method is not preferred. 5.3. Custom certificates reserved name values The following certificate problems cause MicroShift to ignore certificates dynamically and log an error: The certificate files do not exist on the disk or are not readable. The certificate is not parsable. The certificate overrides the internal certificates IP addresses or DNS names in a SubjectAlternativeNames (SAN) field. Do not use a reserved name when configuring SANs. Table 5.1. Reserved Names values Address Type Comment localhost DNS 127.0.0.1 IP Address 10.42.0.0 IP Address Cluster Network 10.43.0.0/16,10.44.0.0/16 IP Address Service Network 169.254.169.2/29 IP Address br-ex Network kubernetes.default.svc DNS openshift.default.svc DNS svc.cluster.local DNS 5.4. Troubleshooting custom certificates To troubleshoot the implementation of custom certificates, you can take the following steps. Procedure From MicroShift, ensure that the certificate is served by the kube-apiserver and verify that the certificate path is appended to the --tls-sni-cert-key FLAG by running the following command: USD journalctl -u microshift -b0 | grep tls-sni-cert-key Example output Jan 24 14:53:00 localhost.localdomain microshift[45313]: kube-apiserver I0124 14:53:00.649099 45313 flags.go:64] FLAG: --tls-sni-cert-key="[/home/eslutsky/dev/certs/server.crt,/home/eslutsky/dev/certs/server.key;/var/lib/microshift/certs/kube-apiserver-external-signer/kube-external-serving/server.crt,/var/lib/microshift/certs/kube-apiserver-external-signer/kube-external-serving/server.key;/var/lib/microshift/certs/kube-apiserver-localhost-signer/kube-apiserver-localhost-serving/server.crt,/var/lib/microshift/certs/kube-apiserver-localhost-signer/kube-apiserver-localhost-serving/server.key;/var/lib/microshift/certs/kube-apiserver-service-network-signer/kube-apiserver-service-network-serving/server.crt,/var/lib/microshift/certs/kube-apiserver-service-network-signer/kube-apiserver-service-network-serving/server.key From the client, ensure that the kube-apiserver is serving the correct certificate by running the following command: USD openssl s_client -connect <SNI_ADDRESS>:6443 -showcerts | openssl x509 -text -noout -in - | grep -C 1 "Alternative\|CN" 5.5. Cleaning up and recreating the custom certificates To stop the MicroShift services, clean up the custom certificates and recreate the custom certificates, use the following steps. Procedure Stop the MicroShift services and clean up the custom certificates by running the following command: USD sudo microshift-cleanup-data --cert Example output Stopping MicroShift services Removing MicroShift certificates MicroShift service was stopped Cleanup succeeded Restart the MicroShift services to recreate the custom certificates by running the following command: USD sudo systemctl start microshift 5.6. Additional resources OpenShift: Add an API server named certificate RHEL: Creating and managing TLS keys and certificates The system-wide truststore OpenShift CLI Reference: oc login
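Beyond the curl and openssl checks shown above, the same validation can be scripted from a client application. The following sketch is not part of the MicroShift documentation; the CA bundle path and API server URL are placeholder assumptions, and the program fails with a TLS handshake error if the certificate presented by the API server does not chain to the supplied custom CA:

import java.io.FileInputStream;
import java.io.InputStream;
import java.net.URL;
import java.security.KeyStore;
import java.security.cert.Certificate;
import java.security.cert.CertificateFactory;
import javax.net.ssl.HttpsURLConnection;
import javax.net.ssl.SSLContext;
import javax.net.ssl.TrustManagerFactory;

public class CustomCaHealthCheck {
    public static void main(String[] args) throws Exception {
        // Both values are placeholders; adjust them to your deployment.
        String caBundle = args.length > 0 ? args[0] : "/tmp/caCert.pem";
        String endpoint = args.length > 1 ? args[1] : "https://api.example.com:6443/healthz";

        // Load every certificate in the PEM bundle into an in-memory trust store.
        CertificateFactory cf = CertificateFactory.getInstance("X.509");
        KeyStore trustStore = KeyStore.getInstance(KeyStore.getDefaultType());
        trustStore.load(null, null);
        int index = 0;
        try (InputStream in = new FileInputStream(caBundle)) {
            for (Certificate cert : cf.generateCertificates(in)) {
                trustStore.setCertificateEntry("custom-ca-" + index++, cert);
            }
        }

        // Build an SSLContext that trusts only the custom CA bundle.
        TrustManagerFactory tmf =
                TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
        tmf.init(trustStore);
        SSLContext ctx = SSLContext.getInstance("TLS");
        ctx.init(null, tmf.getTrustManagers(), null);

        HttpsURLConnection conn = (HttpsURLConnection) new URL(endpoint).openConnection();
        conn.setSSLSocketFactory(ctx.getSocketFactory());
        // A handshake failure means the served certificate does not chain to the
        // custom CA; an HTTP 200 response means the chain validated.
        System.out.println("HTTP status: " + conn.getResponseCode());
    }
}

Because only the custom CA is loaded into the trust store, a successful response confirms that the named certificate configured in /etc/microshift/config.yaml is the one actually being served.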
[ "apiServer: namedCertificates: - certPath: ~/certs/api_fqdn_1.crt 1 keyPath: ~/certs/api_fqdn_1.key 2 - certPath: ~/certs/api_fqdn_2.crt keyPath: ~/certs/api_fqdn_2.key names: 3 - api_fqdn_1 - *.apps.external.com", "systemctl microshift restart", "export KUBECONFIG=~/custom-kubeconfigs/kubeconfig 1", "oc --certificate-authority ~/certs/ca.ca get node", "get node NAME STATUS ROLES AGE VERSION dhcp-1-235-195.arm.example.com Ready control-plane,master,worker 76m v1.31.3", "oc config set clusters.microshift.certificate-authority /tmp/certificate-authority-data-new.crt", "oc config view --flatten", "apiVersion: v1 clusters: - cluster: certificate-authority: /tmp/certificate-authority-data-new.crt 1 server: https://api.ci-ln-k0gim2b-76ef8.aws-2.ci.openshift.org:6443 name: ci-ln-k0gim2b-76ef8 contexts: - context: cluster: ci-ln-k0gim2b-76ef8 user: name: current-context: kind: Config preferences: {}", "curl --cacert /tmp/caCert.pem https://USD{fqdn_name}:6443/healthz -v", "Server certificate: subject: CN=kas-test-cert_server start date: Mar 12 11:39:46 2024 GMT expire date: Mar 12 11:39:46 2025 GMT subjectAltName: host \"dhcp-1-235-3.arm.eng.rdu2.redhat.com\" matched cert's \"dhcp-1-235-3.arm.eng.rdu2.redhat.com\" issuer: CN=kas-test-cert_ca SSL certificate verify ok.", "journalctl -u microshift -b0 | grep tls-sni-cert-key", "Jan 24 14:53:00 localhost.localdomain microshift[45313]: kube-apiserver I0124 14:53:00.649099 45313 flags.go:64] FLAG: --tls-sni-cert-key=\"[/home/eslutsky/dev/certs/server.crt,/home/eslutsky/dev/certs/server.key;/var/lib/microshift/certs/kube-apiserver-external-signer/kube-external-serving/server.crt,/var/lib/microshift/certs/kube-apiserver-external-signer/kube-external-serving/server.key;/var/lib/microshift/certs/kube-apiserver-localhost-signer/kube-apiserver-localhost-serving/server.crt,/var/lib/microshift/certs/kube-apiserver-localhost-signer/kube-apiserver-localhost-serving/server.key;/var/lib/microshift/certs/kube-apiserver-service-network-signer/kube-apiserver-service-network-serving/server.crt,/var/lib/microshift/certs/kube-apiserver-service-network-signer/kube-apiserver-service-network-serving/server.key", "openssl s_client -connect <SNI_ADDRESS>:6443 -showcerts | openssl x509 -text -noout -in - | grep -C 1 \"Alternative\\|CN\"", "sudo microshift-cleanup-data --cert", "Stopping MicroShift services Removing MicroShift certificates MicroShift service was stopped Cleanup succeeded", "sudo systemctl start microshift" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_microshift/4.18/html/configuring/microshift-custom-ca
Chapter 5. Troubleshooting RHOSP dynamic routing
Chapter 5. Troubleshooting RHOSP dynamic routing Diagnosing problems in a Red Hat OpenStack Platform (RHOSP) environment that uses dynamic routing begins with examining the appropriate logs and querying the various FRRouting components with VTY shell. The topics included in this section are: OVN BGP agent and FRRouting logs Using VTY shell commands for troubleshooting BGP 5.1. OVN BGP agent and FRRouting logs The Red Hat OpenStack Platform (RHOSP) OVN BGP agent writes its logs on the Compute and Networker nodes at this location: /var/log/containers/stdouts/ovn_bgp_agent.log . The Free Range Routing (FRRouting, or FRR) components such as the BGP, BFD, and Zebra daemons write their logs on all RHOSP nodes at this location: /var/log/containers/frr/frr.log 5.2. Using VTY shell commands for troubleshooting BGP You can use the shell for Virtual Terminal Interface (VTY shell) to interact with the Free Range Routing (FRRouting, or FRR) daemons. In Red Hat OpenStack Platform, FRR daemons like bgpd run inside a container. Using the VTY shell can help you troubleshoot BGP routing issues. Prerequisites You must have sudo privileges on the host where you want to run VTY shell commands. Procedure Log in to the host where you want to troubleshoot the BGP daemon, bgpd . Typically, bgpd runs on all of the overcloud nodes. Enter the FRR container. You have two options for running VTY shell commands: Interactive mode Type sudo vtysh once to enter interactive mode to run multiple VTY shell commands. Example Direct mode Preface each VTY shell command with sudo vtysh -c . Example Here are several VTY shell BGP daemon troubleshooting commands: Tip Omit the ip argument when you are using IPv6. Display a particular routing table or all routing tables: Output routes advertised to a BGP peer Output routes received from a BGP peer Additional resources Displaying BGP Information in the FRRouting User Guide
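For repeated health checks, the direct-mode vtysh invocation can be wrapped in a small program. The following sketch is not part of the RHOSP documentation; it assumes it runs on an overcloud node where the sudo podman exec frr vtysh commands shown above work, and it applies only a rough string match to flag BGP peers that have not reached the Established state:

import java.io.BufferedReader;
import java.io.InputStreamReader;

public class BgpPeerCheck {

    // Runs one VTY shell command in direct mode inside the frr container.
    static String vtysh(String command) throws Exception {
        ProcessBuilder pb =
                new ProcessBuilder("sudo", "podman", "exec", "frr", "vtysh", "-c", command);
        pb.redirectErrorStream(true);
        Process process = pb.start();
        StringBuilder out = new StringBuilder();
        try (BufferedReader reader =
                new BufferedReader(new InputStreamReader(process.getInputStream()))) {
            String line;
            while ((line = reader.readLine()) != null) {
                out.append(line).append('\n');
            }
        }
        process.waitFor();
        return out.toString();
    }

    public static void main(String[] args) throws Exception {
        String summary = vtysh("show bgp summary");
        System.out.println(summary);
        // Rough heuristic: peers that are not yet Established show a state name
        // such as Idle, Connect, or Active in the State/PfxRcd column instead
        // of a prefix count.
        for (String line : summary.split("\n")) {
            if (line.matches(".*\\b(Idle|Connect|Active|OpenSent|OpenConfirm)\\b.*")) {
                System.out.println("Peer not established: " + line.trim());
            }
        }
    }
}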
[ "sudo podman exec -it frr bash", "sudo vtysh > show bgp summary", "sudo vtysh -c 'show bgp summary'", "> show ip bgp <IPv4_address> | all > show bgp <IPv6_address> | all", "> show ip bgp neighbors <router-ID> <advertised-routes>", "> show ip bgp neighbors <router-ID> <received-routes>" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/configuring_dynamic_routing_in_red_hat_openstack_platform/troubleshoot-rhosp-dynamic-routing_rhosp-bgp
Chapter 2. Deploying an Identity Management Server in a Container
Chapter 2. Deploying an Identity Management Server in a Container This chapter describes how you can install a fresh Identity Management server to start a new topology. Before you begin, read Section 2.1, "Prerequisites" and Section 2.2, "Available Configuration in Server and Replica Containers" . Choose one of the following installation procedures. If you are not sure which certificate authority (CA) configuration fits your situation, see Determining What CA Configuration to Use in the Linux Domain Identity, Authentication, and Policy Guide . Section 2.3, "Installing an Identity Management Server in a Container: Basic Installation" Section 2.4, "Installing an Identity Management Server in a Container: External CA" Section 2.5, "Installing an Identity Management Server in a Container: Without a CA" After you are done, read Section 2.6, " Steps After Installation" . 2.1. Prerequisites Upgrade the Atomic Host system before installing the container. See Upgrading and Downgrading in the Red Hat Enterprise Linux Atomic Host 7 Installation and Configuration Guide . 2.2. Available Configuration in Server and Replica Containers What Is Available Domain level 1 or higher Domain level 0 is not available for containers. See also Displaying and Raising the Domain Level . As a consequence, servers running in containers can be joined in a replication agreement only with Identity Management servers based on Red Hat Enterprise Linux 7.3 or later. Mixed container and non-container deployments A single Identity Management domain topology can include both container-based and RPM-based servers. What Is Not Available Changing server components in a deployed container Do not make runtime modifications of deployed containers. If you need to change or reinstall a server component, such as integrated DNS or Vault, create a new replica. Upgrading between different Linux distributions Do not change the platform on which an ipa-server container image runs. For example, do not change an image running on Red Hat Enterprise Linux to Fedora, Ubuntu, or CentOS. Similarly, do not change an image running on Fedora, Ubuntu, or CentOS to Red Hat Enterprise Linux. Identity Management supports only upgrades to later versions of Red Hat Enterprise Linux. Downgrading the system with a running container Do not downgrade the system on which an ipa-server container image runs. Upstream containers on Atomic Host Do not install upstream container images, such as the FreeIPA ipa-server image, on Atomic Host. Install only the container images available in Red Hat Enterprise Linux. Multiple containers on a single Atomic Host Install only one ipa-server container image on a single Atomic Host. 2.3. Installing an Identity Management Server in a Container: Basic Installation This procedure shows how to install a containerized Identity Management server in the default certificate authority (CA) configuration with an integrated CA. Before You Start Note that the container installation uses the same default configuration as a non-container installation using ipa-server-install . To specify custom configuration, add additional options to the atomic install command used in the procedure below: Atomic options available for the ipa-server container. For a complete list, see the container help page. Identity Management installer options accepted by ipa-server-install , described in Installing and Uninstalling an Identity Management Server in the Linux Domain Identity, Authentication, and Policy Guide . 
Procedure Use the atomic install rhel7/ipa-server publish --hostname fully_qualified_domain_name ipa-server-install command to start the installation. The container requires its own host name. Use a different host name for the container than the host name of the Atomic Host system. The container's host name must be resolvable via DNS or the /etc/hosts file. Note Installing a server or replica container does not enroll the Atomic Host system itself to the Identity Management domain. If you use the Atomic Host system's host name for the server or replica, you will be unable to enroll the Atomic Host system later. Important Always use the --hostname option with atomic install when installing the server or replica container. Because --hostname is considered an Atomic option in this case, not an Identity Management installer option, use it before the ipa-server-install option. The installation ignores --hostname when used after ipa-server-install . If you are installing a server with integrated DNS, add also the --ip-address option to specify the public IP address of the Atomic Host that is reachable from the network. You can use --ip-address multiple times. Warning Unless you want to install the container for testing purposes only, always use the publish option. Without publish , no ports will be published to the Atomic Host system, and the server will not be reachable from outside the container. The ipa-server-install setup script starts: The process is the same as when using the ipa-server-install utility to install a non-container server. Example 2.1. Installation Command Examples Command syntax for installing the ipa-server container: To install a server container named server-container and use default values for the Identity Management server settings: To install a server with a custom host name ( --hostname ) and integrated DNS ( --setup-dns ): 2.4. Installing an Identity Management Server in a Container: External CA This procedure describes how to install a server with an integrated Identity Management certificate authority (CA) that is subordinate to an external CA. A containerized Identity Management server and the Atomic Host system share only the parts of the file system that are mounted using a bind mount into the container. Therefore, operations related to external files must be performed from within this volume. The ipa-server container image uses the /var/lib/<container_name>/ directory to store persistent files on the Atomic Host file system. The persistent storage volume maps to the /data/ directory inside the container. Before You Start Note that the container installation uses the same default configuration as a non-container installation using ipa-server-install . To specify custom configuration, add additional options to the atomic install command used in the procedure below: Atomic options available for the ipa-server container. For a complete list, see the container help page. Identity Management installer options accepted by ipa-server-install , described in Installing and Uninstalling an Identity Management Server in the Linux Domain Identity, Authentication, and Policy Guide . Procedure Use the atomic install rhel7/ipa-server publish --hostname fully_qualified_domain_name ipa-server-install --external-ca command to start the installation. The container requires its own host name. Use a different host name for the container than the host name of the Atomic Host system. The container's host name must be resolvable via DNS or the /etc/hosts file. 
Note Installing a server or replica container does not enroll the Atomic Host system itself to the Identity Management domain. If you use the Atomic Host system's host name for the server or replica, you will be unable to enroll the Atomic Host system later. Important Always use the --hostname option with atomic install when installing the server or replica container. Because --hostname is considered an Atomic option in this case, not an Identity Management installer option, use it before the ipa-server-install option. The installation ignores --hostname when used after ipa-server-install . If you are installing a server with integrated DNS, add also the --ip-address option to specify the public IP address of the Atomic Host that is reachable from the network. You can use --ip-address multiple times. Warning Unless you want to install the container for testing purposes only, always use the publish option. Without publish , no ports will be published to the Atomic Host system, and the server will not be reachable from outside the container. The ipa-server-install setup script starts: The process is the same as when using the ipa-server-install utility to install a non-container server. The container installation script generates the certificate signing request (CSR) in the /var/lib/<container_name>/root/ipa.csr file. Submit the CSR to the external CA, and retrieve the issued certificate and the CA certificate chain for the issuing CA. See Installing a Server with an External CA as the Root CA in the Linux Domain Identity, Authentication, and Policy Guide for details. Copy the signed CA certificate and the root CA certificate into the /var/lib/<container_name>/ directory. Use the atomic run command with the --external-cert-file option to specify the location of the certificates. Specify the location relative to the /data/ directory because the installer performs the call from inside the container The installation resumes. The installer now uses the supplied certificates to set up the subordinate CA. 2.5. Installing an Identity Management Server in a Container: Without a CA This procedure describes how to install a server without an integrated Identity Management certificate authority (CA). A containerized Identity Management server and the Atomic Host system share only the parts of the file system that are mounted using a bind mount into the container. Therefore, operations related to external files must be performed from within this volume. The ipa-server container image uses the /var/lib/<container_name>/ directory to store persistent files on the Atomic Host file system. The persistent storage volume maps to the /data/ directory inside the container. Before You Start Note that the container installation uses the same default configuration as a non-container installation using ipa-server-install . To specify custom configuration, add additional options to the atomic install command used in the procedure below: Atomic options available for the ipa-server container. For a complete list, see the container help page. Identity Management installer options accepted by ipa-server-install , described in Installing and Uninstalling an Identity Management Server in the Linux Domain Identity, Authentication, and Policy Guide . 
Procedure Manually create the persistent storage directory for the container at /var/lib/<container_name>/ : Copy the files containing the certificate chain into the directory: See Installing Without a CA in the Linux Domain Identity, Authentication, and Policy Guide for details on the required files. Use the atomic install command, and provide the required certificates from the third-party authority: The container requires its own host name. Use a different host name for the container than the host name of the Atomic Host system. The container's host name must be resolvable via DNS or the /etc/hosts file. Note Installing a server or replica container does not enroll the Atomic Host system itself to the Identity Management domain. If you use the Atomic Host system's host name for the server or replica, you will be unable to enroll the Atomic Host system later. Important Always use the --hostname option with atomic install when installing the server or replica container. Because --hostname is considered an Atomic option in this case, not an Identity Management installer option, use it before the ipa-server-install option. The installation ignores --hostname when used after ipa-server-install . If you are installing a server with integrated DNS, add also the --ip-address option to specify the public IP address of the Atomic Host that is reachable from the network. You can use --ip-address multiple times. Warning Unless you want to install the container for testing purposes only, always use the publish option. Without publish , no ports will be published to the Atomic Host system, and the server will not be reachable from outside the container. The ipa-server-install setup script starts: The process is the same as when using the ipa-server-install utility to install a non-container server. 2.6. Steps After Installation To run the container, use the atomic run command: If you specified a name for the container when you installed it: A running ipa-server container works in the same way as in a standard Identity Management deployment on bare-metal or virtual machine systems. For example, you can enroll hosts to the domain or manage the topology using the command-line interface, the web UI, or JSONRPC-API in the same way as RPM-based Identity Management systems.
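Because a containerized server exposes the same interfaces as an RPM-based deployment, the JSON-RPC API mentioned above can be exercised directly once the container's ports are published. The following sketch is not taken from the Identity Management documentation; the endpoint paths, the required Referer header, and the request body follow the upstream FreeIPA JSON-RPC convention and should be verified against your deployment, the host name, user name, and password are placeholders, and the Identity Management CA certificate is assumed to already be in the JVM truststore:

import java.net.CookieManager;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class IpaPing {
    public static void main(String[] args) throws Exception {
        // Placeholders: use the container's host name, not the Atomic Host name.
        String server = "https://server.example.com";
        String user = "admin";
        String password = System.getenv("IPA_PASSWORD");

        HttpClient client = HttpClient.newBuilder()
                .cookieHandler(new CookieManager())   // keeps the session cookie between calls
                .build();

        // 1. Form-based login to obtain a session cookie.
        HttpRequest login = HttpRequest.newBuilder(URI.create(server + "/ipa/session/login_password"))
                .header("Content-Type", "application/x-www-form-urlencoded")
                .header("Referer", server + "/ipa")
                .POST(HttpRequest.BodyPublishers.ofString("user=" + user + "&password=" + password))
                .build();
        HttpResponse<String> loginResponse = client.send(login, HttpResponse.BodyHandlers.ofString());
        System.out.println("login status: " + loginResponse.statusCode());

        // 2. Call the ping command through the JSON-RPC endpoint using the session.
        String body = "{\"method\": \"ping\", \"params\": [[], {}], \"id\": 0}";
        HttpRequest ping = HttpRequest.newBuilder(URI.create(server + "/ipa/session/json"))
                .header("Content-Type", "application/json")
                .header("Referer", server + "/ipa")
                .header("Accept", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();
        HttpResponse<String> pingResponse = client.send(ping, HttpResponse.BodyHandlers.ofString());
        System.out.println(pingResponse.body());
    }
}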
[ "The log file for this installation can be found in /var/log/ipaserver-install.log ======================================== This program will set up the IPA Server. [... output truncated ...]", "atomic install [ --name <container_name> ] rhel7/ipa-server [ Atomic options ] [ ipa-server-install | ipa-replica-install ] [ ipa-server-install or ipa-replica-install options ]", "atomic install --name server-container rhel7/ipa-server publish --hostname server.example.com ipa-server-install --ip-address 2001:DB8::1111", "atomic install rhel7/ipa-server publish --hostname server.example.com ipa-server-install --setup-dns --ip-address 2001:DB8::1111", "The log file for this installation can be found in /var/log/ipaserver-install.log ======================================== This program will set up the IPA Server. [... output truncated ...]", "cp /root/{ipa,ca}.crt /var/lib/server-container/.", "atomic run rhel7/ipa-server ipa-server-install --external-cert-file /data/ipa.crt --external-cert-file /data/ca.crt", "mkdir -p /var/lib/ipa-server", "cp /root/server-*.p12 /var/lib/ipa-server/.", "atomic install --name server-container rhel7/ipa-server publish --hostname server.example.com ipa-server-install --dirsrv-cert-file=/data/server-dirsrv-cert.p12 --dirsrv-pin=1234 --http-cert-file=/data/server-http-cert.p12 --http-pin=1234 --pkinit-cert-file=/data/server-pkinit-cert.p12 --pkinit-pin=1234", "The log file for this installation can be found in /var/log/ipaserver-install.log ======================================== This program will set up the IPA Server. [... output truncated ...]", "atomic run rhel7/ipa-server", "atomic run --name server-container rhel7/ipa-server" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/using_containerized_identity_management_services/deploying-an-identity-management-server-in-a-container
Chapter 10. HelmChartRepository [helm.openshift.io/v1beta1]
Chapter 10. HelmChartRepository [helm.openshift.io/v1beta1] Description HelmChartRepository holds cluster-wide configuration for proxied Helm chart repository Compatibility level 2: Stable within a major release for a minimum of 9 months or 3 minor releases (whichever is longer). Type object Required spec 10.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object spec holds user settable values for configuration status object Observed status of the repository within the cluster.. 10.1.1. .spec Description spec holds user settable values for configuration Type object Property Type Description connectionConfig object Required configuration for connecting to the chart repo description string Optional human readable repository description, it can be used by UI for displaying purposes disabled boolean If set to true, disable the repo usage in the cluster/namespace name string Optional associated human readable repository name, it can be used by UI for displaying purposes 10.1.2. .spec.connectionConfig Description Required configuration for connecting to the chart repo Type object Property Type Description ca object ca is an optional reference to a config map by name containing the PEM-encoded CA bundle. It is used as a trust anchor to validate the TLS certificate presented by the remote server. The key "ca-bundle.crt" is used to locate the data. If empty, the default system roots are used. The namespace for this config map is openshift-config. tlsClientConfig object tlsClientConfig is an optional reference to a secret by name that contains the PEM-encoded TLS client certificate and private key to present when connecting to the server. The key "tls.crt" is used to locate the client certificate. The key "tls.key" is used to locate the private key. The namespace for this secret is openshift-config. url string Chart repository URL 10.1.3. .spec.connectionConfig.ca Description ca is an optional reference to a config map by name containing the PEM-encoded CA bundle. It is used as a trust anchor to validate the TLS certificate presented by the remote server. The key "ca-bundle.crt" is used to locate the data. If empty, the default system roots are used. The namespace for this config map is openshift-config. Type object Required name Property Type Description name string name is the metadata.name of the referenced config map 10.1.4. .spec.connectionConfig.tlsClientConfig Description tlsClientConfig is an optional reference to a secret by name that contains the PEM-encoded TLS client certificate and private key to present when connecting to the server. The key "tls.crt" is used to locate the client certificate. The key "tls.key" is used to locate the private key. The namespace for this secret is openshift-config. 
Type object Required name Property Type Description name string name is the metadata.name of the referenced secret 10.1.5. .status Description Observed status of the repository within the cluster.. Type object Property Type Description conditions array conditions is a list of conditions and their statuses conditions[] object Condition contains details for one aspect of the current state of this API Resource. --- This struct is intended for direct use as an array at the field path .status.conditions. For example, type FooStatus struct{ // Represents the observations of a foo's current state. // Known .status.conditions.type are: "Available", "Progressing", and "Degraded" // +patchMergeKey=type // +patchStrategy=merge // +listType=map // +listMapKey=type Conditions []metav1.Condition json:"conditions,omitempty" patchStrategy:"merge" patchMergeKey:"type" protobuf:"bytes,1,rep,name=conditions" // other fields } 10.1.6. .status.conditions Description conditions is a list of conditions and their statuses Type array 10.1.7. .status.conditions[] Description Condition contains details for one aspect of the current state of this API Resource. --- This struct is intended for direct use as an array at the field path .status.conditions. For example, type FooStatus struct{ // Represents the observations of a foo's current state. // Known .status.conditions.type are: "Available", "Progressing", and "Degraded" // +patchMergeKey=type // +patchStrategy=merge // +listType=map // +listMapKey=type Conditions []metav1.Condition json:"conditions,omitempty" patchStrategy:"merge" patchMergeKey:"type" protobuf:"bytes,1,rep,name=conditions" // other fields } Type object Required lastTransitionTime message reason status type Property Type Description lastTransitionTime string lastTransitionTime is the last time the condition transitioned from one status to another. This should be when the underlying condition changed. If that is not known, then using the time when the API field changed is acceptable. message string message is a human readable message indicating details about the transition. This may be an empty string. observedGeneration integer observedGeneration represents the .metadata.generation that the condition was set based upon. For instance, if .metadata.generation is currently 12, but the .status.conditions[x].observedGeneration is 9, the condition is out of date with respect to the current state of the instance. reason string reason contains a programmatic identifier indicating the reason for the condition's last transition. Producers of specific condition types may define expected values and meanings for this field, and whether the values are considered a guaranteed API. The value should be a CamelCase string. This field may not be empty. status string status of the condition, one of True, False, Unknown. type string type of condition in CamelCase or in foo.example.com/CamelCase. --- Many .condition.type values are consistent across resources like Available, but because arbitrary conditions can be useful (see .node.status.conditions), the ability to deconflict is important. The regex it matches is (dns1123SubdomainFmt/)?(qualifiedNameFmt) 10.2. 
API endpoints The following API endpoints are available: /apis/helm.openshift.io/v1beta1/helmchartrepositories DELETE : delete collection of HelmChartRepository GET : list objects of kind HelmChartRepository POST : create a HelmChartRepository /apis/helm.openshift.io/v1beta1/helmchartrepositories/{name} DELETE : delete a HelmChartRepository GET : read the specified HelmChartRepository PATCH : partially update the specified HelmChartRepository PUT : replace the specified HelmChartRepository /apis/helm.openshift.io/v1beta1/helmchartrepositories/{name}/status GET : read status of the specified HelmChartRepository PATCH : partially update status of the specified HelmChartRepository PUT : replace status of the specified HelmChartRepository 10.2.1. /apis/helm.openshift.io/v1beta1/helmchartrepositories Table 10.1. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of HelmChartRepository Table 10.2. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. 
If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 10.3. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind HelmChartRepository Table 10.4. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. 
Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 10.5. HTTP responses HTTP code Reponse body 200 - OK HelmChartRepositoryList schema 401 - Unauthorized Empty HTTP method POST Description create a HelmChartRepository Table 10.6. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. 
This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 10.7. Body parameters Parameter Type Description body HelmChartRepository schema Table 10.8. HTTP responses HTTP code Reponse body 200 - OK HelmChartRepository schema 201 - Created HelmChartRepository schema 202 - Accepted HelmChartRepository schema 401 - Unauthorized Empty 10.2.2. /apis/helm.openshift.io/v1beta1/helmchartrepositories/{name} Table 10.9. Global path parameters Parameter Type Description name string name of the HelmChartRepository Table 10.10. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a HelmChartRepository Table 10.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 10.12. Body parameters Parameter Type Description body DeleteOptions schema Table 10.13. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified HelmChartRepository Table 10.14. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 10.15. 
HTTP responses HTTP code Reponse body 200 - OK HelmChartRepository schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified HelmChartRepository Table 10.16. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 10.17. Body parameters Parameter Type Description body Patch schema Table 10.18. HTTP responses HTTP code Reponse body 200 - OK HelmChartRepository schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified HelmChartRepository Table 10.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. 
The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 10.20. Body parameters Parameter Type Description body HelmChartRepository schema Table 10.21. HTTP responses HTTP code Reponse body 200 - OK HelmChartRepository schema 201 - Created HelmChartRepository schema 401 - Unauthorized Empty 10.2.3. /apis/helm.openshift.io/v1beta1/helmchartrepositories/{name}/status Table 10.22. Global path parameters Parameter Type Description name string name of the HelmChartRepository Table 10.23. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read status of the specified HelmChartRepository Table 10.24. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 10.25. HTTP responses HTTP code Reponse body 200 - OK HelmChartRepository schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified HelmChartRepository Table 10.26. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 10.27. Body parameters Parameter Type Description body Patch schema Table 10.28. 
HTTP responses HTTP code Reponse body 200 - OK HelmChartRepository schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified HelmChartRepository Table 10.29. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 10.30. Body parameters Parameter Type Description body HelmChartRepository schema Table 10.31. HTTP responses HTTP code Reponse body 200 - OK HelmChartRepository schema 201 - Created HelmChartRepository schema 401 - Unauthorized Empty
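To tie the schema above together, the following is a minimal, hypothetical sketch of a HelmChartRepository custom resource built only from the fields documented in this chapter (spec.name, spec.description, spec.disabled, and spec.connectionConfig with its url, ca, and tlsClientConfig references). The resource holds cluster-wide configuration and is created without a namespace; the repository URL, the resource name internal-charts, and the referenced openshift-config objects helm-ca-bundle (a config map expected to carry the key ca-bundle.crt) and helm-tls-client (a secret expected to carry tls.crt and tls.key) are illustrative assumptions, not values taken from the product documentation. The ca and tlsClientConfig blocks are optional and can be omitted when the default system roots are sufficient.

apiVersion: helm.openshift.io/v1beta1
kind: HelmChartRepository
metadata:
  name: internal-charts                 # hypothetical resource name; cluster-scoped, so no namespace is set
spec:
  name: Internal Charts                 # optional human readable name used by the UI
  description: Proxied in-house Helm chart repository   # optional description used by the UI
  disabled: false                       # set to true to disable the repository in the cluster
  connectionConfig:
    url: https://charts.example.com     # hypothetical chart repository URL
    ca:
      name: helm-ca-bundle              # hypothetical config map in openshift-config; key ca-bundle.crt holds the PEM-encoded CA bundle
    tlsClientConfig:
      name: helm-tls-client             # hypothetical secret in openshift-config; keys tls.crt and tls.key hold the client certificate and private key

Assuming the manifest is saved as helm-repo.yaml, it could be created and its observed state read back through the endpoints listed above along these lines:

oc apply -f helm-repo.yaml
oc get helmchartrepositories internal-charts -o jsonpath='{.status.conditions}'

The second command returns the conditions array described under .status.conditions, with type, status, reason, message, and lastTransitionTime reported for each condition.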
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/config_apis/helmchartrepository-helm-openshift-io-v1beta1
Administration guide
Administration guide Red Hat OpenShift Dev Spaces 3.14 Administering Red Hat OpenShift Dev Spaces 3.14 Jana Vrbkova [email protected] Red Hat Developer Group Documentation Team [email protected]
[ "dsc", "memoryLimit: 6G memoryRequest: 512M cpuRequest: 1000m cpuLimit: 4000m", "memoryLimit: 128M memoryRequest: 64M cpuRequest: 10m cpuLimit: 1000m", "memoryLimit: 256M memoryRequest: 64M cpuRequest: 50m cpuLimit: 500m", "dsc server:delete", "dsc server:deploy --platform openshift", "dsc server:status", "dsc dashboard:open", "create namespace openshift-devspaces", "bash prepare-restricted-environment.sh --devworkspace_operator_index registry.redhat.io/redhat/redhat-operator-index:v4.16 --devworkspace_operator_version \"v0.28.0\" --prod_operator_index \"registry.redhat.io/redhat/redhat-operator-index:v4.16\" --prod_operator_package_name \"devspaces\" --prod_operator_bundle_name \"devspacesoperator\" --prod_operator_version \"v3.14.0\" --my_registry \" <my_registry> \" 1", "dsc server:deploy --platform=openshift --olm-channel stable --catalog-source-name=devspaces-disconnected-install --catalog-source-namespace=openshift-marketplace --skip-devworkspace-operator --che-operator-cr-patch-yaml=che-operator-cr-patch.yaml", "ghcr.io/ansible/ansible-workspace-env-reference@sha256:03d7f0fe6caaae62ff2266906b63d67ebd9cf6e4a056c7c0a0c1320e6cfbebce registry.access.redhat.com/ubi8/python-39@sha256:301fec66443f80c3cc507ccaf72319052db5a1dc56deb55c8f169011d4bbaacb", ".ansible.com .ansible-galaxy-ng.s3.dualstack.us-east-1.amazonaws.com", "get checluster devspaces -n openshift-devspaces -o jsonpath='{.status.cheURL}'", "spec: <component> : <property_to_configure> : <value>", "dsc server:deploy --che-operator-cr-patch-yaml=che-operator-cr-patch.yaml --platform <chosen_platform>", "oc get configmap che -o jsonpath='{.data. <configured_property> }' -n openshift-devspaces", "oc edit checluster/devspaces -n openshift-devspaces", "oc get configmap che -o jsonpath='{.data. 
<configured_property> }' -n openshift-devspaces", "apiVersion: org.eclipse.che/v2 kind: CheCluster metadata: name: devspaces namespace: openshift-devspaces spec: components: {} devEnvironments: {} networking: {}", "spec: components: devEnvironments: defaultNamespace: template: <workspace_namespace_template_>", "kind: Namespace apiVersion: v1 metadata: name: <project_name> 1 labels: app.kubernetes.io/part-of: che.eclipse.org app.kubernetes.io/component: workspaces-namespace annotations: che.eclipse.org/username: <username>", "apiVersion: v1 kind: Secret metadata: name: custom-settings labels: app.kubernetes.io/part-of: che.eclipse.org app.kubernetes.io/component: devspaces-secret", "apiVersion: v1 kind: ConfigMap metadata: name: custom-settings labels: app.kubernetes.io/part-of: che.eclipse.org app.kubernetes.io/component: devspaces-configmap", "apiVersion: v1 kind: Secret metadata: name: custom-data annotations: che.eclipse.org/mount-as: file che.eclipse.org/mount-path: /data labels: app.kubernetes.io/part-of: che.eclipse.org app.kubernetes.io/component: devspaces-secret", "apiVersion: v1 kind: ConfigMap metadata: name: custom-data annotations: che.eclipse.org/mount-as: file che.eclipse.org/mount-path: /data labels: app.kubernetes.io/part-of: che.eclipse.org app.kubernetes.io/component: devspaces-configmap", "apiVersion: v1 kind: Secret metadata: name: custom-data labels: app.kubernetes.io/part-of: che.eclipse.org app.kubernetes.io/component: devspaces-secret annotations: che.eclipse.org/mount-as: file che.eclipse.org/mount-path: /data data: ca.crt: <base64 encoded data content here>", "apiVersion: v1 kind: ConfigMap metadata: name: custom-data labels: app.kubernetes.io/part-of: che.eclipse.org app.kubernetes.io/component: devspaces-configmap annotations: che.eclipse.org/mount-as: file che.eclipse.org/mount-path: /data data: ca.crt: <data content here>", "apiVersion: v1 kind: Secret metadata: name: custom-settings labels: app.kubernetes.io/part-of: che.eclipse.org app.kubernetes.io/component: devspaces-secret", "apiVersion: v1 kind: ConfigMap metadata: name: custom-settings labels: app.kubernetes.io/part-of: che.eclipse.org app.kubernetes.io/component: devspaces-configmap", "apiVersion: v1 kind: Secret metadata: name: custom-data annotations: che.eclipse.org/mount-as: subpath che.eclipse.org/mount-path: /data labels: app.kubernetes.io/part-of: che.eclipse.org app.kubernetes.io/component: devspaces-secret", "apiVersion: v1 kind: ConfigMap metadata: name: custom-data annotations: che.eclipse.org/mount-as: subpath che.eclipse.org/mount-path: /data labels: app.kubernetes.io/part-of: che.eclipse.org app.kubernetes.io/component: devspaces-configmap", "apiVersion: v1 kind: Secret metadata: name: custom-data labels: app.kubernetes.io/part-of: che.eclipse.org app.kubernetes.io/component: devspaces-secret annotations: che.eclipse.org/mount-as: subpath che.eclipse.org/mount-path: /data data: ca.crt: <base64 encoded data content here>", "apiVersion: v1 kind: ConfigMap metadata: name: custom-data labels: app.kubernetes.io/part-of: che.eclipse.org app.kubernetes.io/component: devspaces-configmap annotations: che.eclipse.org/mount-as: subpath che.eclipse.org/mount-path: /data data: ca.crt: <data content here>", "apiVersion: v1 kind: Secret metadata: name: custom-settings labels: app.kubernetes.io/part-of: che.eclipse.org app.kubernetes.io/component: devspaces-secret", "apiVersion: v1 kind: ConfigMap metadata: name: custom-settings labels: app.kubernetes.io/part-of: che.eclipse.org 
app.kubernetes.io/component: devspaces-configmap", "apiVersion: v1 kind: Secret metadata: name: custom-settings annotations: che.eclipse.org/env-name: FOO_ENV che.eclipse.org/mount-as: env labels: app.kubernetes.io/part-of: che.eclipse.org app.kubernetes.io/component: devspaces-secret data: mykey: myvalue", "apiVersion: v1 kind: ConfigMap metadata: name: custom-settings annotations: che.eclipse.org/env-name: FOO_ENV che.eclipse.org/mount-as: env labels: app.kubernetes.io/part-of: che.eclipse.org app.kubernetes.io/component: devspaces-configmap data: mykey: myvalue", "apiVersion: v1 kind: Secret metadata: name: custom-settings annotations: che.eclipse.org/mount-as: env che.eclipse.org/mykey_env-name: FOO_ENV che.eclipse.org/otherkey_env-name: OTHER_ENV labels: app.kubernetes.io/part-of: che.eclipse.org app.kubernetes.io/component: devspaces-secret stringData: mykey: <data_content_here> otherkey: <data_content_here>", "apiVersion: v1 kind: ConfigMap metadata: name: custom-settings annotations: che.eclipse.org/mount-as: env che.eclipse.org/mykey_env-name: FOO_ENV che.eclipse.org/otherkey_env-name: OTHER_ENV labels: app.kubernetes.io/part-of: che.eclipse.org app.kubernetes.io/component: devspaces-configmap data: mykey: <data content here> otherkey: <data content here>", "apiVersion: org.eclipse.che/v2 kind: CheCluster spec: components: cheServer: extraProperties: CHE_LOGS_APPENDERS_IMPL: json", "apiVersion: autoscaling/v2 kind: HorizontalPodAutoscaler metadata: name: scaler namespace: openshift-devspaces spec: scaleTargetRef: apiVersion: apps/v1 kind: Deployment name: <deployment_name> 1", "apiVersion: autoscaling/v2 kind: HorizontalPodAutoscaler metadata: name: devspaces-scaler namespace: openshift-devspaces spec: scaleTargetRef: apiVersion: apps/v1 kind: Deployment name: devspaces minReplicas: 2 maxReplicas: 5 metrics: - type: Resource resource: name: cpu target: type: Utilization averageUtilization: 75", "spec: devEnvironments: maxNumberOfWorkspacesPerUser: <kept_workspaces_limit> 1", "oc get checluster --all-namespaces -o=jsonpath=\"{.items[*].metadata.namespace}\"", "oc patch checluster/devspaces -n openshift-devspaces \\ 1 --type='merge' -p '{\"spec\":{\"devEnvironments\":{\"maxNumberOfWorkspacesPerUser\": <kept_workspaces_limit> }}}' 2", "spec: devEnvironments: maxNumberOfRunningWorkspacesPerUser: <running_workspaces_limit> 1", "oc get checluster --all-namespaces -o=jsonpath=\"{.items[*].metadata.namespace}\"", "oc patch checluster/devspaces -n openshift-devspaces \\ 1 --type='merge' -p '{\"spec\":{\"devEnvironments\":{\"maxNumberOfRunningWorkspacesPerUser\": <running_workspaces_limit> }}}' 2", "oc create configmap che-git-self-signed-cert --from-file=ca.crt= <path_to_certificate> \\ 1 --from-literal=githost= <git_server_url> -n openshift-devspaces 2", "oc label configmap che-git-self-signed-cert app.kubernetes.io/part-of=che.eclipse.org -n openshift-devspaces", "spec: devEnvironments: trustedCerts: gitTrustedCertsConfigMapName: che-git-self-signed-cert", "[http \"https://10.33.177.118:3000\"] sslCAInfo = /etc/config/che-git-tls-creds/certificate", "spec: devEnvironments: nodeSelector: key: value", "spec: devEnvironments: tolerations: - effect: NoSchedule key: key value: value operator: Equal", "spec: components: [...] 
pluginRegistry: openVSXURL: <your_open_vsx_registy> [...]", "kind: ConfigMap apiVersion: v1 metadata: name: user-configmap namespace: openshift-devspaces labels: app.kubernetes.io/part-of: che.eclipse.org app.kubernetes.io/component: workspaces-config data:", "kind: ConfigMap apiVersion: v1 metadata: name: user-settings-xml namespace: openshift-devspaces labels: app.kubernetes.io/part-of: che.eclipse.org app.kubernetes.io/component: workspaces-config annotations: controller.devfile.io/mount-as: subpath controller.devfile.io/mount-path: /home/user/.m2 data: settings.xml: | <settings xmlns=\"http://maven.apache.org/SETTINGS/1.0.0\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\"http://maven.apache.org/SETTINGS/1.0.0 https://maven.apache.org/xsd/settings-1.0.0.xsd\"> <localRepository>/home/user/.m2/repository</localRepository> <interactiveMode>true</interactiveMode> <offline>false</offline> </settings>", "kind: Secret apiVersion: v1 metadata: name: user-secret namespace: openshift-devspaces labels: app.kubernetes.io/part-of: che.eclipse.org app.kubernetes.io/component: workspaces-config data:", "kind: Secret apiVersion: v1 metadata: name: user-certificates namespace: openshift-devspaces labels: app.kubernetes.io/part-of: che.eclipse.org app.kubernetes.io/component: workspaces-config annotations: controller.devfile.io/mount-as: subpath controller.devfile.io/mount-path: /etc/pki/ca-trust/source/anchors stringData: trusted-certificates.crt: |", "kind: Secret apiVersion: v1 metadata: name: user-env namespace: openshift-devspaces labels: app.kubernetes.io/part-of: che.eclipse.org app.kubernetes.io/component: workspaces-config annotations: controller.devfile.io/mount-as: env stringData: ENV_VAR_1: value_1 ENV_VAR_2: value_2", "apiVersion: v1 kind: PersistentVolumeClaim metadata: name: user-pvc namespace: openshift-devspaces labels: app.kubernetes.io/part-of: che.eclipse.org app.kubernetes.io/component: workspaces-config spec:", "apiVersion: v1 kind: PersistentVolumeClaim metadata: name: user-pvc namespace: openshift-devspaces labels: app.kubernetes.io/part-of: che.eclipse.org app.kubernetes.io/component: workspaces-config controller.devfile.io/mount-to-devworkspace: 'true' annotations: controller.devfile.io/mount-path: /home/user/data controller.devfile.io/read-only: 'true' spec: accessModes: - ReadWriteOnce resources: requests: storage: 5Gi volumeMode: Filesystem", "(memory limit) * (number of images) * (number of nodes in the cluster)", "git clone https://github.com/che-incubator/kubernetes-image-puller cd kubernetes-image-puller/deploy/openshift", "oc new-project <k8s-image-puller>", "oc process -f serviceaccount.yaml | oc apply -f - oc process -f configmap.yaml | oc apply -f - oc process -f app.yaml | oc apply -f -", "oc get deployment,daemonset,pod --namespace <k8s-image-puller>", "oc get configmap <kubernetes-image-puller> --output yaml", "patch checluster/devspaces --namespace openshift-devspaces --type='merge' --patch '{ \"spec\": { \"components\": { \"imagePuller\": { \"enable\": true } } } }'", "patch checluster/devspaces --namespace openshift-devspaces --type='merge' --patch '{ \"spec\": { \"components\": { \"imagePuller\": { \"enable\": true, \"spec\": { \"images\": \" NAME-1 = IMAGE-1 ; NAME-2 = IMAGE-2 \" 1 } } } } }'", "create namespace k8s-image-puller", "apply -f - <<EOF apiVersion: che.eclipse.org/v1alpha1 kind: KubernetesImagePuller metadata: name: k8s-image-puller-images namespace: k8s-image-puller spec: images: 
\"__NAME-1__=__IMAGE-1__;__NAME-2__=__IMAGE-2__\" 1 EOF", "spec: devEnvironments: defaultPlugins: - editor: eclipse/che-theia/next 1 plugins: 2 - 'https://your-web-server/plugin.yaml'", "package main import ( \"io/ioutil\" \"net/http\" \"go.uber.org/zap\" ) var logger *zap.SugaredLogger func event(w http.ResponseWriter, req *http.Request) { switch req.Method { case \"GET\": logger.Info(\"GET /event\") case \"POST\": logger.Info(\"POST /event\") } body, err := req.GetBody() if err != nil { logger.With(\"err\", err).Info(\"error getting body\") return } responseBody, err := ioutil.ReadAll(body) if err != nil { logger.With(\"error\", err).Info(\"error reading response body\") return } logger.With(\"body\", string(responseBody)).Info(\"got event\") } func activity(w http.ResponseWriter, req *http.Request) { switch req.Method { case \"GET\": logger.Info(\"GET /activity, doing nothing\") case \"POST\": logger.Info(\"POST /activity\") body, err := req.GetBody() if err != nil { logger.With(\"error\", err).Info(\"error getting body\") return } responseBody, err := ioutil.ReadAll(body) if err != nil { logger.With(\"error\", err).Info(\"error reading response body\") return } logger.With(\"body\", string(responseBody)).Info(\"got activity\") } } func main() { log, _ := zap.NewProduction() logger = log.Sugar() http.HandleFunc(\"/event\", event) http.HandleFunc(\"/activity\", activity) logger.Info(\"Added Handlers\") logger.Info(\"Starting to serve\") http.ListenAndServe(\":8080\", nil) }", "git clone https://github.com/che-incubator/telemetry-server-example cd telemetry-server-example podman build -t registry/organization/telemetry-server-example:latest . podman push registry/organization/telemetry-server-example:latest", "kubectl apply -f manifest_with_[ingress|route].yaml -n openshift-devspaces", "mvn io.quarkus:quarkus-maven-plugin:2.7.1.Final:create -DprojectGroupId=mygroup -DprojectArtifactId=devworkspace-telemetry-example-plugin -DprojectVersion=1.0.0-SNAPSHOT", "<!-- Required --> <dependency> <groupId>org.eclipse.che.incubator.workspace-telemetry</groupId> <artifactId>backend-base</artifactId> <version>LATEST VERSION FROM PREVIOUS STEP</version> </dependency> <!-- Used to make http requests to the telemetry server --> <dependency> <groupId>io.quarkus</groupId> <artifactId>quarkus-rest-client</artifactId> </dependency> <dependency> <groupId>io.quarkus</groupId> <artifactId>quarkus-rest-client-jackson</artifactId> </dependency>", "<settings xmlns=\"http://maven.apache.org/SETTINGS/1.0.0\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\"http://maven.apache.org/SETTINGS/1.0.0 http://maven.apache.org/xsd/settings-1.0.0.xsd\"> <servers> <server> <id>che-incubator</id> <username>YOUR GITHUB USERNAME</username> <password>YOUR GITHUB TOKEN</password> </server> </servers> <profiles> <profile> <id>github</id> <activation> <activeByDefault>true</activeByDefault> </activation> <repositories> <repository> <id>central</id> <url>https://repo1.maven.org/maven2</url> <releases><enabled>true</enabled></releases> <snapshots><enabled>false</enabled></snapshots> </repository> <repository> <id>che-incubator</id> <url>https://maven.pkg.github.com/che-incubator/che-workspace-telemetry-client</url> </repository> </repositories> </profile> </profiles> </settings>", "package org.my.group; import java.util.Optional; import javax.enterprise.context.Dependent; import javax.enterprise.inject.Alternative; import org.eclipse.che.incubator.workspace.telemetry.base.BaseConfiguration; import 
org.eclipse.microprofile.config.inject.ConfigProperty; @Dependent @Alternative public class MainConfiguration extends BaseConfiguration { @ConfigProperty(name = \"welcome.message\") 1 Optional<String> welcomeMessage; 2 }", "package org.my.group; import java.util.HashMap; import java.util.Map; import javax.enterprise.context.Dependent; import javax.enterprise.inject.Alternative; import javax.inject.Inject; import org.eclipse.che.incubator.workspace.telemetry.base.AbstractAnalyticsManager; import org.eclipse.che.incubator.workspace.telemetry.base.AnalyticsEvent; import org.eclipse.che.incubator.workspace.telemetry.finder.DevWorkspaceFinder; import org.eclipse.che.incubator.workspace.telemetry.finder.UsernameFinder; import org.eclipse.microprofile.rest.client.inject.RestClient; import org.slf4j.Logger; import static org.slf4j.LoggerFactory.getLogger; @Dependent @Alternative public class AnalyticsManager extends AbstractAnalyticsManager { private static final Logger LOG = getLogger(AbstractAnalyticsManager.class); public AnalyticsManager(MainConfiguration mainConfiguration, DevWorkspaceFinder devworkspaceFinder, UsernameFinder usernameFinder) { super(mainConfiguration, devworkspaceFinder, usernameFinder); mainConfiguration.welcomeMessage.ifPresentOrElse( 1 (str) -> LOG.info(\"The welcome message is: {}\", str), () -> LOG.info(\"No welcome message provided\") ); } @Override public boolean isEnabled() { return true; } @Override public void destroy() {} @Override public void onEvent(AnalyticsEvent event, String ownerId, String ip, String userAgent, String resolution, Map<String, Object> properties) { LOG.info(\"The received event is: {}\", event); 2 } @Override public void increaseDuration(AnalyticsEvent event, Map<String, Object> properties) { } @Override public void onActivity() {} }", "quarkus.arc.selected-alternatives=MainConfiguration,AnalyticsManager", "spec: template: attributes: workspaceEnv: - name: DEVWORKSPACE_TELEMETRY_BACKEND_PORT value: '4167'", "mvn --settings=settings.xml quarkus:dev -Dquarkus.http.port=USD{DEVWORKSPACE_TELEMETRY_BACKEND_PORT}", "INFO [org.ecl.che.inc.AnalyticsManager] (Quarkus Main Thread) No welcome message provided INFO [io.quarkus] (Quarkus Main Thread) devworkspace-telemetry-example-plugin 1.0.0-SNAPSHOT on JVM (powered by Quarkus 2.7.2.Final) started in 0.323s. Listening on: http://localhost:4167 INFO [io.quarkus] (Quarkus Main Thread) Profile dev activated. Live Coding activated. 
INFO [io.quarkus] (Quarkus Main Thread) Installed features: [cdi, kubernetes-client, rest-client, rest-client-jackson, resteasy, resteasy-jsonb, smallrye-context-propagation, smallrye-openapi, swagger-ui, vertx]", "INFO [io.qua.dep.dev.RuntimeUpdatesProcessor] (Aesh InputStream Reader) Live reload disabled INFO [org.ecl.che.inc.AnalyticsManager] (executor-thread-2) The received event is: Edit Workspace File in Che", "@Override public boolean isEnabled() { return true; }", "package org.my.group; import java.util.Map; import javax.ws.rs.Consumes; import javax.ws.rs.POST; import javax.ws.rs.Path; import javax.ws.rs.core.MediaType; import javax.ws.rs.core.Response; import org.eclipse.microprofile.rest.client.inject.RegisterRestClient; @RegisterRestClient public interface TelemetryService { @POST @Path(\"/event\") 1 @Consumes(MediaType.APPLICATION_JSON) Response sendEvent(Map<String, Object> payload); }", "org.my.group.TelemetryService/mp-rest/url=http://little-telemetry-server-che.apps-crc.testing", "@Dependent @Alternative public class AnalyticsManager extends AbstractAnalyticsManager { @Inject @RestClient TelemetryService telemetryService; @Override public void onEvent(AnalyticsEvent event, String ownerId, String ip, String userAgent, String resolution, Map<String, Object> properties) { Map<String, Object> payload = new HashMap<String, Object>(properties); payload.put(\"event\", event); telemetryService.sendEvent(payload); }", "@Override public void increaseDuration(AnalyticsEvent event, Map<String, Object> properties) {}", "public class AnalyticsManager extends AbstractAnalyticsManager { private long inactiveTimeLimit = 60000 * 3; @Override public void onActivity() { if (System.currentTimeMillis() - lastEventTime >= inactiveTimeLimit) { onEvent(WORKSPACE_INACTIVE, lastOwnerId, lastIp, lastUserAgent, lastResolution, commonProperties); } }", "@Override public void destroy() { onEvent(WORKSPACE_STOPPED, lastOwnerId, lastIp, lastUserAgent, lastResolution, commonProperties); }", "FROM registry.access.redhat.com/ubi8/openjdk-11:1.11 ENV LANG='en_US.UTF-8' LANGUAGE='en_US:en' COPY --chown=185 target/quarkus-app/lib/ /deployments/lib/ COPY --chown=185 target/quarkus-app/*.jar /deployments/ COPY --chown=185 target/quarkus-app/app/ /deployments/app/ COPY --chown=185 target/quarkus-app/quarkus/ /deployments/quarkus/ EXPOSE 8080 USER 185 ENTRYPOINT [\"java\", \"-Dquarkus.http.host=0.0.0.0\", \"-Djava.util.logging.manager=org.jboss.logmanager.LogManager\", \"-Dquarkus.http.port=USD{DEVWORKSPACE_TELEMETRY_BACKEND_PORT}\", \"-jar\", \"/deployments/quarkus-run.jar\"]", "mvn package && build -f src/main/docker/Dockerfile.jvm -t image:tag .", "FROM registry.access.redhat.com/ubi8/ubi-minimal:8.5 WORKDIR /work/ RUN chown 1001 /work && chmod \"g+rwX\" /work && chown 1001:root /work COPY --chown=1001:root target/*-runner /work/application EXPOSE 8080 USER 1001 CMD [\"./application\", \"-Dquarkus.http.host=0.0.0.0\", \"-Dquarkus.http.port=USDDEVWORKSPACE_TELEMETRY_BACKEND_PORT}\"]", "mvn package -Pnative -Dquarkus.native.container-build=true && build -f src/main/docker/Dockerfile.native -t image:tag .", "schemaVersion: 2.1.0 metadata: name: devworkspace-telemetry-backend-plugin version: 0.0.1 description: A Demo telemetry backend displayName: Devworkspace Telemetry Backend components: - name: devworkspace-telemetry-backend-plugin attributes: workspaceEnv: - name: DEVWORKSPACE_TELEMETRY_BACKEND_PORT value: '4167' container: image: YOUR IMAGE 1 env: - name: WELCOME_MESSAGE 2 value: 'hello world!'", "oc create 
configmap --from-file=plugin.yaml -n openshift-devspaces telemetry-plugin-yaml", "kind: Deployment apiVersion: apps/v1 metadata: name: apache spec: replicas: 1 selector: matchLabels: app: apache template: metadata: labels: app: apache spec: volumes: - name: plugin-yaml configMap: name: telemetry-plugin-yaml defaultMode: 420 containers: - name: apache image: 'registry.redhat.io/rhscl/httpd-24-rhel7:latest' ports: - containerPort: 8080 protocol: TCP resources: {} volumeMounts: - name: plugin-yaml mountPath: /var/www/html strategy: type: RollingUpdate rollingUpdate: maxUnavailable: 25% maxSurge: 25% revisionHistoryLimit: 10 progressDeadlineSeconds: 600 --- kind: Service apiVersion: v1 metadata: name: apache spec: ports: - protocol: TCP port: 8080 targetPort: 8080 selector: app: apache type: ClusterIP --- kind: Route apiVersion: route.openshift.io/v1 metadata: name: apache spec: host: apache-che.apps-crc.testing to: kind: Service name: apache weight: 100 port: targetPort: 8080 wildcardPolicy: None", "oc apply -f manifest.yaml", "curl apache-che.apps-crc.testing/plugin.yaml", "components: - name: telemetry-plugin plugin: uri: http://apache-che.apps-crc.testing/plugin.yaml", "spec: devEnvironments: defaultPlugins: - editor: eclipse/che-theia/next 1 plugins: 2 - 'http://apache-che.apps-crc.testing/plugin.yaml'", "spec: components: cheServer: extraProperties: CHE_LOGGER_CONFIG: \" <key1=value1,key2=value2> \" 1", "spec: components: cheServer: extraProperties: CHE_LOGGER_CONFIG: \"org.eclipse.che.api.workspace.server.WorkspaceManager=DEBUG\"", "spec: components: cheServer: extraProperties: CHE_LOGGER_CONFIG: \"che.infra.request-logging=TRACE\"", "dsc server:logs -d /home/user/che-logs/", "Red Hat OpenShift Dev Spaces logs will be available in '/tmp/chectl-logs/1648575098344'", "dsc server:logs -n my-namespace", "apiVersion: monitoring.coreos.com/v1 kind: ServiceMonitor metadata: name: devworkspace-controller namespace: openshift-devspaces 1 spec: endpoints: - bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token interval: 10s 2 port: metrics scheme: https tlsConfig: insecureSkipVerify: true namespaceSelector: matchNames: - openshift-operators selector: matchLabels: app.kubernetes.io/name: devworkspace-controller", "oc label namespace openshift-devspaces openshift.io/cluster-monitoring=true", "oc get pods -l app.kubernetes.io/name=prometheus -n openshift-monitoring -o=jsonpath='{.items[*].metadata.name}'", "oc logs --tail=20 <prometheus_pod_name> -c prometheus -n openshift-monitoring", "oc create configmap grafana-dashboard-dwo --from-literal=dwo-dashboard.json=\"USD(curl https://raw.githubusercontent.com/devfile/devworkspace-operator/main/docs/grafana/openshift-console-dashboard.json)\" -n openshift-config-managed", "oc label configmap grafana-dashboard-dwo console.openshift.io/dashboard=true -n openshift-config-managed", "spec: components: metrics: enable: <boolean> 1", "apiVersion: monitoring.coreos.com/v1 kind: ServiceMonitor metadata: name: che-host namespace: openshift-devspaces 1 spec: endpoints: - interval: 10s 2 port: metrics scheme: http namespaceSelector: matchNames: - openshift-devspaces 3 selector: matchLabels: app.kubernetes.io/name: devspaces", "kind: Role apiVersion: rbac.authorization.k8s.io/v1 metadata: name: prometheus-k8s namespace: openshift-devspaces 1 rules: - verbs: - get - list - watch apiGroups: - '' resources: - services - endpoints - pods", "kind: RoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: 
view-devspaces-openshift-monitoring-prometheus-k8s namespace: openshift-devspaces 1 subjects: - kind: ServiceAccount name: prometheus-k8s namespace: openshift-monitoring roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: prometheus-k8s", "oc label namespace openshift-devspaces openshift.io/cluster-monitoring=true", "oc get pods -l app.kubernetes.io/name=prometheus -n openshift-monitoring -o=jsonpath='{.items[*].metadata.name}'", "oc logs --tail=20 <prometheus_pod_name> -c prometheus -n openshift-monitoring", "oc create configmap grafana-dashboard-devspaces-server --from-literal=devspaces-server-dashboard.json=\"USD(curl https://raw.githubusercontent.com/eclipse-che/che-server/main/docs/grafana/openshift-console-dashboard.json)\" -n openshift-config-managed", "oc label configmap grafana-dashboard-devspaces-server console.openshift.io/dashboard=true -n openshift-config-managed", "apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-openshift-devspaces spec: ingress: - from: - namespaceSelector: matchLabels: kubernetes.io/metadata.name: openshift-devspaces 1 podSelector: {} 2 policyTypes: - Ingress", "apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-openshift-apiserver namespace: openshift-devspaces 1 spec: podSelector: matchLabels: app.kubernetes.io/name: devworkspace-webhook-server 2 ingress: - from: - podSelector: {} namespaceSelector: matchLabels: kubernetes.io/metadata.name: openshift-apiserver policyTypes: - Ingress", "apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-workspaces-namespaces namespace: openshift-devspaces 1 spec: podSelector: matchLabels: app.kubernetes.io/component: che-gateway 2 ingress: - from: - podSelector: {} namespaceSelector: matchLabels: app.kubernetes.io/component: workspaces-namespace policyTypes: - Ingress", "oc create project openshift-devspaces", "oc create secret TLS <tls_secret_name> \\ 1 --key <key_file> \\ 2 --cert <cert_file> \\ 3 -n openshift-devspaces", "oc label secret <tls_secret_name> \\ 1 app.kubernetes.io/part-of=che.eclipse.org -n openshift-devspaces", "spec: networking: hostname: <hostname> 1 tlsSecretName: <secret> 2", "cat ca-cert-for-devspaces-*.pem | tr -d '\\r' > custom-ca-certificates.pem", "oc create configmap custom-ca-certificates --from-file=custom-ca-certificates.pem --namespace=openshift-devspaces", "oc label configmap custom-ca-certificates app.kubernetes.io/component=ca-bundle app.kubernetes.io/part-of=che.eclipse.org --namespace=openshift-devspaces", "oc get configmap --namespace=openshift-devspaces --output='jsonpath={.items[0:].data.custom-ca-certificates\\.pem}' --selector=app.kubernetes.io/component=ca-bundle,app.kubernetes.io/part-of=che.eclipse.org", "oc get pod --selector=app.kubernetes.io/component=devspaces --output='jsonpath={.items[0].spec.volumes[0:].configMap.name}' --namespace=openshift-devspaces | grep ca-certs-merged", "oc exec -t deploy/devspaces --namespace=openshift-devspaces -- cat /public-certs/custom-ca-certificates.pem", "oc logs deploy/devspaces --namespace=openshift-devspaces | grep custom-ca-certificates.pem", "for certificate in ca-cert*.pem ; do openssl x509 -in USDcertificate -digest -sha256 -fingerprint -noout | cut -d= -f2; done", "oc exec -t deploy/devspaces --namespace=openshift-devspaces -- keytool -list -keystore /home/user/cacerts | grep --after-context=1 custom-ca-certificates.pem", "oc get configmap che-trusted-ca-certs --namespace= <workspace_namespace> 
--output='jsonpath={.data.custom-ca-certificates\\.custom-ca-certificates\\.pem}'", "oc get pod --namespace= <workspace_namespace> --selector='controller.devfile.io/devworkspace_name= <workspace_name> ' --output='jsonpath={.items[0:].spec.volumes[0:].configMap.name}' | grep che-trusted-ca-certs", "oc get pod --namespace= <workspace_namespace> --selector='controller.devfile.io/devworkspace_name= <workspace_name> ' --output='jsonpath={.items[0:].spec.containers[0:]}' | jq 'select (.volumeMounts[].name == \"che-trusted-ca-certs\") | .name'", "oc get pod --namespace= <workspace_namespace> --selector='controller.devfile.io/devworkspace_name= <workspace_name> ' --output='jsonpath={.items[0:].metadata.name}' \\", "oc exec <workspace_pod_name> --namespace= <workspace_namespace> -- cat /public-certs/custom-ca-certificates.custom-ca-certificates.pem", "spec: networking: labels: <labels> 1 domain: <domain> 2 annotations: <annotations> 3", "spec: devEnvironments: storage: perUserStrategyPvcConfig: claimSize: <claim_size> 1 storageClass: <storage_class_name> 2 perWorkspaceStrategyPvcConfig: claimSize: <claim_size> 3 storageClass: <storage_class_name> 4 pvcStrategy: <pvc_strategy> 5", "spec: devEnvironments: storage: pvc: pvcStrategy: 'per-user' 1", "per-user: 10Gi", "per-workspace: 5Gi", "spec: devEnvironments: storage: pvc: pvcStrategy: ' <strategy_name> ' 1 perUserStrategyPvcConfig: 2 claimSize: <resource_quantity> 3 perWorkspaceStrategyPvcConfig: 4 claimSize: <resource_quantity> 5", "cat > my-samples.json <<EOF [ { \"displayName\": \" <display_name> \", 1 \"description\": \" <description> \", 2 \"tags\": <tags> , 3 \"url\": \" <url> \", 4 \"icon\": { \"base64data\": \" <base64data> \", 5 \"mediatype\": \" <mediatype> \" 6 } } ] EOF", "create configmap getting-started-samples --from-file=my-samples.json -n openshift-devspaces", "label configmap getting-started-samples app.kubernetes.io/part-of=che.eclipse.org app.kubernetes.io/component=getting-started-samples -n openshift-devspaces", "apply -f - <<EOF apiVersion: v1 kind: Secret metadata: name: devspaces-dashboard-customization namespace: openshift-devspaces annotations: che.eclipse.org/mount-as: subpath che.eclipse.org/mount-path: /public/dashboard/assets/branding labels: app.kubernetes.io/component: devspaces-dashboard-secret app.kubernetes.io/part-of: che.eclipse.org data: loader.svg: <Base64_encoded_content_of_the_image> 1 type: Opaque EOF", "kind: Secret apiVersion: v1 metadata: name: github-oauth-config namespace: openshift-devspaces 1 labels: app.kubernetes.io/part-of: che.eclipse.org app.kubernetes.io/component: oauth-scm-configuration annotations: che.eclipse.org/oauth-scm-server: github che.eclipse.org/scm-server-endpoint: <github_server_url> 2 che.eclipse.org/scm-github-disable-subdomain-isolation: 'false' 3 type: Opaque stringData: id: <GitHub_OAuth_Client_ID> 4 secret: <GitHub_OAuth_Client_Secret> 5", "oc apply -f - <<EOF <Secret_prepared_in_the_previous_step> EOF", "kind: Secret apiVersion: v1 metadata: name: gitlab-oauth-config namespace: openshift-devspaces 1 labels: app.kubernetes.io/part-of: che.eclipse.org app.kubernetes.io/component: oauth-scm-configuration annotations: che.eclipse.org/oauth-scm-server: gitlab che.eclipse.org/scm-server-endpoint: <gitlab_server_url> 2 type: Opaque stringData: id: <GitLab_Application_ID> 3 secret: <GitLab_Client_Secret> 4", "oc apply -f - <<EOF <Secret_prepared_in_the_previous_step> EOF", "kind: Secret apiVersion: v1 metadata: name: bitbucket-oauth-config namespace: openshift-devspaces 1 labels: 
app.kubernetes.io/part-of: che.eclipse.org app.kubernetes.io/component: oauth-scm-configuration annotations: che.eclipse.org/oauth-scm-server: bitbucket che.eclipse.org/scm-server-endpoint: <bitbucket_server_url> 2 type: Opaque stringData: id: <Bitbucket_Client_ID> 3 secret: <Bitbucket_Client_Secret> 4", "oc apply -f - <<EOF <Secret_prepared_in_the_previous_step> EOF", "kind: Secret apiVersion: v1 metadata: name: bitbucket-oauth-config namespace: openshift-devspaces 1 labels: app.kubernetes.io/part-of: che.eclipse.org app.kubernetes.io/component: oauth-scm-configuration annotations: che.eclipse.org/oauth-scm-server: bitbucket type: Opaque stringData: id: <Bitbucket_Oauth_Consumer_Key> 2 secret: <Bitbucket_Oauth_Consumer_Secret> 3", "oc apply -f - <<EOF <Secret_prepared_in_the_previous_step> EOF", "openssl genrsa -out private.pem 2048 && openssl pkcs8 -topk8 -inform pem -outform pem -nocrypt -in private.pem -out privatepkcs8.pem && cat privatepkcs8.pem | sed 's/-----BEGIN PRIVATE KEY-----//g' | sed 's/-----END PRIVATE KEY-----//g' | tr -d '\\n' > privatepkcs8-stripped.pem && openssl rsa -in private.pem -pubout > public.pub && cat public.pub | sed 's/-----BEGIN PUBLIC KEY-----//g' | sed 's/-----END PUBLIC KEY-----//g' | tr -d '\\n' > public-stripped.pub && openssl rand -base64 24 > bitbucket-consumer-key && openssl rand -base64 24 > bitbucket-shared-secret", "kind: Secret apiVersion: v1 metadata: name: bitbucket-oauth-config namespace: openshift-devspaces 1 labels: app.kubernetes.io/component: oauth-scm-configuration app.kubernetes.io/part-of: che.eclipse.org annotations: che.eclipse.org/oauth-scm-server: bitbucket che.eclipse.org/scm-server-endpoint: <bitbucket_server_url> 2 type: Opaque stringData: private.key: <Content_of_privatepkcs8-stripped.pem> 3 consumer.key: <Content_of_bitbucket-consumer-key> 4 shared_secret: <Content_of_bitbucket-shared-secret> 5", "oc apply -f - <<EOF <Secret_prepared_in_the_previous_step> EOF", "kind: Secret apiVersion: v1 metadata: name: azure-devops-oauth-config namespace: openshift-devspaces 1 labels: app.kubernetes.io/part-of: che.eclipse.org app.kubernetes.io/component: oauth-scm-configuration annotations: che.eclipse.org/oauth-scm-server: azure-devops type: Opaque stringData: id: <Microsoft_Azure_DevOps_Services_OAuth_App_ID> 2 secret: <Microsoft_Azure_DevOps_Services_OAuth_Client_Secret> 3", "oc apply -f - <<EOF <Secret_prepared_in_the_previous_step> EOF", "USER_ROLES= <name> 1", "OPERATOR_NAMESPACE=USD(oc get pods -l app.kubernetes.io/component=devspaces-operator -o jsonpath={\".items[0].metadata.namespace\"} --all-namespaces)", "kubectl apply -f - <<EOF kind: ClusterRole apiVersion: rbac.authorization.k8s.io/v1 metadata: name: USD{USER_ROLES} labels: app.kubernetes.io/part-of: che.eclipse.org rules: - verbs: - <verbs> 1 apiGroups: - <apiGroups> 2 resources: - <resources> 3 EOF", "kubectl apply -f - <<EOF kind: ClusterRoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: USD{USER_ROLES} labels: app.kubernetes.io/part-of: che.eclipse.org subjects: - kind: ServiceAccount name: devspaces-operator namespace: USD{OPERATOR_NAMESPACE} roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: USD{USER_ROLES} EOF", "kubectl patch checluster devspaces --patch '{\"spec\": {\"components\": {\"cheServer\": {\"clusterRoles\": [\"'USD{USER_ROLES}'\"]}}}}' --type=merge -n openshift-devspaces", "kubectl patch checluster devspaces --patch '{\"spec\": {\"devEnvironments\": {\"user\": {\"clusterRoles\": [\"'USD{USER_ROLES}'\"]}}}}' --type=merge -n 
openshift-devspaces", "spec: networking: auth: advancedAuthorization: allowUsers: - <allow_users> 1 allowGroups: - <allow_groups> 2 denyUsers: - <deny_users> 3 denyGroups: - <deny_groups> 4", "oc get users", "oc delete user <username>", "NODE_ROLE=master", "NODE_ROLE=worker", "VERSION=4.12.0", "cat << EOF | butane | oc apply -f - variant: openshift version: USD{VERSION} metadata: labels: machineconfiguration.openshift.io/role: USD{NODE_ROLE} name: 99-podman-dev-fuse-USD{NODE_ROLE} storage: files: - path: /etc/crio/crio.conf.d/99-podman-fuse 1 mode: 0644 overwrite: true contents: 2 inline: | [crio.runtime.workloads.podman-fuse] 3 activation_annotation = \"io.openshift.podman-fuse\" 4 allowed_annotations = [ \"io.kubernetes.cri-o.Devices\" 5 ] [crio.runtime] allowed_devices = [\"/dev/fuse\"] 6 EOF", "oc get nodes", "NAME STATUS ROLES AGE VERSION ip-10-0-136-161.ec2.internal Ready worker 28m v1.27.9 ip-10-0-136-243.ec2.internal Ready master 34m v1.27.9 ip-10-0-141-105.ec2.internal Ready,SchedulingDisabled worker 28m v1.27.9 ip-10-0-142-249.ec2.internal Ready master 34m v1.27.9 ip-10-0-153-11.ec2.internal Ready worker 28m v1.27.9 ip-10-0-153-150.ec2.internal Ready master 34m v1.27.9", "io.openshift.podman-fuse: '' io.kubernetes.cri-o.Devices: /dev/fuse", "oc get nodes", "oc debug node/ <nodename>", "sh-4.4# stat /host/etc/crio/crio.conf.d/99-podman-fuse", "spec: components: pluginRegistry: openVSXURL: \" <url_of_an_open_vsx_registry_instance> \" 1", "https://www.open-vsx.org/extension/ <publisher> / <extension>", "git checkout devspaces-USDPRODUCT_VERSION-rhel-8", "{ \"id\": \" <publisher> . <extension> \" }", "{ \"id\": \" <publisher> . <extension> \", \"download\": \" <url_to_download_vsix_file> \", \"version\": \" <extension_version> \" }", "./build.sh -o <username> -r quay.io -t custom", "podman push quay.io/ <username/plugin_registry:custom>", "spec: components: pluginRegistry: deployment: containers: - image: quay.io/ <username/plugin_registry:custom> openVSXURL: ''", "\"trustedExtensionAuthAccess\": [ \"<publisher1>.<extension1>\", \"<publisher2>.<extension2>\" ]", "env: - name: VSCODE_TRUSTED_EXTENSIONS value: \"<publisher1>.<extension1>,<publisher2>.<extension2>\"", "kind: ConfigMap apiVersion: v1 metadata: name: trusted-extensions labels: controller.devfile.io/mount-to-devworkspace: 'true' controller.devfile.io/watch-configmap: 'true' annotations: controller.devfile.io/mount-as: env data: VSCODE_TRUSTED_EXTENSIONS: '<publisher1>.<extension1>,<publisher2>.<extension2>'", "{ \"folders\": [ { \"name\": \"project-1\", \"path\": \"/projects/project-1\" }, { \"name\": \"project-2\", \"path\": \"/projects/project-2\" } ] }", "{ \"folders\": [ { \"name\": \"project-name\", \"path\": \".\" } ] }", "env: - name: VSCODE_DEFAULT_WORKSPACE value: \"/projects/project-name/workspace-file\"", "env: - name: VSCODE_DEFAULT_WORKSPACE value: \"/\"", "dsc server:update -n openshift-devspaces", "bash prepare-restricted-environment.sh --devworkspace_operator_index registry.redhat.io/redhat/redhat-operator-index:v4.16 --devworkspace_operator_version \"v0.28.0\" --prod_operator_index \"registry.redhat.io/redhat/redhat-operator-index:v4.16\" --prod_operator_package_name \"devspaces\" --prod_operator_bundle_name \"devspacesoperator\" --prod_operator_version \"v3.14.0\" --my_registry \" <my_registry> \" 1", "dsc server:update --che-operator-image=\"USDTAG\" -n openshift-devspaces --k8spodwaittimeout=1800000", "spec: conversion: strategy: None status:", "oc delete sub devworkspace-operator -n openshift-operators 
1", "oc get csv | grep devworkspace", "oc delete csv <devworkspace_operator.vX.Y.Z> -n openshift-operators 1", "cat <<EOF | oc apply -f - apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: devworkspace-operator namespace: openshift-operators spec: channel: fast name: devworkspace-operator source: redhat-operators sourceNamespace: openshift-marketplace installPlanApproval: Automatic 1 startingCSV: devworkspace-operator.v0.28.0 EOF", "dsc server:delete" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_dev_spaces/3.14/html-single/administration_guide/installing-devspaces
Chapter 8. Installing a cluster on GCP into a shared VPC
Chapter 8. Installing a cluster on GCP into a shared VPC In OpenShift Container Platform version 4.15, you can install a cluster into a shared Virtual Private Cloud (VPC) on Google Cloud Platform (GCP). In this installation method, the cluster is configured to use a VPC from a different GCP project. A shared VPC enables an organization to connect resources from multiple projects to a common VPC network. You can communicate within the organization securely and efficiently by using internal IP addresses from that network. For more information about shared VPC, see Shared VPC overview in the GCP documentation . The installation program provisions the rest of the required infrastructure, which you can further customize. To customize the installation, you modify parameters in the install-config.yaml file before you install the cluster. 8.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . If you use a firewall, you configured it to allow the sites that your cluster requires access to. You have a GCP host project which contains a shared VPC network. You configured a GCP project to host the cluster. This project, known as the service project, must be attached to the host project. For more information, see Attaching service projects in the GCP documentation . You have a GCP service account that has the required GCP permissions in both the host and service projects. 8.2. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.15, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 8.3. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. 
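For orientation, the following minimal sketch shows how the key pair is typically exercised once the cluster exists; it is not part of the documented procedure, and the key path and <node_address> are placeholder assumptions rather than values from this chapter:
ssh-add -l                                                # list the identities currently loaded in the ssh-agent
ssh -i ~/.ssh/id_ed25519 core@<node_address> hostname     # confirm key-based login as the core user on a node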
Important Do not skip this procedure in production environments, where disaster recovery and debugging are required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 8.4. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a computer that runs Linux or macOS, with at least 1.2 GB of local disk space. Procedure Go to the Cluster Type page on the Red Hat Hybrid Cloud Console. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Tip You can also download the binaries for a specific OpenShift Container Platform release . Select your infrastructure provider from the Run it yourself section of the page. Select your host operating system and architecture from the dropdown menus under OpenShift Installer and click Download Installer . Place the downloaded file in the directory where you want to store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both of the files are required to delete the cluster. Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation.
To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. Tip Alternatively, you can retrieve the installation program from the Red Hat Customer Portal , where you can specify a version of the installation program to download. However, you must have an active subscription to access this page. 8.5. Creating the installation files for GCP To install OpenShift Container Platform on Google Cloud Platform (GCP) into a shared VPC, you must generate the install-config.yaml file and modify it so that the cluster uses the correct VPC networks, DNS zones, and project names. 8.5.1. Manually creating the installation configuration file Installing the cluster requires that you manually create the installation configuration file. Prerequisites You have an SSH public key on your local machine to provide to the installation program. The key will be used for SSH authentication onto your cluster nodes for debugging and disaster recovery. You have obtained the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Create an installation directory to store your required installation assets in: USD mkdir <installation_directory> Important You must create a directory. Some installation assets, like bootstrap X.509 certificates have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Customize the sample install-config.yaml file template that is provided and save it in the <installation_directory> . Note You must name this configuration file install-config.yaml . Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the step of the installation process. You must back it up now. Additional resources Installation configuration parameters for GCP 8.5.2. Enabling Shielded VMs You can use Shielded VMs when installing your cluster. Shielded VMs have extra security features including secure boot, firmware and integrity monitoring, and rootkit detection. For more information, see Google's documentation on Shielded VMs . Note Shielded VMs are currently not supported on clusters with 64-bit ARM infrastructures. Procedure Use a text editor to edit the install-config.yaml file prior to deploying your cluster and add one of the following stanzas: To use shielded VMs for only control plane machines: controlPlane: platform: gcp: secureBoot: Enabled To use shielded VMs for only compute machines: compute: - platform: gcp: secureBoot: Enabled To use shielded VMs for all machines: platform: gcp: defaultMachinePlatform: secureBoot: Enabled 8.5.3. Enabling Confidential VMs You can use Confidential VMs when installing your cluster. 
Confidential VMs encrypt data while it is being processed. For more information, see Google's documentation on Confidential Computing . You can enable Confidential VMs and Shielded VMs at the same time, although they are not dependent on each other. Note Confidential VMs are currently not supported on 64-bit ARM architectures. Procedure Use a text editor to edit the install-config.yaml file prior to deploying your cluster and add one of the following stanzas: To use confidential VMs for only control plane machines: controlPlane: platform: gcp: confidentialCompute: Enabled 1 type: n2d-standard-8 2 onHostMaintenance: Terminate 3 1 Enable confidential VMs. 2 Specify a machine type that supports Confidential VMs. Confidential VMs require the N2D or C2D series of machine types. For more information on supported machine types, see Supported operating systems and machine types . 3 Specify the behavior of the VM during a host maintenance event, such as a hardware or software update. For a machine that uses Confidential VM, this value must be set to Terminate , which stops the VM. Confidential VMs do not support live VM migration. To use confidential VMs for only compute machines: compute: - platform: gcp: confidentialCompute: Enabled type: n2d-standard-8 onHostMaintenance: Terminate To use confidential VMs for all machines: platform: gcp: defaultMachinePlatform: confidentialCompute: Enabled type: n2d-standard-8 onHostMaintenance: Terminate 8.5.4. Sample customized install-config.yaml file for shared VPC installation There are several configuration parameters which are required to install OpenShift Container Platform on GCP using a shared VPC. The following is a sample install-config.yaml file which demonstrates these fields. Important This sample YAML file is provided for reference only. You must modify this file with the correct values for your environment and cluster. apiVersion: v1 baseDomain: example.com credentialsMode: Passthrough 1 metadata: name: cluster_name platform: gcp: computeSubnet: shared-vpc-subnet-1 2 controlPlaneSubnet: shared-vpc-subnet-2 3 network: shared-vpc 4 networkProjectID: host-project-name 5 projectID: service-project-name 6 region: us-east1 defaultMachinePlatform: tags: 7 - global-tag1 controlPlane: name: master platform: gcp: tags: 8 - control-plane-tag1 type: n2-standard-4 zones: - us-central1-a - us-central1-c replicas: 3 compute: - name: worker platform: gcp: tags: 9 - compute-tag1 type: n2-standard-4 zones: - us-central1-a - us-central1-c replicas: 3 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 pullSecret: '{"auths": ...}' sshKey: ssh-ed25519 AAAA... 10 1 credentialsMode must be set to Passthrough or Manual . See the "Prerequisites" section for the required GCP permissions that your service account must have. 2 The name of the subnet in the shared VPC for compute machines to use. 3 The name of the subnet in the shared VPC for control plane machines to use. 4 The name of the shared VPC. 5 The name of the host project where the shared VPC exists. 6 The name of the GCP project where you want to install the cluster. 7 8 9 Optional. One or more network tags to apply to compute machines, control plane machines, or all machines. 10 You can optionally provide the sshKey value that you use to access the machines in your cluster. 8.5.5. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. 
You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . 
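As a quick, illustrative check rather than a documented step, you can inspect the resulting cluster-wide proxy configuration after the cluster is up; both commands below are standard OpenShift CLI calls:
oc get proxy/cluster -o yaml                          # shows httpProxy, httpsProxy, noProxy, and trustedCA in the spec and status
oc get configmap user-ca-bundle -n openshift-config   # present only when additionalTrustBundle was provided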
Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 8.6. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.15. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.15 Linux Clients entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.15 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.15 macOS Clients entry and save the file. Note For macOS arm64, choose the OpenShift v4.15 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification Verify your installation by using an oc command: USD oc <command> 8.7. Alternatives to storing administrator-level secrets in the kube-system project By default, administrator secrets are stored in the kube-system project. If you configured the credentialsMode parameter in the install-config.yaml file to Manual , you must use one of the following alternatives: To manage long-term cloud credentials manually, follow the procedure in Manually creating long-term credentials . To implement short-term credentials that are managed outside the cluster for individual components, follow the procedures in Configuring a GCP cluster to use short-term credentials . 8.7.1. Manually creating long-term credentials The Cloud Credential Operator (CCO) can be put into manual mode prior to installation in environments where the cloud identity and access management (IAM) APIs are not reachable, or the administrator prefers not to store an administrator-level credential secret in the cluster kube-system namespace. 
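Once the cluster exists, you can confirm which mode the Cloud Credential Operator ended up in. The following sketch is illustrative only; the secret name shown is the conventional root credential secret for GCP and is an assumption about your environment:
oc get cloudcredential cluster -o jsonpath='{.spec.credentialsMode}{"\n"}'     # prints Manual when manual mode is in effect
oc get secret gcp-credentials -n kube-system 2>/dev/null || echo "no administrator-level credential secret present"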
Procedure Add the following granular permissions to the GCP account that the installation program uses: Example 8.1. Required GCP permissions compute.machineTypes.list compute.regions.list compute.zones.list dns.changes.create dns.changes.get dns.managedZones.create dns.managedZones.delete dns.managedZones.get dns.managedZones.list dns.networks.bindPrivateDNSZone dns.resourceRecordSets.create dns.resourceRecordSets.delete dns.resourceRecordSets.list If you did not set the credentialsMode parameter in the install-config.yaml configuration file to Manual , modify the value as shown: Sample configuration file snippet apiVersion: v1 baseDomain: example.com credentialsMode: Manual # ... If you have not previously created installation manifest files, do so by running the following command: USD openshift-install create manifests --dir <installation_directory> where <installation_directory> is the directory in which the installation program creates files. Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest custom resources (CRs) from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. This command creates a YAML file for each CredentialsRequest object. Sample CredentialsRequest object apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator ... spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: GCPProviderSpec predefinedRoles: - roles/storage.admin - roles/iam.serviceAccountUser skipServiceCheck: true ... Create YAML files for secrets in the openshift-install manifests directory that you generated previously. The secrets must be stored using the namespace and secret name defined in the spec.secretRef for each CredentialsRequest object. Sample CredentialsRequest object with secrets apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator ... spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 ... secretRef: name: <component_secret> namespace: <component_namespace> ... Sample Secret object apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: service_account.json: <base64_encoded_gcp_service_account_file> Important Before upgrading a cluster that uses manually maintained credentials, you must ensure that the CCO is in an upgradeable state. 8.7.2. Configuring a GCP cluster to use short-term credentials To install a cluster that is configured to use GCP Workload Identity, you must configure the CCO utility and create the required GCP resources for your cluster. 8.7.2.1. 
Configuring the Cloud Credential Operator utility To create and manage cloud credentials from outside of the cluster when the Cloud Credential Operator (CCO) is operating in manual mode, extract and prepare the CCO utility ( ccoctl ) binary. Note The ccoctl utility is a Linux binary that must run in a Linux environment. Prerequisites You have access to an OpenShift Container Platform account with cluster administrator access. You have installed the OpenShift CLI ( oc ). You have added one of the following authentication options to the GCP account that the installation program uses: The IAM Workload Identity Pool Admin role. The following granular permissions: Example 8.2. Required GCP permissions compute.projects.get iam.googleapis.com/workloadIdentityPoolProviders.create iam.googleapis.com/workloadIdentityPoolProviders.get iam.googleapis.com/workloadIdentityPools.create iam.googleapis.com/workloadIdentityPools.delete iam.googleapis.com/workloadIdentityPools.get iam.googleapis.com/workloadIdentityPools.undelete iam.roles.create iam.roles.delete iam.roles.list iam.roles.undelete iam.roles.update iam.serviceAccounts.create iam.serviceAccounts.delete iam.serviceAccounts.getIamPolicy iam.serviceAccounts.list iam.serviceAccounts.setIamPolicy iam.workloadIdentityPoolProviders.get iam.workloadIdentityPools.delete resourcemanager.projects.get resourcemanager.projects.getIamPolicy resourcemanager.projects.setIamPolicy storage.buckets.create storage.buckets.delete storage.buckets.get storage.buckets.getIamPolicy storage.buckets.setIamPolicy storage.objects.create storage.objects.delete storage.objects.list Procedure Set a variable for the OpenShift Container Platform release image by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Obtain the CCO container image from the OpenShift Container Platform release image by running the following command: USD CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret) Note Ensure that the architecture of the USDRELEASE_IMAGE matches the architecture of the environment in which you will use the ccoctl tool. Extract the ccoctl binary from the CCO container image within the OpenShift Container Platform release image by running the following command: USD oc image extract USDCCO_IMAGE --file="/usr/bin/ccoctl" -a ~/.pull-secret Change the permissions to make ccoctl executable by running the following command: USD chmod 775 ccoctl Verification To verify that ccoctl is ready to use, display the help file. Use a relative file name when you run the command, for example: USD ./ccoctl.rhel9 Example output OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: alibabacloud Manage credentials objects for alibaba cloud aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use "ccoctl [command] --help" for more information about a command. 8.7.2.2. Creating GCP resources with the Cloud Credential Operator utility You can use the ccoctl gcp create-all command to automate the creation of GCP resources. Note By default, ccoctl creates objects in the directory in which the commands are run. To create the objects in a different directory, use the --output-dir flag. 
This procedure uses <path_to_ccoctl_output_dir> to refer to this directory. Prerequisites You must have: Extracted and prepared the ccoctl binary. Procedure Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest objects from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. Note This command might take a few moments to run. Use the ccoctl tool to process all CredentialsRequest objects by running the following command: USD ccoctl gcp create-all \ --name=<name> \ 1 --region=<gcp_region> \ 2 --project=<gcp_project_id> \ 3 --credentials-requests-dir=<path_to_credentials_requests_directory> 4 1 Specify the user-defined name for all created GCP resources used for tracking. 2 Specify the GCP region in which cloud resources will be created. 3 Specify the GCP project ID in which cloud resources will be created. 4 Specify the directory containing the files of CredentialsRequest manifests to create GCP service accounts. Note If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter. Verification To verify that the OpenShift Container Platform secrets are created, list the files in the <path_to_ccoctl_output_dir>/manifests directory: USD ls <path_to_ccoctl_output_dir>/manifests Example output cluster-authentication-02-config.yaml openshift-cloud-controller-manager-gcp-ccm-cloud-credentials-credentials.yaml openshift-cloud-credential-operator-cloud-credential-operator-gcp-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capg-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-gcp-pd-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-gcp-cloud-credentials-credentials.yaml You can verify that the IAM service accounts are created by querying GCP. For more information, refer to GCP documentation on listing IAM service accounts. 8.7.2.3. Incorporating the Cloud Credential Operator utility manifests To implement short-term security credentials managed outside the cluster for individual components, you must move the manifest files that the Cloud Credential Operator utility ( ccoctl ) created to the correct directories for the installation program. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have configured the Cloud Credential Operator utility ( ccoctl ). You have created the cloud provider resources that are required for your cluster with the ccoctl utility. 
Procedure Add the following granular permissions to the GCP account that the installation program uses: Example 8.3. Required GCP permissions compute.machineTypes.list compute.regions.list compute.zones.list dns.changes.create dns.changes.get dns.managedZones.create dns.managedZones.delete dns.managedZones.get dns.managedZones.list dns.networks.bindPrivateDNSZone dns.resourceRecordSets.create dns.resourceRecordSets.delete dns.resourceRecordSets.list If you did not set the credentialsMode parameter in the install-config.yaml configuration file to Manual , modify the value as shown: Sample configuration file snippet apiVersion: v1 baseDomain: example.com credentialsMode: Manual # ... If you have not previously created installation manifest files, do so by running the following command: USD openshift-install create manifests --dir <installation_directory> where <installation_directory> is the directory in which the installation program creates files. Copy the manifests that the ccoctl utility generated to the manifests directory that the installation program created by running the following command: USD cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/ Copy the tls directory that contains the private key to the installation directory: USD cp -a /<path_to_ccoctl_output_dir>/tls . 8.8. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have the OpenShift Container Platform installation program and the pull secret for your cluster. You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. Procedure Remove any existing GCP credentials that do not use the service account key for the GCP account that you configured for your cluster and that are stored in the following locations: The GOOGLE_CREDENTIALS , GOOGLE_CLOUD_KEYFILE_JSON , or GCLOUD_KEYFILE_JSON environment variables The ~/.gcp/osServiceAccount.json file The gcloud cli default credentials Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Optional: You can reduce the number of permissions for the service account that you used to install the cluster. If you assigned the Owner role to your service account, you can remove that role and replace it with the Viewer role. If you included the Service Account Key Admin role, you can remove it. Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! 
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 8.9. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin Additional resources See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console. 8.10. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.15, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service 8.11. steps Customize your cluster . If necessary, you can opt out of remote health reporting .
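As an illustrative follow-up to logging in with the exported kubeconfig, the following standard checks give a quick view of cluster health; they are a sketch, not an additional documented requirement:
oc get clusterversion      # overall version and whether the installation has fully completed
oc get nodes               # control plane and compute nodes should report Ready
oc get clusteroperators    # operators should report Available=True and Degraded=False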
[ "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "tar -xvf openshift-install-linux.tar.gz", "mkdir <installation_directory>", "controlPlane: platform: gcp: secureBoot: Enabled", "compute: - platform: gcp: secureBoot: Enabled", "platform: gcp: defaultMachinePlatform: secureBoot: Enabled", "controlPlane: platform: gcp: confidentialCompute: Enabled 1 type: n2d-standard-8 2 onHostMaintenance: Terminate 3", "compute: - platform: gcp: confidentialCompute: Enabled type: n2d-standard-8 onHostMaintenance: Terminate", "platform: gcp: defaultMachinePlatform: confidentialCompute: Enabled type: n2d-standard-8 onHostMaintenance: Terminate", "apiVersion: v1 baseDomain: example.com credentialsMode: Passthrough 1 metadata: name: cluster_name platform: gcp: computeSubnet: shared-vpc-subnet-1 2 controlPlaneSubnet: shared-vpc-subnet-2 3 network: shared-vpc 4 networkProjectID: host-project-name 5 projectID: service-project-name 6 region: us-east1 defaultMachinePlatform: tags: 7 - global-tag1 controlPlane: name: master platform: gcp: tags: 8 - control-plane-tag1 type: n2-standard-4 zones: - us-central1-a - us-central1-c replicas: 3 compute: - name: worker platform: gcp: tags: 9 - compute-tag1 type: n2-standard-4 zones: - us-central1-a - us-central1-c replicas: 3 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 pullSecret: '{\"auths\": ...}' sshKey: ssh-ed25519 AAAA... 10", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "apiVersion: v1 baseDomain: example.com credentialsMode: Manual", "openshift-install create manifests --dir <installation_directory>", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: GCPProviderSpec predefinedRoles: - roles/storage.admin - roles/iam.serviceAccountUser skipServiceCheck: true", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 secretRef: name: <component_secret> namespace: <component_namespace>", "apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: service_account.json: <base64_encoded_gcp_service_account_file>", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "CCO_IMAGE=USD(oc adm 
release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret)", "oc image extract USDCCO_IMAGE --file=\"/usr/bin/ccoctl\" -a ~/.pull-secret", "chmod 775 ccoctl", "./ccoctl.rhel9", "OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: alibabacloud Manage credentials objects for alibaba cloud aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use \"ccoctl [command] --help\" for more information about a command.", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3", "ccoctl gcp create-all --name=<name> \\ 1 --region=<gcp_region> \\ 2 --project=<gcp_project_id> \\ 3 --credentials-requests-dir=<path_to_credentials_requests_directory> 4", "ls <path_to_ccoctl_output_dir>/manifests", "cluster-authentication-02-config.yaml openshift-cloud-controller-manager-gcp-ccm-cloud-credentials-credentials.yaml openshift-cloud-credential-operator-cloud-credential-operator-gcp-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capg-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-gcp-pd-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-gcp-cloud-credentials-credentials.yaml", "apiVersion: v1 baseDomain: example.com credentialsMode: Manual", "openshift-install create manifests --dir <installation_directory>", "cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/", "cp -a /<path_to_ccoctl_output_dir>/tls .", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin" ]
https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.15/html/installing_on_gcp/installing-gcp-shared-vpc
Chapter 18. Managing asset metadata and version history
Chapter 18. Managing asset metadata and version history Most assets within Business Central have metadata and version information associated with them to help you identify and organize them within your projects. You can manage asset metadata and version history from the asset designer in Business Central. Procedure In Business Central, go to Menu Design Projects and click the project name. Select the asset from the list to open the asset designer. In the asset designer window, select Overview . If an asset doesn't have an Overview tab, then no metadata is associated with that asset. Select the Version History or Metadata tab to edit and update version and metadata details. Note Another way to update the working version of an asset is by clicking Latest Version in the top-right corner of the asset designer. Figure 18.1. Latest version of an asset Click Save to save changes.
null
https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/deploying_and_managing_red_hat_process_automation_manager_services/assets_metadata_managing_proc
Providing feedback on Red Hat build of OpenJDK documentation
Providing feedback on Red Hat build of OpenJDK documentation To report an error or to improve our documentation, log in to your Red Hat Jira account and submit an issue. If you do not have a Red Hat Jira account, then you will be prompted to create an account. Procedure Click the following link to create a ticket . Enter a brief description of the issue in the Summary . Provide a detailed description of the issue or enhancement in the Description . Include a URL to where the issue occurs in the documentation. Clicking Submit creates and routes the issue to the appropriate documentation team.
null
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/8/html/eclipse_temurin_8.0.372_release_notes/providing-direct-documentation-feedback_openjdk
Chapter 2. Differences from upstream OpenJDK 8
Chapter 2. Differences from upstream OpenJDK 8 Red Hat build of OpenJDK in Red Hat Enterprise Linux (RHEL) contains a number of structural changes from the upstream distribution of OpenJDK. The Microsoft Windows version of Red Hat build of OpenJDK attempts to follow RHEL updates as closely as possible. The following list details the most notable Red Hat build of OpenJDK 8 changes: FIPS support. Red Hat build of OpenJDK 8 automatically detects whether RHEL is in FIPS mode and automatically configures Red Hat build of OpenJDK 8 to operate in that mode. This change does not apply to Red Hat build of OpenJDK builds for Microsoft Windows. Cryptographic policy support. Red Hat build of OpenJDK 8 obtains the list of enabled cryptographic algorithms and key size constraints from the RHEL system configuration. These configuration components are used by the Transport Layer Security (TLS) encryption protocol, the certificate path validation, and any signed JARs. You can set different security profiles to balance safety and compatibility. This change does not apply to Red Hat build of OpenJDK builds for Microsoft Windows. Red Hat build of OpenJDK on RHEL dynamically links against native libraries such as zlib for archive format support and libjpeg-turbo , libpng , and giflib for image support. RHEL also dynamically links against Harfbuzz and Freetype for font rendering and management. This change does not apply to Red Hat build of OpenJDK builds for Microsoft Windows. The src.zip file includes the source for all the JAR libraries shipped with Red Hat build of OpenJDK. Red Hat build of OpenJDK on RHEL uses system-wide timezone data files as a source for timezone information. Red Hat build of OpenJDK on RHEL uses system-wide CA certificates. Red Hat build of OpenJDK on Microsoft Windows includes the latest available timezone data from RHEL. Red Hat build of OpenJDK on Microsoft Windows uses the latest available CA certificate from RHEL. Additional resources See, Improve system FIPS detection (RHEL Planning Jira) See, Using system-wide cryptographic policies (RHEL documentation)
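As an illustrative sketch of how these integrations can be observed on RHEL (not a documented procedure), the following commands report the system state that Red Hat build of OpenJDK 8 consumes; the keystore path and the default changeit password are assumptions based on a typical java-1.8.0-openjdk layout:
fips-mode-setup --check                    # reports whether the host runs in FIPS mode
update-crypto-policies --show              # prints the active system-wide cryptographic policy
keytool -list -keystore "$JAVA_HOME/jre/lib/security/cacerts" -storepass changeit | head   # system CA certificates visible to the JDK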
null
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/8/html/getting_started_with_red_hat_build_of_openjdk_8/rn-openjdk-diff-from-upstream
C.7. ClusterCacheStats
C.7. ClusterCacheStats org.infinispan.stats.impl.ClusterCacheStatsImpl The ClusterCacheStats component contains statistics such as timings, hit/miss ratio, and operation information for the whole cluster.
Table C.12. Attributes
Name | Description | Type | Writable
activations | The total number of activations in the cluster. | long | No
averageReadTime | Cluster wide total average number of milliseconds for a read operation on the cache. | long | No
averageRemoveTime | Cluster wide total average number of milliseconds for a remove operation in the cache. | long | No
averageWriteTime | Cluster wide average number of milliseconds for a write operation in the cache. | long | No
cacheLoaderLoads | The total number of cacheloader load operations in the cluster. | long | No
cacheLoaderMisses | The total number of cacheloader load misses in the cluster. | long | No
evictions | Cluster wide total number of cache eviction operations. | long | No
hitRatio | Cluster wide total percentage hit/(hit+miss) ratio for this cache. | double | No
hits | Cluster wide total number of cache hits. | long | No
invalidations | The total number of invalidations in the cluster. | long | No
misses | Cluster wide total number of cache attribute misses. | long | No
numberOfEntries | Cluster wide total number of entries currently in the cache. | int | No
numberOfLocksAvailable | Total number of exclusive locks available in the cluster. | int | No
numberOfLocksHeld | The total number of locks held in the cluster. | int | No
passivations | The total number of passivations in the cluster. | long | No
readWriteRatio | Cluster wide read/writes ratio for the cache. | double | No
removeHits | Cluster wide total number of cache removal hits. | double | No
removeMisses | Cluster wide total number of cache removals where keys were not found. | long | No
statisticsEnabled | Enables or disables the gathering of statistics by this component. | boolean | Yes
storeWrites | The total number of cachestore store operations in the cluster. | long | No
stores | Cluster wide total number of cache attribute put operations. | long | No
timeSinceStart | Number of seconds since the first cache node started. | long | No
Table C.13. Operations
Name | Description | Signature
setStaleStatsTreshold | Sets the threshold for cluster wide stats refresh (in milliseconds). | void setStaleStatsTreshold(long staleStatsThreshold)
resetStatistics | Resets statistics gathered by this component. | void resetStatistics()
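These attributes and operations are exposed over JMX, so they can be read with any JMX client. The following is a rough sketch only, using the third-party jmxterm command-line client; the JMX port, cache name, cache manager name, and the exact ObjectName pattern are assumptions that depend entirely on your server configuration:
# List the available MBeans first, then read a few attributes from the ClusterCacheStats component.
java -jar jmxterm.jar -l localhost:9999 -n <<'EOF'
beans -d jboss.infinispan
get -b jboss.infinispan:type=Cache,name="default(dist_sync)",manager="clustered",component=ClusterCacheStats hits misses hitRatio
EOF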
null
https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/administration_and_configuration_guide/clustercachestats
Chapter 6. Creating and uploading a customized RHEL system image to Amazon Web Services by using Insights image builder
Chapter 6. Creating and uploading a customized RHEL system image to Amazon Web Services by using Insights image builder You can create customized RHEL system images by using Insights image builder, and upload those images to the Amazon Web Services (AWS) target environment. Warning Red Hat Hybrid Cloud Console does not support uploading the images that you created for the Amazon Web Services (AWS) target environment to GovCloud regions. 6.1. Creating and uploading a customized RHEL system image to AWS by using Insights image builder Complete the following steps to create customized system images using Insights image builder and upload those images to Amazon Web Services (AWS). Prerequisites You have an AWS account created. You have a Red Hat account. Procedure Access Insights image builder . The Insights image builder dashboard appears. Click Create image . The Create image dialog wizard opens. On the Image output page, complete the following steps: From the Release list, select the Release that you want to use: for example, choose Red Hat Enterprise Linux (RHEL). From the Select target environments option, select Amazon Web Services as the target environment . Click . On the Target Environment - Amazon Web Services page, enter your AWS account ID and click . You can find your AWS account ID by accessing the option Account on the AWS console. On the Registration page, select the type of registration that you want to use. You can select from these options: Register images with Red Hat : Register and connect image instances, subscriptions and insights with Red Hat. For details on how to embed an activation key and register systems on first boot, see Creating a customized system image with an embed subscription by using Insights image builder . Register image instances only : Register and connect only image instances and subscriptions with Red Hat. Register later : Register the system after the image creation. Click . Optional: On the Packages page, add packages to your image. See Adding packages during image creation by using Insights image builder . On the Name image page, enter a name for your image and click . If you do not enter a name, you can find the image you created by its UUID. On the Review page, review the details about the image creation and click Create image . After you complete the steps in the Create image wizard, the image builder dashboard is displayed. Insights image builder starts the compose of a RHEL Amazon Machine Image (AMI) for the x86_64 architecture and uploads it to AWS EC2. Then, it will share the AMI image with the account you specified. On the dashboard, you can see details such as the Image UUID , the cloud target environment , the image OS release and the status of the image creation. Possible statuses: Pending: the image upload and cloud registration is being processed. In Progress: the image upload and cloud registration is ongoing. Ready: the image upload and cloud registration is completed. Failed: the image upload and cloud registration failed. Note The image build, upload and cloud registration processes can take up to ten minutes to complete. Verification Check if the image status is Ready . It means that the image upload and cloud registration is completed successfully. Note The image artifacts are saved for 14 days and expire after that. Ensure that you transfer the image to your account to avoid losing it. 6.2.
6.2. Accessing your customized RHEL system image for AWS from your account After the image is built, uploaded, and the cloud registration process status is marked as Ready , you can access the Amazon Web Services (AWS) image you created and shared with your AWS EC2 account. Prerequisites You have access to your AWS Management Console . Procedure Access your AWS account and navigate to Services → EC2. In the upper right menu, verify that you are in the correct region: us-east-1 . In the left side menu, under Images , click AMIs . The dashboard with the Owned by me images opens. From the dropdown menu, choose Private images . You can see the image successfully shared with the AWS account you specified. 6.3. Launching your customized RHEL system image for AWS from your AWS EC2 You can launch the image you successfully shared with the AWS EC2 account you specified. To do so, follow these steps: Prerequisites You have access to your customized image on AWS. See Accessing your customized RHEL system image for AWS from your account . Procedure From the list of images, select the image you want to launch. At the top of the panel, click Launch . You are redirected to the Choose an Instance Type window. Choose the instance type according to the resources you need to launch your image. Click Review and Launch . Review your instance launch details. You can edit each section, such as Security or Storage , if you need to make any changes. After you finish the review, click Launch . To launch the instance, you must select a key pair to access it. Create a new key pair in EC2 and attach it to the new instance. From the drop-down list, select Create a new key pair . Enter a name for the new key pair. It generates a new key pair. Click Download Key Pair to save the new key pair on your local system. Then, you can click Launch Instance to launch your instance. You can check the status of the instance; it shows as Initializing . After the instance status is running , the Connect button becomes available. Click Connect . A popup window appears with instructions on how to connect by using SSH. Select A standalone SSH client as the preferred connection method and open a terminal. In the location where you store your private key, make sure that your key is not publicly viewable, so that SSH works. To do so, run the command: Connect to your instance by using its Public DNS: Type yes to confirm that you want to continue connecting. As a result, you are connected to your instance over SSH. Verification From a terminal, check that you can perform actions while connected to your instance by using SSH. 6.4. Copying your customized RHEL system image for AWS to a different region on your AWS EC2 You can copy the image you successfully shared with the Amazon Web Services EC2 to your own account. Doing so ensures that the image you shared and copied remains available until you delete it, instead of expiring after some time. To copy your image to your own account, follow these steps: Prerequisites You have access to your customized image on AWS. See Accessing your customized RHEL system image for AWS from your account . Procedure From the list of Public images , select the image you want to copy. At the top of the panel, click Actions . From the dropdown menu, choose Copy AMI . A popup window appears. Choose the Destination region and click Copy AMI . After the copying process is complete, you are provided with the new AMI ID . You can launch a new instance in the new region.
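You can also perform the copy from the command line instead of the EC2 console. The following is a minimal sketch and not part of the documented procedure; the source region, destination region, AMI ID, and image name are placeholder assumptions that you must replace with your own values.
# Copy the shared AMI into your own account in another region.
$ aws ec2 copy-image \
    --source-region us-east-1 \
    --source-image-id <source_ami_id> \
    --region <destination_region> \
    --name "rhel-custom-image-copy"

# Optionally block until the copied AMI becomes available in the destination region.
$ aws ec2 wait image-available \
    --region <destination_region> \
    --image-ids <new_ami_id>
The copy-image call returns the new AMI ID, which you can then use to launch instances in the destination region.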
See Launching your customized RHEL system image for AWS from your AWS EC2 . Note When you copy an image to a different region, it results in a separate and new AMI in the destination region, with a unique AMI ID . 6.5. Sharing your AWS images to a different region You can share the images you create with different regions, and therefore access the same image in more regions, creating more images from one build with the exact same content. After the image status becomes Ready , you can launch your instance in the selected AWS region. To push the image to a different region or regions, follow these steps: Prerequisites You created an AWS image. Procedure From the Image Builder table, select the AWS image you want to upload to a different region. From the Node options icon (⫶) , select Share to new region . A Share to new region wizard opens. From the Select region dropdown menu, choose the region or regions where you want to upload your image. Click Share . After the image status becomes Ready , you can launch your new instance in the selected AWS region. Verification Click the Node options icon (⫶) and select an AWS region to launch the image. The Image Builder table shows the build status of the image or images that you shared with the new region or regions. Additional resources Launching your customized RHEL system image for AWS from your AWS EC2 .
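Outside the console, a quick way to confirm that the shared image is visible and available in each new region is the AWS CLI. This is a hedged sketch, not part of the documented procedure; the region value is a placeholder assumption.
# Verify that the newly shared AMI is available in the target region.
$ aws ec2 describe-images \
    --region <new_region> \
    --executable-users self \
    --query 'Images[].{Id:ImageId,Name:Name,State:State}' \
    --output table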
[ "chmod 400 <your-instance-name.pem>", "ssh -i \"<_your-instance-name.pem_> ec2-user@<_your-instance-IP-address_>\"" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/creating_customized_images_by_using_insights_image_builder/creating-a-customized-rhel-system-image-for-aws-using-image-builder
Builds
Builds OpenShift Container Platform 4.13 Builds Red Hat OpenShift Documentation Team
[ "kind: BuildConfig apiVersion: build.openshift.io/v1 metadata: name: \"ruby-sample-build\" 1 spec: runPolicy: \"Serial\" 2 triggers: 3 - type: \"GitHub\" github: secret: \"secret101\" - type: \"Generic\" generic: secret: \"secret101\" - type: \"ImageChange\" source: 4 git: uri: \"https://github.com/openshift/ruby-hello-world\" strategy: 5 sourceStrategy: from: kind: \"ImageStreamTag\" name: \"ruby-20-centos7:latest\" output: 6 to: kind: \"ImageStreamTag\" name: \"origin-ruby-sample:latest\" postCommit: 7 script: \"bundle exec rake test\"", "source: git: uri: https://github.com/openshift/ruby-hello-world.git 1 ref: \"master\" images: - from: kind: ImageStreamTag name: myinputimage:latest namespace: mynamespace paths: - destinationDir: app/dir/injected/dir 2 sourcePath: /usr/lib/somefile.jar contextDir: \"app/dir\" 3 dockerfile: \"FROM centos:7\\nRUN yum install -y httpd\" 4", "source: dockerfile: \"FROM centos:7\\nRUN yum install -y httpd\" 1", "source: git: uri: https://github.com/openshift/ruby-hello-world.git ref: \"master\" images: 1 - from: 2 kind: ImageStreamTag name: myinputimage:latest namespace: mynamespace paths: 3 - destinationDir: injected/dir 4 sourcePath: /usr/lib/somefile.jar 5 - from: kind: ImageStreamTag name: myotherinputimage:latest namespace: myothernamespace pullSecret: mysecret 6 paths: - destinationDir: injected/dir sourcePath: /usr/lib/somefile.jar", "oc secrets link builder dockerhub", "source: git: 1 uri: \"https://github.com/openshift/ruby-hello-world\" ref: \"master\" contextDir: \"app/dir\" 2 dockerfile: \"FROM openshift/ruby-22-centos7\\nUSER example\" 3", "source: git: uri: \"https://github.com/openshift/ruby-hello-world\" ref: \"master\" httpProxy: http://proxy.example.com httpsProxy: https://proxy.example.com noProxy: somedomain.com, otherdomain.com", "oc annotate secret mysecret 'build.openshift.io/source-secret-match-uri-1=ssh://bitbucket.atlassian.com:7999/*'", "kind: Secret apiVersion: v1 metadata: name: matches-all-corporate-servers-https-only annotations: build.openshift.io/source-secret-match-uri-1: https://*.mycorp.com/* data: --- kind: Secret apiVersion: v1 metadata: name: override-for-my-dev-servers-https-only annotations: build.openshift.io/source-secret-match-uri-1: https://mydev1.mycorp.com/* build.openshift.io/source-secret-match-uri-2: https://mydev2.mycorp.com/* data:", "oc annotate secret mysecret 'build.openshift.io/source-secret-match-uri-1=https://*.mycorp.com/*'", "apiVersion: \"build.openshift.io/v1\" kind: \"BuildConfig\" metadata: name: \"sample-build\" spec: output: to: kind: \"ImageStreamTag\" name: \"sample-image:latest\" source: git: uri: \"https://github.com/user/app.git\" sourceSecret: name: \"basicsecret\" strategy: sourceStrategy: from: kind: \"ImageStreamTag\" name: \"python-33-centos7:latest\"", "oc set build-secret --source bc/sample-build basicsecret", "oc create secret generic <secret_name> --from-file=<path/to/.gitconfig>", "[http] sslVerify=false", "cat .gitconfig", "[user] name = <name> email = <email> [http] sslVerify = false sslCert = /var/run/secrets/openshift.io/source/client.crt sslKey = /var/run/secrets/openshift.io/source/client.key sslCaInfo = /var/run/secrets/openshift.io/source/cacert.crt", "oc create secret generic <secret_name> --from-literal=username=<user_name> \\ 1 --from-literal=password=<password> \\ 2 --from-file=.gitconfig=.gitconfig --from-file=client.crt=/var/run/secrets/openshift.io/source/client.crt --from-file=cacert.crt=/var/run/secrets/openshift.io/source/cacert.crt 
--from-file=client.key=/var/run/secrets/openshift.io/source/client.key", "oc create secret generic <secret_name> --from-literal=username=<user_name> --from-literal=password=<password> --type=kubernetes.io/basic-auth", "oc create secret generic <secret_name> --from-literal=password=<token> --type=kubernetes.io/basic-auth", "ssh-keygen -t ed25519 -C \"[email protected]\"", "oc create secret generic <secret_name> --from-file=ssh-privatekey=<path/to/ssh/private/key> --from-file=<path/to/known_hosts> \\ 1 --type=kubernetes.io/ssh-auth", "cat intermediateCA.crt intermediateCA.crt rootCA.crt > ca.crt", "oc create secret generic mycert --from-file=ca.crt=</path/to/file> 1", "oc create secret generic <secret_name> --from-file=ssh-privatekey=<path/to/ssh/private/key> --from-file=<path/to/.gitconfig> --type=kubernetes.io/ssh-auth", "oc create secret generic <secret_name> --from-file=ca.crt=<path/to/certificate> --from-file=<path/to/.gitconfig>", "oc create secret generic <secret_name> --from-literal=username=<user_name> --from-literal=password=<password> --from-file=ca-cert=</path/to/file> --type=kubernetes.io/basic-auth", "oc create secret generic <secret_name> --from-literal=username=<user_name> --from-literal=password=<password> --from-file=</path/to/.gitconfig> --type=kubernetes.io/basic-auth", "oc create secret generic <secret_name> --from-literal=username=<user_name> --from-literal=password=<password> --from-file=</path/to/.gitconfig> --from-file=ca-cert=</path/to/file> --type=kubernetes.io/basic-auth", "apiVersion: v1 kind: Secret metadata: name: test-secret namespace: my-namespace type: Opaque 1 data: 2 username: <username> 3 password: <password> stringData: 4 hostname: myapp.mydomain.com 5", "oc create -f <filename>", "oc create secret generic dockerhub --from-file=.dockerconfigjson=<path/to/.docker/config.json> --type=kubernetes.io/dockerconfigjson", "apiVersion: v1 kind: Secret metadata: name: mysecret type: Opaque 1 data: username: <username> password: <password>", "apiVersion: v1 kind: Secret metadata: name: aregistrykey namespace: myapps type: kubernetes.io/dockerconfigjson 1 data: .dockerconfigjson:bm5ubm5ubm5ubm5ubm5ubm5ubm5ubmdnZ2dnZ2dnZ2dnZ2dnZ2dnZ2cgYXV0aCBrZXlzCg== 2", "oc create -f <your_yaml_file>.yaml", "oc logs secret-example-pod", "oc delete pod secret-example-pod", "apiVersion: v1 kind: Secret metadata: name: test-secret data: username: <username> 1 password: <password> 2 stringData: hostname: myapp.mydomain.com 3 secret.properties: |- 4 property1=valueA property2=valueB", "apiVersion: v1 kind: Pod metadata: name: secret-example-pod spec: containers: - name: secret-test-container image: busybox command: [ \"/bin/sh\", \"-c\", \"cat /etc/secret-volume/*\" ] volumeMounts: # name must match the volume name below - name: secret-volume mountPath: /etc/secret-volume readOnly: true volumes: - name: secret-volume secret: secretName: test-secret restartPolicy: Never", "apiVersion: v1 kind: Pod metadata: name: secret-example-pod spec: containers: - name: secret-test-container image: busybox command: [ \"/bin/sh\", \"-c\", \"export\" ] env: - name: TEST_SECRET_USERNAME_ENV_VAR valueFrom: secretKeyRef: name: test-secret key: username restartPolicy: Never", "apiVersion: build.openshift.io/v1 kind: BuildConfig metadata: name: secret-example-bc spec: strategy: sourceStrategy: env: - name: TEST_SECRET_USERNAME_ENV_VAR valueFrom: secretKeyRef: name: test-secret key: username", "oc create configmap settings-mvn --from-file=settings.xml=<path/to/settings.xml>", "apiVersion: core/v1 kind: 
ConfigMap metadata: name: settings-mvn data: settings.xml: | <settings> ... # Insert maven settings here </settings>", "oc create secret generic secret-mvn --from-file=ssh-privatekey=<path/to/.ssh/id_rsa> --type=kubernetes.io/ssh-auth", "apiVersion: core/v1 kind: Secret metadata: name: secret-mvn type: kubernetes.io/ssh-auth data: ssh-privatekey: | # Insert ssh private key, base64 encoded", "source: git: uri: https://github.com/wildfly/quickstart.git contextDir: helloworld configMaps: - configMap: name: settings-mvn secrets: - secret: name: secret-mvn", "oc new-build openshift/wildfly-101-centos7~https://github.com/wildfly/quickstart.git --context-dir helloworld --build-secret \"secret-mvn\" --build-config-map \"settings-mvn\"", "source: git: uri: https://github.com/wildfly/quickstart.git contextDir: helloworld configMaps: - configMap: name: settings-mvn destinationDir: \".m2\" secrets: - secret: name: secret-mvn destinationDir: \".ssh\"", "oc new-build openshift/wildfly-101-centos7~https://github.com/wildfly/quickstart.git --context-dir helloworld --build-secret \"secret-mvn:.ssh\" --build-config-map \"settings-mvn:.m2\"", "FROM centos/ruby-22-centos7 USER root COPY ./secret-dir /secrets COPY ./config / Create a shell script that will output secrets and ConfigMaps when the image is run RUN echo '#!/bin/sh' > /input_report.sh RUN echo '(test -f /secrets/secret1 && echo -n \"secret1=\" && cat /secrets/secret1)' >> /input_report.sh RUN echo '(test -f /config && echo -n \"relative-configMap=\" && cat /config)' >> /input_report.sh RUN chmod 755 /input_report.sh CMD [\"/bin/sh\", \"-c\", \"/input_report.sh\"]", "#!/bin/sh APP_VERSION=1.0 wget http://repository.example.com/app/app-USDAPP_VERSION.jar -O app.jar", "#!/bin/sh exec java -jar app.jar", "FROM jboss/base-jdk:8 ENV APP_VERSION 1.0 RUN wget http://repository.example.com/app/app-USDAPP_VERSION.jar -O app.jar EXPOSE 8080 CMD [ \"java\", \"-jar\", \"app.jar\" ]", "auths: index.docker.io/v1/: 1 auth: \"YWRfbGzhcGU6R2labnRib21ifTE=\" 2 email: \"[email protected]\" 3 docker.io/my-namespace/my-user/my-image: 4 auth: \"GzhYWRGU6R2fbclabnRgbkSp=\"\" email: \"[email protected]\" docker.io/my-namespace: 5 auth: \"GzhYWRGU6R2deesfrRgbkSp=\"\" email: \"[email protected]\"", "oc create secret generic dockerhub --from-file=.dockerconfigjson=<path/to/.docker/config.json> --type=kubernetes.io/dockerconfigjson", "spec: output: to: kind: \"DockerImage\" name: \"private.registry.com/org/private-image:latest\" pushSecret: name: \"dockerhub\"", "oc set build-secret --push bc/sample-build dockerhub", "oc secrets link builder dockerhub", "strategy: sourceStrategy: from: kind: \"DockerImage\" name: \"docker.io/user/private_repository\" pullSecret: name: \"dockerhub\"", "oc set build-secret --pull bc/sample-build dockerhub", "oc secrets link builder dockerhub", "env: - name: FIELDREF_ENV valueFrom: fieldRef: fieldPath: metadata.name", "apiVersion: build.openshift.io/v1 kind: BuildConfig metadata: name: secret-example-bc spec: strategy: sourceStrategy: env: - name: MYVAL valueFrom: secretKeyRef: key: myval name: mysecret", "spec: output: to: kind: \"ImageStreamTag\" name: \"sample-image:latest\"", "spec: output: to: kind: \"DockerImage\" name: \"my-registry.mycompany.com:5000/myimages/myimage:tag\"", "spec: output: to: kind: \"ImageStreamTag\" name: \"my-image:latest\" imageLabels: - name: \"vendor\" value: \"MyCompany\" - name: \"authoritative-source-url\" value: \"registry.mycompany.com\"", "strategy: dockerStrategy: from: kind: \"ImageStreamTag\" name: 
\"debian:latest\"", "strategy: dockerStrategy: dockerfilePath: dockerfiles/app1/Dockerfile", "dockerStrategy: env: - name: \"HTTP_PROXY\" value: \"http://myproxy.net:5187/\"", "dockerStrategy: buildArgs: - name: \"foo\" value: \"bar\"", "strategy: dockerStrategy: imageOptimizationPolicy: SkipLayers", "spec: dockerStrategy: volumes: - name: secret-mvn 1 mounts: - destinationPath: /opt/app-root/src/.ssh 2 source: type: Secret 3 secret: secretName: my-secret 4 - name: settings-mvn 5 mounts: - destinationPath: /opt/app-root/src/.m2 6 source: type: ConfigMap 7 configMap: name: my-config 8 - name: my-csi-volume 9 mounts: - destinationPath: /opt/app-root/src/some_path 10 source: type: CSI 11 csi: driver: csi.sharedresource.openshift.io 12 readOnly: true 13 volumeAttributes: 14 attribute: value", "strategy: sourceStrategy: from: kind: \"ImageStreamTag\" name: \"incremental-image:latest\" 1 incremental: true 2", "strategy: sourceStrategy: from: kind: \"ImageStreamTag\" name: \"builder-image:latest\" scripts: \"http://somehost.com/scripts_directory\" 1", "sourceStrategy: env: - name: \"DISABLE_ASSET_COMPILATION\" value: \"true\"", "#!/bin/bash restore build artifacts if [ \"USD(ls /tmp/s2i/artifacts/ 2>/dev/null)\" ]; then mv /tmp/s2i/artifacts/* USDHOME/. fi move the application source mv /tmp/s2i/src USDHOME/src build application artifacts pushd USD{HOME} make all install the artifacts make install popd", "#!/bin/bash run the application /opt/application/run.sh", "#!/bin/bash pushd USD{HOME} if [ -d deps ]; then # all deps contents to tar stream tar cf - deps fi popd", "#!/bin/bash inform the user how to use the image cat <<EOF This is a S2I sample builder image, to use it, install https://github.com/openshift/source-to-image EOF", "spec: sourceStrategy: volumes: - name: secret-mvn 1 mounts: - destinationPath: /opt/app-root/src/.ssh 2 source: type: Secret 3 secret: secretName: my-secret 4 - name: settings-mvn 5 mounts: - destinationPath: /opt/app-root/src/.m2 6 source: type: ConfigMap 7 configMap: name: my-config 8 - name: my-csi-volume 9 mounts: - destinationPath: /opt/app-root/src/some_path 10 source: type: CSI 11 csi: driver: csi.sharedresource.openshift.io 12 readOnly: true 13 volumeAttributes: 14 attribute: value", "strategy: customStrategy: from: kind: \"DockerImage\" name: \"openshift/sti-image-builder\"", "strategy: customStrategy: secrets: - secretSource: 1 name: \"secret1\" mountPath: \"/tmp/secret1\" 2 - secretSource: name: \"secret2\" mountPath: \"/tmp/secret2\"", "customStrategy: env: - name: \"HTTP_PROXY\" value: \"http://myproxy.net:5187/\"", "oc set env <enter_variables>", "kind: \"BuildConfig\" apiVersion: \"v1\" metadata: name: \"sample-pipeline\" spec: strategy: jenkinsPipelineStrategy: jenkinsfile: |- node('agent') { stage 'build' openshiftBuild(buildConfig: 'ruby-sample-build', showBuildLogs: 'true') stage 'deploy' openshiftDeploy(deploymentConfig: 'frontend') }", "kind: \"BuildConfig\" apiVersion: \"v1\" metadata: name: \"sample-pipeline\" spec: source: git: uri: \"https://github.com/openshift/ruby-hello-world\" strategy: jenkinsPipelineStrategy: jenkinsfilePath: some/repo/dir/filename 1", "jenkinsPipelineStrategy: env: - name: \"FOO\" value: \"BAR\"", "oc project <project_name>", "oc new-app jenkins-ephemeral 1", "kind: \"BuildConfig\" apiVersion: \"v1\" metadata: name: \"nodejs-sample-pipeline\" spec: strategy: jenkinsPipelineStrategy: jenkinsfile: <pipeline content from below> type: JenkinsPipeline", "def templatePath = 
'https://raw.githubusercontent.com/openshift/nodejs-ex/master/openshift/templates/nodejs-mongodb.json' 1 def templateName = 'nodejs-mongodb-example' 2 pipeline { agent { node { label 'nodejs' 3 } } options { timeout(time: 20, unit: 'MINUTES') 4 } stages { stage('preamble') { steps { script { openshift.withCluster() { openshift.withProject() { echo \"Using project: USD{openshift.project()}\" } } } } } stage('cleanup') { steps { script { openshift.withCluster() { openshift.withProject() { openshift.selector(\"all\", [ template : templateName ]).delete() 5 if (openshift.selector(\"secrets\", templateName).exists()) { 6 openshift.selector(\"secrets\", templateName).delete() } } } } } } stage('create') { steps { script { openshift.withCluster() { openshift.withProject() { openshift.newApp(templatePath) 7 } } } } } stage('build') { steps { script { openshift.withCluster() { openshift.withProject() { def builds = openshift.selector(\"bc\", templateName).related('builds') timeout(5) { 8 builds.untilEach(1) { return (it.object().status.phase == \"Complete\") } } } } } } } stage('deploy') { steps { script { openshift.withCluster() { openshift.withProject() { def rm = openshift.selector(\"dc\", templateName).rollout() timeout(5) { 9 openshift.selector(\"dc\", templateName).related('pods').untilEach(1) { return (it.object().status.phase == \"Running\") } } } } } } } stage('tag') { steps { script { openshift.withCluster() { openshift.withProject() { openshift.tag(\"USD{templateName}:latest\", \"USD{templateName}-staging:latest\") 10 } } } } } } }", "oc create -f nodejs-sample-pipeline.yaml", "oc create -f https://raw.githubusercontent.com/openshift/origin/master/examples/jenkins/pipeline/nodejs-sample-pipeline.yaml", "oc start-build nodejs-sample-pipeline", "FROM registry.redhat.io/rhel8/buildah In this example, `/tmp/build` contains the inputs that build when this custom builder image is run. Normally the custom builder image fetches this content from some location at build time, by using git clone as an example. ADD dockerfile.sample /tmp/input/Dockerfile ADD build.sh /usr/bin RUN chmod a+x /usr/bin/build.sh /usr/bin/build.sh contains the actual custom build logic that will be run when this custom builder image is run. ENTRYPOINT [\"/usr/bin/build.sh\"]", "FROM registry.access.redhat.com/ubi9/ubi RUN touch /tmp/build", "#!/bin/sh Note that in this case the build inputs are part of the custom builder image, but normally this is retrieved from an external source. cd /tmp/input OUTPUT_REGISTRY and OUTPUT_IMAGE are env variables provided by the custom build framework TAG=\"USD{OUTPUT_REGISTRY}/USD{OUTPUT_IMAGE}\" performs the build of the new image defined by dockerfile.sample buildah --storage-driver vfs bud --isolation chroot -t USD{TAG} . buildah requires a slight modification to the push secret provided by the service account to use it for pushing the image cp /var/run/secrets/openshift.io/push/.dockercfg /tmp (echo \"{ \\\"auths\\\": \" ; cat /var/run/secrets/openshift.io/push/.dockercfg ; echo \"}\") > /tmp/.dockercfg push the new image to the target for the build buildah --storage-driver vfs push --tls-verify=false --authfile /tmp/.dockercfg USD{TAG}", "oc new-build --binary --strategy=docker --name custom-builder-image", "oc start-build custom-builder-image --from-dir . 
-F", "kind: BuildConfig apiVersion: build.openshift.io/v1 metadata: name: sample-custom-build labels: name: sample-custom-build annotations: template.alpha.openshift.io/wait-for-ready: 'true' spec: strategy: type: Custom customStrategy: forcePull: true from: kind: ImageStreamTag name: custom-builder-image:latest namespace: <yourproject> 1 output: to: kind: ImageStreamTag name: sample-custom:latest", "oc create -f buildconfig.yaml", "kind: ImageStream apiVersion: image.openshift.io/v1 metadata: name: sample-custom spec: {}", "oc create -f imagestream.yaml", "oc start-build sample-custom-build -F", "oc start-build <buildconfig_name>", "oc start-build --from-build=<build_name>", "oc start-build <buildconfig_name> --follow", "oc start-build <buildconfig_name> --env=<key>=<value>", "oc start-build hello-world --from-repo=../hello-world --commit=v2", "oc cancel-build <build_name>", "oc cancel-build <build1_name> <build2_name> <build3_name>", "oc cancel-build bc/<buildconfig_name>", "oc cancel-build bc/<buildconfig_name>", "oc delete bc <BuildConfigName>", "oc delete --cascade=false bc <BuildConfigName>", "oc describe build <build_name>", "oc describe build <build_name>", "oc logs -f bc/<buildconfig_name>", "oc logs --version=<number> bc/<buildconfig_name>", "sourceStrategy: env: - name: \"BUILD_LOGLEVEL\" value: \"2\" 1", "type: \"GitHub\" github: secretReference: name: \"mysecret\"", "- kind: Secret apiVersion: v1 metadata: name: mysecret creationTimestamp: data: WebHookSecretKey: c2VjcmV0dmFsdWUx", "type: \"GitHub\" github: secretReference: name: \"mysecret\"", "https://<openshift_api_host:port>/apis/build.openshift.io/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/github", "oc describe bc/<name-of-your-BuildConfig>", "<https://api.starter-us-east-1.openshift.com:443/apis/build.openshift.io/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/github", "curl -H \"X-GitHub-Event: push\" -H \"Content-Type: application/json\" -k -X POST --data-binary @payload.json https://<openshift_api_host:port>/apis/build.openshift.io/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/github", "type: \"GitLab\" gitlab: secretReference: name: \"mysecret\"", "https://<openshift_api_host:port>/apis/build.openshift.io/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/gitlab", "oc describe bc <name>", "curl -H \"X-GitLab-Event: Push Hook\" -H \"Content-Type: application/json\" -k -X POST --data-binary @payload.json https://<openshift_api_host:port>/apis/build.openshift.io/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/gitlab", "type: \"Bitbucket\" bitbucket: secretReference: name: \"mysecret\"", "https://<openshift_api_host:port>/apis/build.openshift.io/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/bitbucket", "oc describe bc <name>", "curl -H \"X-Event-Key: repo:push\" -H \"Content-Type: application/json\" -k -X POST --data-binary @payload.json https://<openshift_api_host:port>/apis/build.openshift.io/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/bitbucket", "type: \"Generic\" generic: secretReference: name: \"mysecret\" allowEnv: true 1", "https://<openshift_api_host:port>/apis/build.openshift.io/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/generic", "curl -X POST -k https://<openshift_api_host:port>/apis/build.openshift.io/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/generic", "git: uri: \"<url to git repository>\" ref: \"<optional git reference>\" commit: \"<commit 
hash identifying a specific git commit>\" author: name: \"<author name>\" email: \"<author e-mail>\" committer: name: \"<committer name>\" email: \"<committer e-mail>\" message: \"<commit message>\" env: 1 - name: \"<variable name>\" value: \"<variable value>\"", "curl -H \"Content-Type: application/yaml\" --data-binary @payload_file.yaml -X POST -k https://<openshift_api_host:port>/apis/build.openshift.io/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/generic", "oc describe bc <name>", "kind: \"ImageStream\" apiVersion: \"v1\" metadata: name: \"ruby-20-centos7\"", "strategy: sourceStrategy: from: kind: \"ImageStreamTag\" name: \"ruby-20-centos7:latest\"", "type: \"ImageChange\" 1 imageChange: {} type: \"ImageChange\" 2 imageChange: from: kind: \"ImageStreamTag\" name: \"custom-image:latest\"", "strategy: sourceStrategy: from: kind: \"DockerImage\" name: \"172.30.17.3:5001/mynamespace/ruby-20-centos7:<immutableid>\"", "type: \"ImageChange\" imageChange: from: kind: \"ImageStreamTag\" name: \"custom-image:latest\" paused: true", "apiVersion: build.openshift.io/v1 kind: BuildConfig metadata: name: bc-ict-example namespace: bc-ict-example-namespace spec: triggers: - imageChange: from: kind: ImageStreamTag name: input:latest namespace: bc-ict-example-namespace - imageChange: from: kind: ImageStreamTag name: input2:latest namespace: bc-ict-example-namespace type: ImageChange status: imageChangeTriggers: - from: name: input:latest namespace: bc-ict-example-namespace lastTriggerTime: \"2021-06-30T13:47:53Z\" lastTriggeredImageID: image-registry.openshift-image-registry.svc:5000/bc-ict-example-namespace/input@sha256:0f88ffbeb9d25525720bfa3524cb1bf0908b7f791057cf1acfae917b11266a69 - from: name: input2:latest namespace: bc-ict-example-namespace lastTriggeredImageID: image-registry.openshift-image-registry.svc:5000/bc-ict-example-namespace/input2@sha256:0f88ffbeb9d25525720bfa3524cb2ce0908b7f791057cf1acfae917b11266a69 lastVersion: 1", "Then you use the `name` and `namespace` from that build to find the corresponding image change trigger in `buildConfig.spec.triggers`.", "type: \"ConfigChange\"", "oc set triggers bc <name> --from-github", "oc set triggers bc <name> --from-image='<image>'", "oc set triggers bc <name> --from-bitbucket --remove", "oc set triggers --help", "postCommit: script: \"bundle exec rake test --verbose\"", "postCommit: command: [\"/bin/bash\", \"-c\", \"bundle exec rake test --verbose\"]", "postCommit: command: [\"bundle\", \"exec\", \"rake\", \"test\"] args: [\"--verbose\"]", "oc set build-hook bc/mybc --post-commit --command -- bundle exec rake test --verbose", "oc set build-hook bc/mybc --post-commit --script=\"bundle exec rake test --verbose\"", "apiVersion: \"v1\" kind: \"BuildConfig\" metadata: name: \"sample-build\" spec: resources: limits: cpu: \"100m\" 1 memory: \"256Mi\" 2", "resources: requests: 1 cpu: \"100m\" memory: \"256Mi\"", "spec: completionDeadlineSeconds: 1800", "apiVersion: \"v1\" kind: \"BuildConfig\" metadata: name: \"sample-build\" spec: nodeSelector: 1 key1: value1 key2: value2", "apiVersion: build.openshift.io/v1 kind: BuildConfig metadata: name: artifact-build spec: output: to: kind: ImageStreamTag name: artifact-image:latest source: git: uri: https://github.com/openshift/openshift-jee-sample.git ref: \"master\" strategy: sourceStrategy: from: kind: ImageStreamTag name: wildfly:10.1 namespace: openshift", "apiVersion: build.openshift.io/v1 kind: BuildConfig metadata: name: image-build spec: output: to: kind: ImageStreamTag name: 
image-build:latest source: dockerfile: |- FROM jee-runtime:latest COPY ROOT.war /deployments/ROOT.war images: - from: 1 kind: ImageStreamTag name: artifact-image:latest paths: 2 - sourcePath: /wildfly/standalone/deployments/ROOT.war destinationDir: \".\" strategy: dockerStrategy: from: 3 kind: ImageStreamTag name: jee-runtime:latest triggers: - imageChange: {} type: ImageChange", "apiVersion: \"v1\" kind: \"BuildConfig\" metadata: name: \"sample-build\" spec: successfulBuildsHistoryLimit: 2 1 failedBuildsHistoryLimit: 2 2", "oc tag --source=docker registry.redhat.io/ubi9/ubi:latest ubi:latest -n openshift", "apiVersion: image.openshift.io/v1 kind: ImageStream metadata: name: ubi namespace: openshift spec: tags: - from: kind: DockerImage name: registry.redhat.io/ubi9/ubi:latest name: latest referencePolicy: type: Source", "oc tag --source=docker registry.redhat.io/ubi9/ubi:latest ubi:latest", "apiVersion: image.openshift.io/v1 kind: ImageStream metadata: name: ubi spec: tags: - from: kind: DockerImage name: registry.redhat.io/ubi9/ubi:latest name: latest referencePolicy: type: Source", "RUN rm /etc/rhsm-host", "strategy: dockerStrategy: from: kind: ImageStreamTag name: ubi:latest volumes: - name: etc-pki-entitlement mounts: - destinationPath: /etc/pki/entitlement source: type: Secret secret: secretName: etc-pki-entitlement", "FROM registry.redhat.io/ubi9/ubi:latest RUN dnf search kernel-devel --showduplicates && dnf install -y kernel-devel", "[test-<name>] name=test-<number> baseurl = https://satellite.../content/dist/rhel/server/7/7Server/x86_64/os enabled=1 gpgcheck=0 sslverify=0 sslclientkey = /etc/pki/entitlement/...-key.pem sslclientcert = /etc/pki/entitlement/....pem", "oc create configmap yum-repos-d --from-file /path/to/satellite.repo", "strategy: dockerStrategy: from: kind: ImageStreamTag name: ubi:latest volumes: - name: yum-repos-d mounts: - destinationPath: /etc/yum.repos.d source: type: ConfigMap configMap: name: yum-repos-d - name: etc-pki-entitlement mounts: - destinationPath: /etc/pki/entitlement source: type: Secret secret: secretName: etc-pki-entitlement", "FROM registry.redhat.io/ubi9/ubi:latest RUN dnf search kernel-devel --showduplicates && dnf install -y kernel-devel", "oc apply -f - <<EOF apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: shared-resource-my-share namespace: my-namespace rules: - apiGroups: - sharedresource.openshift.io resources: - sharedsecrets resourceNames: - my-share verbs: - use EOF", "oc create rolebinding shared-resource-my-share --role=shared-resource-my-share --serviceaccount=my-namespace:builder", "apiVersion: build.openshift.io/v1 kind: BuildConfig metadata: name: my-csi-bc namespace: my-csi-app-namespace spec: runPolicy: Serial source: dockerfile: | FROM registry.redhat.io/ubi9/ubi:latest RUN ls -la /etc/pki/entitlement RUN rm /etc/rhsm-host RUN yum repolist --disablerepo=* RUN subscription-manager repos --enable rhocp-4.9-for-rhel-8-x86_64-rpms RUN yum -y update RUN yum install -y openshift-clients.x86_64 strategy: type: Docker dockerStrategy: volumes: - mounts: - destinationPath: \"/etc/pki/entitlement\" name: my-csi-shared-secret source: csi: driver: csi.sharedresource.openshift.io readOnly: true volumeAttributes: sharedSecret: my-share-bc type: CSI", "oc start-build my-csi-bc -F", "build.build.openshift.io/my-csi-bc-1 started Caching blobs under \"/var/cache/blobs\". 
Pulling image registry.redhat.io/ubi9/ubi:latest Trying to pull registry.redhat.io/ubi9/ubi:latest Getting image source signatures Copying blob sha256:5dcbdc60ea6b60326f98e2b49d6ebcb7771df4b70c6297ddf2d7dede6692df6e Copying blob sha256:8671113e1c57d3106acaef2383f9bbfe1c45a26eacb03ec82786a494e15956c3 Copying config sha256:b81e86a2cb9a001916dc4697d7ed4777a60f757f0b8dcc2c4d8df42f2f7edb3a Writing manifest to image destination Storing signatures Adding transient rw bind mount for /run/secrets/rhsm STEP 1/9: FROM registry.redhat.io/ubi9/ubi:latest STEP 2/9: RUN ls -la /etc/pki/entitlement total 360 drwxrwxrwt. 2 root root 80 Feb 3 20:28 . drwxr-xr-x. 10 root root 154 Jan 27 15:53 .. -rw-r--r--. 1 root root 3243 Feb 3 20:28 entitlement-key.pem -rw-r--r--. 1 root root 362540 Feb 3 20:28 entitlement.pem time=\"2022-02-03T20:28:32Z\" level=warning msg=\"Adding metacopy option, configured globally\" --> 1ef7c6d8c1a STEP 3/9: RUN rm /etc/rhsm-host time=\"2022-02-03T20:28:33Z\" level=warning msg=\"Adding metacopy option, configured globally\" --> b1c61f88b39 STEP 4/9: RUN yum repolist --disablerepo=* Updating Subscription Management repositories. --> b067f1d63eb STEP 5/9: RUN subscription-manager repos --enable rhocp-4.9-for-rhel-8-x86_64-rpms Repository 'rhocp-4.9-for-rhel-8-x86_64-rpms' is enabled for this system. time=\"2022-02-03T20:28:40Z\" level=warning msg=\"Adding metacopy option, configured globally\" --> 03927607ebd STEP 6/9: RUN yum -y update Updating Subscription Management repositories. Upgraded: systemd-239-51.el8_5.3.x86_64 systemd-libs-239-51.el8_5.3.x86_64 systemd-pam-239-51.el8_5.3.x86_64 Installed: diffutils-3.6-6.el8.x86_64 libxkbcommon-0.9.1-1.el8.x86_64 xkeyboard-config-2.28-1.el8.noarch Complete! time=\"2022-02-03T20:29:05Z\" level=warning msg=\"Adding metacopy option, configured globally\" --> db57e92ff63 STEP 7/9: RUN yum install -y openshift-clients.x86_64 Updating Subscription Management repositories. Installed: bash-completion-1:2.7-5.el8.noarch libpkgconf-1.4.2-1.el8.x86_64 openshift-clients-4.9.0-202201211735.p0.g3f16530.assembly.stream.el8.x86_64 pkgconf-1.4.2-1.el8.x86_64 pkgconf-m4-1.4.2-1.el8.noarch pkgconf-pkg-config-1.4.2-1.el8.x86_64 Complete! 
time=\"2022-02-03T20:29:19Z\" level=warning msg=\"Adding metacopy option, configured globally\" --> 609507b059e STEP 8/9: ENV \"OPENSHIFT_BUILD_NAME\"=\"my-csi-bc-1\" \"OPENSHIFT_BUILD_NAMESPACE\"=\"my-csi-app-namespace\" --> cab2da3efc4 STEP 9/9: LABEL \"io.openshift.build.name\"=\"my-csi-bc-1\" \"io.openshift.build.namespace\"=\"my-csi-app-namespace\" COMMIT temp.builder.openshift.io/my-csi-app-namespace/my-csi-bc-1:edfe12ca --> 821b582320b Successfully tagged temp.builder.openshift.io/my-csi-app-namespace/my-csi-bc-1:edfe12ca 821b582320b41f1d7bab4001395133f86fa9cc99cc0b2b64c5a53f2b6750db91 Build complete, no image push requested", "oc annotate clusterrolebinding.rbac system:build-strategy-docker-binding 'rbac.authorization.kubernetes.io/autoupdate=false' --overwrite", "oc adm policy remove-cluster-role-from-group system:build-strategy-docker system:authenticated", "oc get clusterrole admin -o yaml | grep \"builds/docker\"", "oc get clusterrole edit -o yaml | grep \"builds/docker\"", "oc adm policy add-cluster-role-to-user system:build-strategy-docker devuser", "oc adm policy add-role-to-user system:build-strategy-docker devuser -n devproject", "oc edit build.config.openshift.io/cluster", "apiVersion: config.openshift.io/v1 kind: Build 1 metadata: annotations: release.openshift.io/create-only: \"true\" creationTimestamp: \"2019-05-17T13:44:26Z\" generation: 2 name: cluster resourceVersion: \"107233\" selfLink: /apis/config.openshift.io/v1/builds/cluster uid: e2e9cc14-78a9-11e9-b92b-06d6c7da38dc spec: buildDefaults: 2 defaultProxy: 3 httpProxy: http://proxy.com httpsProxy: https://proxy.com noProxy: internal.com env: 4 - name: envkey value: envvalue gitProxy: 5 httpProxy: http://gitproxy.com httpsProxy: https://gitproxy.com noProxy: internalgit.com imageLabels: 6 - name: labelkey value: labelvalue resources: 7 limits: cpu: 100m memory: 50Mi requests: cpu: 10m memory: 10Mi buildOverrides: 8 imageLabels: 9 - name: labelkey value: labelvalue nodeSelector: 10 selectorkey: selectorvalue tolerations: 11 - effect: NoSchedule key: node-role.kubernetes.io/builds operator: Exists", "requested access to the resource is denied", "oc describe quota", "secret/ssl-key references serviceUID 62ad25ca-d703-11e6-9d6f-0e9c0057b608, which does not match 77b6dd80-d716-11e6-9d6f-0e9c0057b60", "oc delete secret <secret_name>", "oc annotate service <service_name> service.beta.openshift.io/serving-cert-generation-error-", "oc annotate service <service_name> service.beta.openshift.io/serving-cert-generation-error-num-", "oc create configmap registry-cas -n openshift-config --from-file=myregistry.corp.com..5000=/etc/docker/certs.d/myregistry.corp.com:5000/ca.crt --from-file=otherregistry.com=/etc/docker/certs.d/otherregistry.com/ca.crt", "oc patch image.config.openshift.io/cluster --patch '{\"spec\":{\"additionalTrustedCA\":{\"name\":\"registry-cas\"}}}' --type=merge" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html-single/builds/index
Chapter 8. AWS Local Zone or Wavelength Zone tasks
Chapter 8. AWS Local Zone or Wavelength Zone tasks After installing OpenShift Container Platform on Amazon Web Services (AWS), you can further configure AWS Local Zones or Wavelength Zones and an edge compute pool. 8.1. Extend existing clusters to use AWS Local Zones or Wavelength Zones As a post-installation task, you can extend an existing OpenShift Container Platform cluster on Amazon Web Services (AWS) to use AWS Local Zones or Wavelength Zones. Extending nodes to Local Zones or Wavelength Zones locations comprises the following steps: Adjusting the cluster-network maximum transmission unit (MTU). Opting in to the Local Zones or Wavelength Zones group. Creating a subnet in the existing VPC for a Local Zones or Wavelength Zones location. Important Before you extend an existing OpenShift Container Platform cluster on AWS to use Local Zones or Wavelength Zones, check that the existing VPC contains available Classless Inter-Domain Routing (CIDR) blocks. These blocks are needed for creating the subnets. Creating the machine set manifest, and then creating a node in each Local Zone or Wavelength Zone location. Local Zones only: Adding the permission ec2:ModifyAvailabilityZoneGroup to the Identity and Access Management (IAM) user or role, so that the required network resources can be created. For example: Example of an additional IAM policy for AWS Local Zones deployments { "Version": "2012-10-17", "Statement": [ { "Action": [ "ec2:ModifyAvailabilityZoneGroup" ], "Effect": "Allow", "Resource": "*" } ] } Wavelength Zone only: Adding the permissions ec2:ModifyAvailabilityZoneGroup , ec2:CreateCarrierGateway , and ec2:DeleteCarrierGateway to the Identity and Access Management (IAM) user or role, so that the required network resources can be created. For example: Example of an additional IAM policy for AWS Wavelength Zones deployments { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "ec2:DeleteCarrierGateway", "ec2:CreateCarrierGateway" ], "Resource": "*" }, { "Action": [ "ec2:ModifyAvailabilityZoneGroup" ], "Effect": "Allow", "Resource": "*" } ] } Additional resources For more information about AWS Local Zones, the supported instance types, and services, see AWS Local Zones features in the AWS documentation. For more information about AWS Wavelength Zones, the supported instance types, and services, see AWS Wavelength features in the AWS documentation. 8.1.1. About edge compute pools Edge compute nodes are tainted compute nodes that run in AWS Local Zones or Wavelength Zones locations. When deploying a cluster that uses Local Zones or Wavelength Zones, consider the following points: Amazon EC2 instances in the Local Zones or Wavelength Zones are more expensive than Amazon EC2 instances in the Availability Zones. The latency is lower between the applications running in AWS Local Zones or Wavelength Zones and the end user. A latency impact exists for some workloads if, for example, ingress traffic is mixed between Local Zones or Wavelength Zones and Availability Zones. Important Generally, the maximum transmission unit (MTU) between an Amazon EC2 instance in a Local Zone or Wavelength Zone and an Amazon EC2 instance in the Region is 1300. The cluster network MTU must always be less than the EC2 MTU to account for the overhead. The specific overhead is determined by the network plugin. For example: OVN-Kubernetes has an overhead of 100 bytes .
The network plugin can provide additional features, such as IPsec, that also affect the MTU sizing. You can access the following resources to learn more about a respective zone type: See How Local Zones work in the AWS documentation. See How AWS Wavelength work in the AWS documentation. OpenShift Container Platform 4.12 introduced a new compute pool, edge , that is designed for use in remote zones. The edge compute pool configuration is common between AWS Local Zones or Wavelength Zones locations. Because of the type and size limitations of resources like EC2 and EBS on Local Zones or Wavelength Zones resources, the default instance type can vary from the traditional compute pool. The default Elastic Block Store (EBS) for Local Zones or Wavelength Zones locations is gp2 , which differs from the non-edge compute pool. The instance type used for each Local Zones or Wavelength Zones on an edge compute pool also might differ from other compute pools, depending on the instance offerings on the zone. The edge compute pool creates new labels that developers can use to deploy applications onto AWS Local Zones or Wavelength Zones nodes. The new labels are: node-role.kubernetes.io/edge='' Local Zones only: machine.openshift.io/zone-type=local-zone Wavelength Zones only: machine.openshift.io/zone-type=wavelength-zone machine.openshift.io/zone-group=USDZONE_GROUP_NAME By default, the machine sets for the edge compute pool define the taint of NoSchedule to prevent other workloads from spreading on Local Zones or Wavelength Zones instances. Users can only run user workloads if they define tolerations in the pod specification. 8.2. Changing the cluster network MTU to support Local Zones or Wavelength Zones You might need to change the maximum transmission unit (MTU) value for the cluster network so that your cluster infrastructure can support Local Zones or Wavelength Zones subnets. 8.2.1. About the cluster MTU During installation the maximum transmission unit (MTU) for the cluster network is detected automatically based on the MTU of the primary network interface of nodes in the cluster. You do not usually need to override the detected MTU. You might want to change the MTU of the cluster network for several reasons: The MTU detected during cluster installation is not correct for your infrastructure. Your cluster infrastructure now requires a different MTU, such as from the addition of nodes that need a different MTU for optimal performance. Only the OVN-Kubernetes cluster network plugin supports changing the MTU value. 8.2.1.1. Service interruption considerations When you initiate an MTU change on your cluster the following effects might impact service availability: At least two rolling reboots are required to complete the migration to a new MTU. During this time, some nodes are not available as they restart. Specific applications deployed to the cluster with shorter timeout intervals than the absolute TCP timeout interval might experience disruption during the MTU change. 8.2.1.2. MTU value selection When planning your MTU migration there are two related but distinct MTU values to consider. Hardware MTU : This MTU value is set based on the specifics of your network infrastructure. Cluster network MTU : This MTU value is always less than your hardware MTU to account for the cluster network overlay overhead. The specific overhead is determined by your network plugin. For OVN-Kubernetes, the overhead is 100 bytes. 
If your cluster requires different MTU values for different nodes, you must subtract the overhead value for your network plugin from the lowest MTU value that is used by any node in your cluster. For example, if some nodes in your cluster have an MTU of 9001 , and some have an MTU of 1500 , you must set this value to 1400 . Important To avoid selecting an MTU value that is not acceptable by a node, verify the maximum MTU value ( maxmtu ) that is accepted by the network interface by using the ip -d link command. 8.2.1.3. How the migration process works The following table summarizes the migration process by segmenting between the user-initiated steps in the process and the actions that the migration performs in response. Table 8.1. Live migration of the cluster MTU User-initiated steps OpenShift Container Platform activity Set the following values in the Cluster Network Operator configuration: spec.migration.mtu.machine.to spec.migration.mtu.network.from spec.migration.mtu.network.to Cluster Network Operator (CNO) : Confirms that each field is set to a valid value. The mtu.machine.to must be set to either the new hardware MTU or to the current hardware MTU if the MTU for the hardware is not changing. This value is transient and is used as part of the migration process. Separately, if you specify a hardware MTU that is different from your existing hardware MTU value, you must manually configure the MTU to persist by other means, such as with a machine config, DHCP setting, or a Linux kernel command line. The mtu.network.from field must equal the network.status.clusterNetworkMTU field, which is the current MTU of the cluster network. The mtu.network.to field must be set to the target cluster network MTU and must be lower than the hardware MTU to allow for the overlay overhead of the network plugin. For OVN-Kubernetes, the overhead is 100 bytes. If the values provided are valid, the CNO writes out a new temporary configuration with the MTU for the cluster network set to the value of the mtu.network.to field. Machine Config Operator (MCO) : Performs a rolling reboot of each node in the cluster. Reconfigure the MTU of the primary network interface for the nodes on the cluster. You can use a variety of methods to accomplish this, including: Deploying a new NetworkManager connection profile with the MTU change Changing the MTU through a DHCP server setting Changing the MTU through boot parameters N/A Set the mtu value in the CNO configuration for the network plugin and set spec.migration to null . Machine Config Operator (MCO) : Performs a rolling reboot of each node in the cluster with the new MTU configuration. 8.2.1.4. Changing the cluster network MTU As a cluster administrator, you can increase or decrease the maximum transmission unit (MTU) for your cluster. Important The migration is disruptive and nodes in your cluster might be temporarily unavailable as the MTU update takes effect. Prerequisites You have installed the OpenShift CLI ( oc ). You have access to the cluster using an account with cluster-admin permissions. You have identified the target MTU for your cluster. The MTU for the OVN-Kubernetes network plugin must be set to 100 less than the lowest hardware MTU value in your cluster. Procedure To obtain the current MTU for the cluster network, enter the following command: USD oc describe network.config cluster Example output ... Status: Cluster Network: Cidr: 10.217.0.0/22 Host Prefix: 23 Cluster Network MTU: 1400 Network Type: OVNKubernetes Service Network: 10.217.4.0/23 ... 
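Before you begin the migration, you might want to confirm the hardware MTU that the primary interface on your nodes accepts and derive the overlay value from it. The following is a minimal sketch only, not part of the documented procedure; the node name and the interface name ens5 are placeholder assumptions, and the 100-byte subtraction corresponds to the OVN-Kubernetes overhead described above.
# Inspect the primary interface on one node and look for the maxmtu value.
$ oc debug node/<node_name> -- chroot /host ip -d link show ens5

# Derive a cluster network MTU from the hardware MTU you plan to use,
# leaving room for the 100-byte OVN-Kubernetes overlay overhead.
$ HARDWARE_MTU=9100
$ echo "target cluster network MTU: $((HARDWARE_MTU - 100))"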
To begin the MTU migration, specify the migration configuration by entering the following command. The Machine Config Operator performs a rolling reboot of the nodes in the cluster in preparation for the MTU change. USD oc patch Network.operator.openshift.io cluster --type=merge --patch \ '{"spec": { "migration": { "mtu": { "network": { "from": <overlay_from>, "to": <overlay_to> } , "machine": { "to" : <machine_to> } } } } }' where: <overlay_from> Specifies the current cluster network MTU value. <overlay_to> Specifies the target MTU for the cluster network. This value is set relative to the value of <machine_to> . For OVN-Kubernetes, this value must be 100 less than the value of <machine_to> . <machine_to> Specifies the MTU for the primary network interface on the underlying host network. Example that increases the cluster MTU USD oc patch Network.operator.openshift.io cluster --type=merge --patch \ '{"spec": { "migration": { "mtu": { "network": { "from": 1400, "to": 9000 } , "machine": { "to" : 9100} } } } }' As the Machine Config Operator updates machines in each machine config pool, it reboots each node one by one. You must wait until all the nodes are updated. Check the machine config pool status by entering the following command: USD oc get machineconfigpools A successfully updated node has the following status: UPDATED=true , UPDATING=false , DEGRADED=false . Note By default, the Machine Config Operator updates one machine per pool at a time, causing the total time the migration takes to increase with the size of the cluster. Confirm the status of the new machine configuration on the hosts: To list the machine configuration state and the name of the applied machine configuration, enter the following command: USD oc describe node | egrep "hostname|machineconfig" Example output kubernetes.io/hostname=master-0 machineconfiguration.openshift.io/currentConfig: rendered-master-c53e221d9d24e1c8bb6ee89dd3d8ad7b machineconfiguration.openshift.io/desiredConfig: rendered-master-c53e221d9d24e1c8bb6ee89dd3d8ad7b machineconfiguration.openshift.io/reason: machineconfiguration.openshift.io/state: Done Verify that the following statements are true: The value of machineconfiguration.openshift.io/state field is Done . The value of the machineconfiguration.openshift.io/currentConfig field is equal to the value of the machineconfiguration.openshift.io/desiredConfig field. To confirm that the machine config is correct, enter the following command: USD oc get machineconfig <config_name> -o yaml | grep ExecStart where <config_name> is the name of the machine config from the machineconfiguration.openshift.io/currentConfig field. The machine config must include the following update to the systemd configuration: ExecStart=/usr/local/bin/mtu-migration.sh To finalize the MTU migration, enter the following command for the OVN-Kubernetes network plugin: USD oc patch Network.operator.openshift.io cluster --type=merge --patch \ '{"spec": { "migration": null, "defaultNetwork":{ "ovnKubernetesConfig": { "mtu": <mtu> }}}}' where: <mtu> Specifies the new cluster network MTU that you specified with <overlay_to> . After finalizing the MTU migration, each machine config pool node is rebooted one by one. You must wait until all the nodes are updated. Check the machine config pool status by entering the following command: USD oc get machineconfigpools A successfully updated node has the following status: UPDATED=true , UPDATING=false , DEGRADED=false . 
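Instead of repeatedly running oc get machineconfigpools, you can wait for the rollout to settle before moving on to verification. This is a convenience sketch rather than part of the documented procedure; the timeout values are assumptions and might need to be increased for large clusters.
# Block until every machine config pool reports the updated state.
$ oc wait machineconfigpool --all --for=condition=Updated=True --timeout=60m

# Confirm that no pool is degraded after the rollout.
$ oc wait machineconfigpool --all --for=condition=Degraded=False --timeout=5m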
Verification Verify that the node in your cluster uses the MTU that you specified by entering the following command: USD oc describe network.config cluster 8.2.2. Opting in to AWS Local Zones or Wavelength Zones If you plan to create subnets in AWS Local Zones or Wavelength Zones, you must opt in to each zone group separately. Prerequisites You have installed the AWS CLI. You have determined an AWS Region for where you want to deploy your OpenShift Container Platform cluster. You have attached a permissive IAM policy to a user or role account that opts in to the zone group. Procedure List the zones that are available in your AWS Region by running the following command: Example command for listing available AWS Local Zones in an AWS Region USD aws --region "<value_of_AWS_Region>" ec2 describe-availability-zones \ --query 'AvailabilityZones[].[{ZoneName: ZoneName, GroupName: GroupName, Status: OptInStatus}]' \ --filters Name=zone-type,Values=local-zone \ --all-availability-zones Example command for listing available AWS Wavelength Zones in an AWS Region USD aws --region "<value_of_AWS_Region>" ec2 describe-availability-zones \ --query 'AvailabilityZones[].[{ZoneName: ZoneName, GroupName: GroupName, Status: OptInStatus}]' \ --filters Name=zone-type,Values=wavelength-zone \ --all-availability-zones Depending on the AWS Region, the list of available zones might be long. The command returns the following fields: ZoneName The name of the Local Zones or Wavelength Zones. GroupName The group that comprises the zone. To opt in to the Region, save the name. Status The status of the Local Zones or Wavelength Zones group. If the status is not-opted-in , you must opt in the GroupName as described in the step. Opt in to the zone group on your AWS account by running the following command: USD aws ec2 modify-availability-zone-group \ --group-name "<value_of_GroupName>" \ 1 --opt-in-status opted-in 1 Replace <value_of_GroupName> with the name of the group of the Local Zones or Wavelength Zones where you want to create subnets. 8.2.3. Create network requirements in an existing VPC that uses AWS Local Zones or Wavelength Zones If you want a Machine API to create an Amazon EC2 instance in a remote zone location, you must create a subnet in a Local Zones or Wavelength Zones location. You can use any provisioning tool, such as Ansible or Terraform, to create subnets in the existing Virtual Private Cloud (VPC). You can configure the CloudFormation template to meet your requirements. The following subsections include steps that use CloudFormation templates to create the network requirements that extend an existing VPC to use an AWS Local Zones or Wavelength Zones. Extending nodes to Local Zones requires that you create the following resources: 2 VPC Subnets: public and private. The public subnet associates to the public route table for the regular Availability Zones in the Region. The private subnet associates to the provided route table ID. Extending nodes to Wavelength Zones requires that you create the following resources: 1 VPC Carrier Gateway associated to the provided VPC ID. 1 VPC Route Table for Wavelength Zones with a default route entry to VPC Carrier Gateway. 2 VPC Subnets: public and private. The public subnet associates to the public route table for an AWS Wavelength Zone. The private subnet associates to the provided route table ID. 
Important Considering the limitation of NAT Gateways in Wavelength Zones, the provided CloudFormation templates support only associating the private subnets with the provided route table ID. A route table ID is attached to a valid NAT Gateway in the AWS Region. 8.2.4. Wavelength Zones only: Creating a VPC carrier gateway To use public subnets in your OpenShift Container Platform cluster that runs on Wavelength Zones, you must create the carrier gateway and associate the carrier gateway to the VPC. Subnets are useful for deploying load balancers or edge compute nodes. To create edge nodes or internet-facing load balancers in Wavelength Zones locations for your OpenShift Container Platform cluster, you must create the following required network components: A carrier gateway that associates to the existing VPC. A carrier route table that lists route entries. A subnet that associates to the carrier route table. Carrier gateways exist for VPCs that only contain subnets in a Wavelength Zone. The following list explains the functions of a carrier gateway in the context of an AWS Wavelength Zones location: Provides connectivity between your Wavelength Zone and the carrier network, which includes any available devices from the carrier network. Performs Network Address Translation (NAT) functions, such as translating IP addresses that are public IP addresses stored in a network border group, from Wavelength Zones to carrier IP addresses. These translation functions apply to inbound and outbound traffic. Authorizes inbound traffic from a carrier network that is located in a specific location. Authorizes outbound traffic to a carrier network and the internet. Note No inbound connection configuration exists from the internet to a Wavelength Zone through the carrier gateway. You can use the provided CloudFormation template to create a stack of the following AWS resources: One carrier gateway that associates to the VPC ID in the template. One public route table for the Wavelength Zone named as <ClusterName>-public-carrier . Default IPv4 route entry in the new route table that targets the carrier gateway. VPC gateway endpoint for an AWS Simple Storage Service (S3). Note If you do not use the provided CloudFormation template to create your AWS infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites You configured an AWS account. You added your AWS keys and region to your local AWS profile by running aws configure . Procedure Go to the section of the documentation named "CloudFormation template for the VPC Carrier Gateway", and then copy the syntax from the CloudFormation template for VPC Carrier Gateway template. Save the copied template syntax as a YAML file on your local system. This template describes the VPC that your cluster requires. Run the following command to deploy the CloudFormation template, which creates a stack of AWS resources that represent the VPC: USD aws cloudformation create-stack --stack-name <stack_name> \ 1 --region USD{CLUSTER_REGION} \ --template-body file://<template>.yaml \ 2 --parameters \// ParameterKey=VpcId,ParameterValue="USD{VpcId}" \ 3 ParameterKey=ClusterName,ParameterValue="USD{ClusterName}" 4 1 <stack_name> is the name for the CloudFormation stack, such as clusterName-vpc-carrier-gw . You need the name of this stack if you remove the cluster. 
2 <template> is the relative path and the name of the CloudFormation template YAML file that you saved. 3 <VpcId> is the VPC ID extracted from the CloudFormation stack output created in the section named "Creating a VPC in AWS". 4 <ClusterName> is a custom value that prefixes to resources that the CloudFormation stack creates. You can use the same name that is defined in the metadata.name section of the install-config.yaml configuration file. Example output arn:aws:cloudformation:us-east-1:123456789012:stack/<stack_name>/dbedae40-2fd3-11eb-820e-12a48460849f Verification Confirm that the CloudFormation template components exist by running the following command: USD aws cloudformation describe-stacks --stack-name <stack_name> After the StackStatus displays CREATE_COMPLETE , the output displays values for the following parameter. Ensure that you provide the parameter value to the other CloudFormation templates that you run to create for your cluster. PublicRouteTableId The ID of the Route Table in the Carrier infrastructure. 8.2.5. Wavelength Zones only: CloudFormation template for the VPC Carrier Gateway You can use the following CloudFormation template to deploy the Carrier Gateway on AWS Wavelength infrastructure. Example 8.1. CloudFormation template for VPC Carrier Gateway AWSTemplateFormatVersion: 2010-09-09 Description: Template for Creating Wavelength Zone Gateway (Carrier Gateway). Parameters: VpcId: Description: VPC ID to associate the Carrier Gateway. Type: String AllowedPattern: ^(?:(?:vpc)(?:-[a-zA-Z0-9]+)?\b|(?:[0-9]{1,3}\.){3}[0-9]{1,3})USD ConstraintDescription: VPC ID must be with valid name, starting with vpc-.*. ClusterName: Description: Cluster Name or Prefix name to prepend the tag Name for each subnet. Type: String AllowedPattern: ".+" ConstraintDescription: ClusterName parameter must be specified. Resources: CarrierGateway: Type: "AWS::EC2::CarrierGateway" Properties: VpcId: !Ref VpcId Tags: - Key: Name Value: !Join ['-', [!Ref ClusterName, "cagw"]] PublicRouteTable: Type: "AWS::EC2::RouteTable" Properties: VpcId: !Ref VpcId Tags: - Key: Name Value: !Join ['-', [!Ref ClusterName, "public-carrier"]] PublicRoute: Type: "AWS::EC2::Route" DependsOn: CarrierGateway Properties: RouteTableId: !Ref PublicRouteTable DestinationCidrBlock: 0.0.0.0/0 CarrierGatewayId: !Ref CarrierGateway S3Endpoint: Type: AWS::EC2::VPCEndpoint Properties: PolicyDocument: Version: 2012-10-17 Statement: - Effect: Allow Principal: '*' Action: - '*' Resource: - '*' RouteTableIds: - !Ref PublicRouteTable ServiceName: !Join - '' - - com.amazonaws. - !Ref 'AWS::Region' - .s3 VpcId: !Ref VpcId Outputs: PublicRouteTableId: Description: Public Route table ID Value: !Ref PublicRouteTable 8.2.6. Creating subnets for AWS edge compute services Before you configure a machine set for edge compute nodes in your OpenShift Container Platform cluster, you must create a subnet in Local Zones or Wavelength Zones. Complete the following procedure for each Wavelength Zone that you want to deploy compute nodes to. You can use the provided CloudFormation template and create a CloudFormation stack. You can then use this stack to custom provision a subnet. Note If you do not use the provided CloudFormation template to create your AWS infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites You configured an AWS account. 
You added your AWS keys and region to your local AWS profile by running aws configure . You opted in to the Local Zones or Wavelength Zones group. Procedure Go to the section of the documentation named "CloudFormation template for the VPC subnet", and copy the syntax from the template. Save the copied template syntax as a YAML file on your local system. This template describes the VPC that your cluster requires. Run the following command to deploy the CloudFormation template, which creates a stack of AWS resources that represent the VPC: USD aws cloudformation create-stack --stack-name <stack_name> \ 1 --region USD{CLUSTER_REGION} \ --template-body file://<template>.yaml \ 2 --parameters \ ParameterKey=VpcId,ParameterValue="USD{VPC_ID}" \ 3 ParameterKey=ClusterName,ParameterValue="USD{CLUSTER_NAME}" \ 4 ParameterKey=ZoneName,ParameterValue="USD{ZONE_NAME}" \ 5 ParameterKey=PublicRouteTableId,ParameterValue="USD{ROUTE_TABLE_PUB}" \ 6 ParameterKey=PublicSubnetCidr,ParameterValue="USD{SUBNET_CIDR_PUB}" \ 7 ParameterKey=PrivateRouteTableId,ParameterValue="USD{ROUTE_TABLE_PVT}" \ 8 ParameterKey=PrivateSubnetCidr,ParameterValue="USD{SUBNET_CIDR_PVT}" 9 1 <stack_name> is the name for the CloudFormation stack, such as cluster-wl-<local_zone_shortname> for Local Zones and cluster-wl-<wavelength_zone_shortname> for Wavelength Zones. You need the name of this stack if you remove the cluster. 2 <template> is the relative path and the name of the CloudFormation template YAML file that you saved. 3 USD{VPC_ID} is the VPC ID, which is the value VpcID in the output of the CloudFormation template for the VPC. 4 USD{CLUSTER_NAME} is the value of ClusterName to be used as a prefix of the new AWS resource names. 5 USD{ZONE_NAME} is the value of Local Zones or Wavelength Zones name to create the subnets. 6 USD{ROUTE_TABLE_PUB} is the Public Route Table Id extracted from the CloudFormation template. For Local Zones, the public route table is extracted from the VPC CloudFormation Stack. For Wavelength Zones, the value must be extracted from the output of the VPC's carrier gateway CloudFormation stack. 7 USD{SUBNET_CIDR_PUB} is a valid CIDR block that is used to create the public subnet. This block must be part of the VPC CIDR block VpcCidr . 8 USD{ROUTE_TABLE_PVT} is the PrivateRouteTableId extracted from the output of the VPC's CloudFormation stack. 9 USD{SUBNET_CIDR_PVT} is a valid CIDR block that is used to create the private subnet. This block must be part of the VPC CIDR block VpcCidr . Example output arn:aws:cloudformation:us-east-1:123456789012:stack/<stack_name>/dbedae40-820e-11eb-2fd3-12a48460849f Verification Confirm that the template components exist by running the following command: USD aws cloudformation describe-stacks --stack-name <stack_name> After the StackStatus displays CREATE_COMPLETE , the output displays values for the following parameters: PublicSubnetId The IDs of the public subnet created by the CloudFormation stack. PrivateSubnetId The IDs of the private subnet created by the CloudFormation stack. Ensure that you provide these parameter values to the other CloudFormation templates that you run to create for your cluster. 8.2.7. CloudFormation template for the VPC subnet You can use the following CloudFormation template to deploy the private and public subnets in a zone on Local Zones or Wavelength Zones infrastructure. Example 8.2. 
CloudFormation template for VPC subnets AWSTemplateFormatVersion: 2010-09-09 Description: Template for Best Practice Subnets (Public and Private) Parameters: VpcId: Description: VPC ID that comprises all the target subnets. Type: String AllowedPattern: ^(?:(?:vpc)(?:-[a-zA-Z0-9]+)?\b|(?:[0-9]{1,3}\.){3}[0-9]{1,3})USD ConstraintDescription: VPC ID must be with valid name, starting with vpc-.*. ClusterName: Description: Cluster name or prefix name to prepend the Name tag for each subnet. Type: String AllowedPattern: ".+" ConstraintDescription: ClusterName parameter must be specified. ZoneName: Description: Zone Name to create the subnets, such as us-west-2-lax-1a. Type: String AllowedPattern: ".+" ConstraintDescription: ZoneName parameter must be specified. PublicRouteTableId: Description: Public Route Table ID to associate the public subnet. Type: String AllowedPattern: ".+" ConstraintDescription: PublicRouteTableId parameter must be specified. PublicSubnetCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\/(1[6-9]|2[0-4]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24. Default: 10.0.128.0/20 Description: CIDR block for public subnet. Type: String PrivateRouteTableId: Description: Private Route Table ID to associate the private subnet. Type: String AllowedPattern: ".+" ConstraintDescription: PrivateRouteTableId parameter must be specified. PrivateSubnetCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\/(1[6-9]|2[0-4]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24. Default: 10.0.128.0/20 Description: CIDR block for private subnet. Type: String Resources: PublicSubnet: Type: "AWS::EC2::Subnet" Properties: VpcId: !Ref VpcId CidrBlock: !Ref PublicSubnetCidr AvailabilityZone: !Ref ZoneName Tags: - Key: Name Value: !Join ['-', [!Ref ClusterName, "public", !Ref ZoneName]] PublicSubnetRouteTableAssociation: Type: "AWS::EC2::SubnetRouteTableAssociation" Properties: SubnetId: !Ref PublicSubnet RouteTableId: !Ref PublicRouteTableId PrivateSubnet: Type: "AWS::EC2::Subnet" Properties: VpcId: !Ref VpcId CidrBlock: !Ref PrivateSubnetCidr AvailabilityZone: !Ref ZoneName Tags: - Key: Name Value: !Join ['-', [!Ref ClusterName, "private", !Ref ZoneName]] PrivateSubnetRouteTableAssociation: Type: "AWS::EC2::SubnetRouteTableAssociation" Properties: SubnetId: !Ref PrivateSubnet RouteTableId: !Ref PrivateRouteTableId Outputs: PublicSubnetId: Description: Subnet ID of the public subnets. Value: !Join ["", [!Ref PublicSubnet]] PrivateSubnetId: Description: Subnet ID of the private subnets. Value: !Join ["", [!Ref PrivateSubnet]] 8.2.8. Creating a machine set manifest for an AWS Local Zones or Wavelength Zones node After you create subnets in AWS Local Zones or Wavelength Zones, you can create a machine set manifest. The installation program sets the following labels for the edge machine pools at cluster installation time: machine.openshift.io/parent-zone-name: <value_of_ParentZoneName> machine.openshift.io/zone-group: <value_of_ZoneGroup> machine.openshift.io/zone-type: <value_of_ZoneType> The following procedure details how you can create a machine set configuraton that matches the edge compute pool configuration. Prerequisites You have created subnets in AWS Local Zones or Wavelength Zones. 
Procedure Manually preserve edge machine pool labels when creating the machine set manifest by gathering the AWS API. To complete this action, enter the following command in your command-line interface (CLI): USD aws ec2 describe-availability-zones --region <value_of_Region> \ 1 --query 'AvailabilityZones[].{ ZoneName: ZoneName, ParentZoneName: ParentZoneName, GroupName: GroupName, ZoneType: ZoneType}' \ --filters Name=zone-name,Values=<value_of_ZoneName> \ 2 --all-availability-zones 1 For <value_of_Region> , specify the name of the region for the zone. 2 For <value_of_ZoneName> , specify the name of the Local Zones or Wavelength Zones. Example output for Local Zone us-east-1-nyc-1a [ { "ZoneName": "us-east-1-nyc-1a", "ParentZoneName": "us-east-1f", "GroupName": "us-east-1-nyc-1", "ZoneType": "local-zone" } ] Example output for Wavelength Zone us-east-1-wl1 [ { "ZoneName": "us-east-1-wl1-bos-wlz-1", "ParentZoneName": "us-east-1a", "GroupName": "us-east-1-wl1", "ZoneType": "wavelength-zone" } ] 8.2.8.1. Sample YAML for a compute machine set custom resource on AWS This sample YAML defines a compute machine set that runs in the us-east-1-nyc-1a Amazon Web Services (AWS) zone and creates nodes that are labeled with node-role.kubernetes.io/edge: "" . Note If you want to reference the sample YAML file in the context of Wavelength Zones, ensure that you replace the AWS Region and zone information with supported Wavelength Zone values. In this sample, <infrastructure_id> is the infrastructure ID label that is based on the cluster ID that you set when you provisioned the cluster, and <edge> is the node label to add. apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-edge-<zone> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-edge-<zone> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: edge 3 machine.openshift.io/cluster-api-machine-type: edge machine.openshift.io/cluster-api-machineset: <infrastructure_id>-edge-<zone> spec: metadata: labels: machine.openshift.io/parent-zone-name: <value_of_ParentZoneName> machine.openshift.io/zone-group: <value_of_GroupName> machine.openshift.io/zone-type: <value_of_ZoneType> node-role.kubernetes.io/edge: "" providerSpec: value: ami: id: ami-046fe691f52a953f9 4 apiVersion: machine.openshift.io/v1beta1 blockDevices: - ebs: iops: 0 volumeSize: 120 volumeType: gp2 credentialsSecret: name: aws-cloud-credentials deviceIndex: 0 iamInstanceProfile: id: <infrastructure_id>-worker-profile instanceType: m6i.large kind: AWSMachineProviderConfig placement: availabilityZone: <zone> 5 region: <region> 6 securityGroups: - filters: - name: tag:Name values: - <infrastructure_id>-node - filters: - name: tag:Name values: - <infrastructure_id>-lb subnet: id: <value_of_PublicSubnetIds> 7 publicIp: true tags: - name: kubernetes.io/cluster/<infrastructure_id> value: owned - name: <custom_tag_name> 8 value: <custom_tag_value> userDataSecret: name: worker-user-data taints: 9 - key: node-role.kubernetes.io/edge effect: NoSchedule 1 Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. 
If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command: USD oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster 2 Specify the infrastructure ID, edge role node label, and zone name. 3 Specify the edge role node label. 4 Specify a valid Red Hat Enterprise Linux CoreOS (RHCOS) Amazon Machine Image (AMI) for your AWS zone for your OpenShift Container Platform nodes. If you want to use an AWS Marketplace image, you must complete the OpenShift Container Platform subscription from the AWS Marketplace to obtain an AMI ID for your region. USD oc -n openshift-machine-api \ -o jsonpath='{.spec.template.spec.providerSpec.value.ami.id}{"\n"}' \ get machineset/<infrastructure_id>-<role>-<zone> 5 Specify the zone name, for example, us-east-1-nyc-1a . 6 Specify the region, for example, us-east-1 . 7 The ID of the public subnet that you created in AWS Local Zones or Wavelength Zones. You created this public subnet ID when you finished the procedure for "Creating a subnet in an AWS zone". 8 Optional: Specify custom tag data for your cluster. For example, you might add an admin contact email address by specifying a name:value pair of Email:[email protected] . Note Custom tags can also be specified during installation in the install-config.yml file. If the install-config.yml file and the machine set include a tag with the same name data, the value for the tag from the machine set takes priority over the value for the tag in the install-config.yml file. 9 Specify a taint to prevent user workloads from being scheduled on edge nodes. Note After adding the NoSchedule taint on the infrastructure node, existing DNS pods running on that node are marked as misscheduled . You must either delete or add toleration on misscheduled DNS pods . 8.2.8.2. Creating a compute machine set In addition to the compute machine sets created by the installation program, you can create your own to dynamically manage the machine compute resources for specific workloads of your choice. Prerequisites Deploy an OpenShift Container Platform cluster. Install the OpenShift CLI ( oc ). Log in to oc as a user with cluster-admin permission. Procedure Create a new YAML file that contains the compute machine set custom resource (CR) sample and is named <file_name>.yaml . Ensure that you set the <clusterID> and <role> parameter values. Optional: If you are not sure which value to set for a specific field, you can check an existing compute machine set from your cluster. 
To list the compute machine sets in your cluster, run the following command: USD oc get machinesets -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m To view values of a specific compute machine set custom resource (CR), run the following command: USD oc get machineset <machineset_name> \ -n openshift-machine-api -o yaml Example output apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3 ... 1 The cluster infrastructure ID. 2 A default node label. Note For clusters that have user-provisioned infrastructure, a compute machine set can only create worker and infra type machines. 3 The values in the <providerSpec> section of the compute machine set CR are platform-specific. For more information about <providerSpec> parameters in the CR, see the sample compute machine set CR configuration for your provider. Create a MachineSet CR by running the following command: USD oc create -f <file_name>.yaml Verification View the list of compute machine sets by running the following command: USD oc get machineset -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-edge-us-east-1-nyc-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m When the new compute machine set is available, the DESIRED and CURRENT values match. If the compute machine set is not available, wait a few minutes and run the command again. Optional: To check nodes that were created by the edge machine, run the following command: USD oc get nodes -l node-role.kubernetes.io/edge Example output NAME STATUS ROLES AGE VERSION ip-10-0-207-188.ec2.internal Ready edge,worker 172m v1.25.2+d2e245f Additional resources Installing a cluster on AWS with compute nodes on AWS Local Zones Installing a cluster on AWS with compute nodes on AWS Wavelength Zones 8.3. Creating user workloads in AWS Local Zones or Wavelength Zones After you create an Amazon Web Service (AWS) Local Zones or Wavelength Zones infrastructure and deploy your cluster, you can use edge compute nodes to create user workloads in Local Zones or Wavelength Zones subnets. When you use the installation program to create a cluster, the installation program automatically specifies a taint effect of NoSchedule to each edge compute node. This means that a scheduler does not add a new pod, or deployment, to a node if the pod does not match the specified tolerations for a taint. 
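To see the taint that the installation program applied, you can query the edge nodes directly. The following is a minimal sketch that assumes the oc CLI is logged in to the cluster and uses the node-role.kubernetes.io/edge label described in this section:

$ oc get nodes -l node-role.kubernetes.io/edge \
    -o custom-columns=NAME:.metadata.name,TAINTS:.spec.taints

Each edge node should report a taint with the node-role.kubernetes.io/edge key and the NoSchedule effect.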
You can modify the taint for better control over how nodes create workloads in each Local Zones or Wavelength Zones subnet. The installation program creates the compute machine set manifests file with node-role.kubernetes.io/edge and node-role.kubernetes.io/worker labels applied to each edge compute node that is located in a Local Zones or Wavelength Zones subnet. Note The examples in the procedure are for a Local Zones infrastructure. If you are working with a Wavelength Zones infrastructure, ensure you adapt the examples to what is supported in this infrastructure. Prerequisites You have access to the OpenShift CLI ( oc ). You deployed your cluster in a Virtual Private Cloud (VPC) with defined Local Zones or Wavelength Zones subnets. You ensured that the compute machine set for the edge compute nodes on Local Zones or Wavelength Zones subnets specifies the taints for node-role.kubernetes.io/edge . Procedure Create a deployment resource YAML file for an example application to be deployed in the edge compute node that operates in a Local Zones subnet. Ensure that you specify the correct tolerations that match the taints for the edge compute node. Example of a configured deployment resource for an edge compute node that operates in a Local Zone subnet kind: Namespace apiVersion: v1 metadata: name: <local_zone_application_namespace> --- kind: PersistentVolumeClaim apiVersion: v1 metadata: name: <pvc_name> namespace: <local_zone_application_namespace> spec: accessModes: - ReadWriteOnce resources: requests: storage: 10Gi storageClassName: gp2-csi 1 volumeMode: Filesystem --- apiVersion: apps/v1 kind: Deployment 2 metadata: name: <local_zone_application> 3 namespace: <local_zone_application_namespace> 4 spec: selector: matchLabels: app: <local_zone_application> replicas: 1 template: metadata: labels: app: <local_zone_application> zone-group: USD{ZONE_GROUP_NAME} 5 spec: securityContext: seccompProfile: type: RuntimeDefault nodeSelector: 6 machine.openshift.io/zone-group: USD{ZONE_GROUP_NAME} tolerations: 7 - key: "node-role.kubernetes.io/edge" operator: "Equal" value: "" effect: "NoSchedule" containers: - image: openshift/origin-node command: - "/bin/socat" args: - TCP4-LISTEN:8080,reuseaddr,fork - EXEC:'/bin/bash -c \"printf \\\"HTTP/1.0 200 OK\r\n\r\n\\\"; sed -e \\\"/^\r/q\\\"\"' imagePullPolicy: Always name: echoserver ports: - containerPort: 8080 volumeMounts: - mountPath: "/mnt/storage" name: data volumes: - name: data persistentVolumeClaim: claimName: <pvc_name> 1 storageClassName : For the Local Zone configuration, you must specify gp2-csi . 2 kind : Defines the deployment resource. 3 name : Specifies the name of your Local Zone application. For example, local-zone-demo-app-nyc-1 . 4 namespace: Defines the namespace for the AWS Local Zone where you want to run the user workload. For example: local-zone-app-nyc-1a . 5 zone-group : Defines the group to where a zone belongs. For example, us-east-1-iah-1 . 6 nodeSelector : Targets edge compute nodes that match the specified labels. 7 tolerations : Sets the values that match with the taints defined on the MachineSet manifest for the Local Zone node. Create a service resource YAML file for the node. This resource exposes a pod from a targeted edge compute node to services that run inside your Local Zone network. 
Example of a configured service resource for an edge compute node that operates in a Local Zone subnet apiVersion: v1 kind: Service 1 metadata: name: <local_zone_application> namespace: <local_zone_application_namespace> spec: ports: - port: 80 targetPort: 8080 protocol: TCP type: NodePort selector: 2 app: <local_zone_application> 1 kind : Defines the service resource. 2 selector: Specifies the label type applied to managed pods. Additional resources Installing a cluster on AWS with compute nodes on AWS Local Zones Installing a cluster on AWS with compute nodes on AWS Wavelength Zones Understanding taints and tolerations 8.4. Next steps Optional: Use the AWS Load Balancer (ALB) Operator to expose a pod from a targeted edge compute node to services that run inside a Local Zones or Wavelength Zones subnet from a public network. See Installing the AWS Load Balancer Operator .
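As a final check for the user workload created in the previous section, a short verification sequence such as the following confirms that the pod landed on an edge node and shows the NodePort that the service allocated. This is a sketch that reuses the placeholder names from the examples above; replace them with your own namespace and application names:

$ oc get pods -n <local_zone_application_namespace> -o wide
$ oc get svc <local_zone_application> -n <local_zone_application_namespace> \
    -o jsonpath='{.spec.ports[0].nodePort}{"\n"}'

The -o wide output includes the node that runs the pod, which you can compare against the list of edge nodes, and the jsonpath query prints the allocated NodePort.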
[ "{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Action\": [ \"ec2:ModifyAvailabilityZoneGroup\" ], \"Effect\": \"Allow\", \"Resource\": \"*\" } ] }", "{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Action\": [ \"ec2:DeleteCarrierGateway\", \"ec2:CreateCarrierGateway\" ], \"Resource\": \"*\" }, { \"Action\": [ \"ec2:ModifyAvailabilityZoneGroup\" ], \"Effect\": \"Allow\", \"Resource\": \"*\" } ] }", "oc describe network.config cluster", "Status: Cluster Network: Cidr: 10.217.0.0/22 Host Prefix: 23 Cluster Network MTU: 1400 Network Type: OVNKubernetes Service Network: 10.217.4.0/23", "oc patch Network.operator.openshift.io cluster --type=merge --patch '{\"spec\": { \"migration\": { \"mtu\": { \"network\": { \"from\": <overlay_from>, \"to\": <overlay_to> } , \"machine\": { \"to\" : <machine_to> } } } } }'", "oc patch Network.operator.openshift.io cluster --type=merge --patch '{\"spec\": { \"migration\": { \"mtu\": { \"network\": { \"from\": 1400, \"to\": 9000 } , \"machine\": { \"to\" : 9100} } } } }'", "oc get machineconfigpools", "oc describe node | egrep \"hostname|machineconfig\"", "kubernetes.io/hostname=master-0 machineconfiguration.openshift.io/currentConfig: rendered-master-c53e221d9d24e1c8bb6ee89dd3d8ad7b machineconfiguration.openshift.io/desiredConfig: rendered-master-c53e221d9d24e1c8bb6ee89dd3d8ad7b machineconfiguration.openshift.io/reason: machineconfiguration.openshift.io/state: Done", "oc get machineconfig <config_name> -o yaml | grep ExecStart", "ExecStart=/usr/local/bin/mtu-migration.sh", "oc patch Network.operator.openshift.io cluster --type=merge --patch '{\"spec\": { \"migration\": null, \"defaultNetwork\":{ \"ovnKubernetesConfig\": { \"mtu\": <mtu> }}}}'", "oc get machineconfigpools", "oc describe network.config cluster", "aws --region \"<value_of_AWS_Region>\" ec2 describe-availability-zones --query 'AvailabilityZones[].[{ZoneName: ZoneName, GroupName: GroupName, Status: OptInStatus}]' --filters Name=zone-type,Values=local-zone --all-availability-zones", "aws --region \"<value_of_AWS_Region>\" ec2 describe-availability-zones --query 'AvailabilityZones[].[{ZoneName: ZoneName, GroupName: GroupName, Status: OptInStatus}]' --filters Name=zone-type,Values=wavelength-zone --all-availability-zones", "aws ec2 modify-availability-zone-group --group-name \"<value_of_GroupName>\" \\ 1 --opt-in-status opted-in", "aws cloudformation create-stack --stack-name <stack_name> \\ 1 --region USD{CLUSTER_REGION} --template-body file://<template>.yaml \\ 2 --parameters \\// ParameterKey=VpcId,ParameterValue=\"USD{VpcId}\" \\ 3 ParameterKey=ClusterName,ParameterValue=\"USD{ClusterName}\" 4", "arn:aws:cloudformation:us-east-1:123456789012:stack/<stack_name>/dbedae40-2fd3-11eb-820e-12a48460849f", "aws cloudformation describe-stacks --stack-name <stack_name>", "AWSTemplateFormatVersion: 2010-09-09 Description: Template for Creating Wavelength Zone Gateway (Carrier Gateway). Parameters: VpcId: Description: VPC ID to associate the Carrier Gateway. Type: String AllowedPattern: ^(?:(?:vpc)(?:-[a-zA-Z0-9]+)?\\b|(?:[0-9]{1,3}\\.){3}[0-9]{1,3})USD ConstraintDescription: VPC ID must be with valid name, starting with vpc-.*. ClusterName: Description: Cluster Name or Prefix name to prepend the tag Name for each subnet. Type: String AllowedPattern: \".+\" ConstraintDescription: ClusterName parameter must be specified. 
Resources: CarrierGateway: Type: \"AWS::EC2::CarrierGateway\" Properties: VpcId: !Ref VpcId Tags: - Key: Name Value: !Join ['-', [!Ref ClusterName, \"cagw\"]] PublicRouteTable: Type: \"AWS::EC2::RouteTable\" Properties: VpcId: !Ref VpcId Tags: - Key: Name Value: !Join ['-', [!Ref ClusterName, \"public-carrier\"]] PublicRoute: Type: \"AWS::EC2::Route\" DependsOn: CarrierGateway Properties: RouteTableId: !Ref PublicRouteTable DestinationCidrBlock: 0.0.0.0/0 CarrierGatewayId: !Ref CarrierGateway S3Endpoint: Type: AWS::EC2::VPCEndpoint Properties: PolicyDocument: Version: 2012-10-17 Statement: - Effect: Allow Principal: '*' Action: - '*' Resource: - '*' RouteTableIds: - !Ref PublicRouteTable ServiceName: !Join - '' - - com.amazonaws. - !Ref 'AWS::Region' - .s3 VpcId: !Ref VpcId Outputs: PublicRouteTableId: Description: Public Route table ID Value: !Ref PublicRouteTable", "aws cloudformation create-stack --stack-name <stack_name> \\ 1 --region USD{CLUSTER_REGION} --template-body file://<template>.yaml \\ 2 --parameters ParameterKey=VpcId,ParameterValue=\"USD{VPC_ID}\" \\ 3 ParameterKey=ClusterName,ParameterValue=\"USD{CLUSTER_NAME}\" \\ 4 ParameterKey=ZoneName,ParameterValue=\"USD{ZONE_NAME}\" \\ 5 ParameterKey=PublicRouteTableId,ParameterValue=\"USD{ROUTE_TABLE_PUB}\" \\ 6 ParameterKey=PublicSubnetCidr,ParameterValue=\"USD{SUBNET_CIDR_PUB}\" \\ 7 ParameterKey=PrivateRouteTableId,ParameterValue=\"USD{ROUTE_TABLE_PVT}\" \\ 8 ParameterKey=PrivateSubnetCidr,ParameterValue=\"USD{SUBNET_CIDR_PVT}\" 9", "arn:aws:cloudformation:us-east-1:123456789012:stack/<stack_name>/dbedae40-820e-11eb-2fd3-12a48460849f", "aws cloudformation describe-stacks --stack-name <stack_name>", "AWSTemplateFormatVersion: 2010-09-09 Description: Template for Best Practice Subnets (Public and Private) Parameters: VpcId: Description: VPC ID that comprises all the target subnets. Type: String AllowedPattern: ^(?:(?:vpc)(?:-[a-zA-Z0-9]+)?\\b|(?:[0-9]{1,3}\\.){3}[0-9]{1,3})USD ConstraintDescription: VPC ID must be with valid name, starting with vpc-.*. ClusterName: Description: Cluster name or prefix name to prepend the Name tag for each subnet. Type: String AllowedPattern: \".+\" ConstraintDescription: ClusterName parameter must be specified. ZoneName: Description: Zone Name to create the subnets, such as us-west-2-lax-1a. Type: String AllowedPattern: \".+\" ConstraintDescription: ZoneName parameter must be specified. PublicRouteTableId: Description: Public Route Table ID to associate the public subnet. Type: String AllowedPattern: \".+\" ConstraintDescription: PublicRouteTableId parameter must be specified. PublicSubnetCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\\/(1[6-9]|2[0-4]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24. Default: 10.0.128.0/20 Description: CIDR block for public subnet. Type: String PrivateRouteTableId: Description: Private Route Table ID to associate the private subnet. Type: String AllowedPattern: \".+\" ConstraintDescription: PrivateRouteTableId parameter must be specified. PrivateSubnetCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\\/(1[6-9]|2[0-4]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24. Default: 10.0.128.0/20 Description: CIDR block for private subnet. 
Type: String Resources: PublicSubnet: Type: \"AWS::EC2::Subnet\" Properties: VpcId: !Ref VpcId CidrBlock: !Ref PublicSubnetCidr AvailabilityZone: !Ref ZoneName Tags: - Key: Name Value: !Join ['-', [!Ref ClusterName, \"public\", !Ref ZoneName]] PublicSubnetRouteTableAssociation: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Properties: SubnetId: !Ref PublicSubnet RouteTableId: !Ref PublicRouteTableId PrivateSubnet: Type: \"AWS::EC2::Subnet\" Properties: VpcId: !Ref VpcId CidrBlock: !Ref PrivateSubnetCidr AvailabilityZone: !Ref ZoneName Tags: - Key: Name Value: !Join ['-', [!Ref ClusterName, \"private\", !Ref ZoneName]] PrivateSubnetRouteTableAssociation: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Properties: SubnetId: !Ref PrivateSubnet RouteTableId: !Ref PrivateRouteTableId Outputs: PublicSubnetId: Description: Subnet ID of the public subnets. Value: !Join [\"\", [!Ref PublicSubnet]] PrivateSubnetId: Description: Subnet ID of the private subnets. Value: !Join [\"\", [!Ref PrivateSubnet]]", "aws ec2 describe-availability-zones --region <value_of_Region> \\ 1 --query 'AvailabilityZones[].{ ZoneName: ZoneName, ParentZoneName: ParentZoneName, GroupName: GroupName, ZoneType: ZoneType}' --filters Name=zone-name,Values=<value_of_ZoneName> \\ 2 --all-availability-zones", "[ { \"ZoneName\": \"us-east-1-nyc-1a\", \"ParentZoneName\": \"us-east-1f\", \"GroupName\": \"us-east-1-nyc-1\", \"ZoneType\": \"local-zone\" } ]", "[ { \"ZoneName\": \"us-east-1-wl1-bos-wlz-1\", \"ParentZoneName\": \"us-east-1a\", \"GroupName\": \"us-east-1-wl1\", \"ZoneType\": \"wavelength-zone\" } ]", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-edge-<zone> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-edge-<zone> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: edge 3 machine.openshift.io/cluster-api-machine-type: edge machine.openshift.io/cluster-api-machineset: <infrastructure_id>-edge-<zone> spec: metadata: labels: machine.openshift.io/parent-zone-name: <value_of_ParentZoneName> machine.openshift.io/zone-group: <value_of_GroupName> machine.openshift.io/zone-type: <value_of_ZoneType> node-role.kubernetes.io/edge: \"\" providerSpec: value: ami: id: ami-046fe691f52a953f9 4 apiVersion: machine.openshift.io/v1beta1 blockDevices: - ebs: iops: 0 volumeSize: 120 volumeType: gp2 credentialsSecret: name: aws-cloud-credentials deviceIndex: 0 iamInstanceProfile: id: <infrastructure_id>-worker-profile instanceType: m6i.large kind: AWSMachineProviderConfig placement: availabilityZone: <zone> 5 region: <region> 6 securityGroups: - filters: - name: tag:Name values: - <infrastructure_id>-node - filters: - name: tag:Name values: - <infrastructure_id>-lb subnet: id: <value_of_PublicSubnetIds> 7 publicIp: true tags: - name: kubernetes.io/cluster/<infrastructure_id> value: owned - name: <custom_tag_name> 8 value: <custom_tag_value> userDataSecret: name: worker-user-data taints: 9 - key: node-role.kubernetes.io/edge effect: NoSchedule", "oc get -o jsonpath='{.status.infrastructureName}{\"\\n\"}' infrastructure cluster", "oc -n openshift-machine-api -o jsonpath='{.spec.template.spec.providerSpec.value.ami.id}{\"\\n\"}' get machineset/<infrastructure_id>-<role>-<zone>", "oc get 
machinesets -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "oc get machineset <machineset_name> -n openshift-machine-api -o yaml", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-<role> 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: providerSpec: 3", "oc create -f <file_name>.yaml", "oc get machineset -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE agl030519-vplxk-edge-us-east-1-nyc-1a 1 1 1 1 11m agl030519-vplxk-worker-us-east-1a 1 1 1 1 55m agl030519-vplxk-worker-us-east-1b 1 1 1 1 55m agl030519-vplxk-worker-us-east-1c 1 1 1 1 55m agl030519-vplxk-worker-us-east-1d 0 0 55m agl030519-vplxk-worker-us-east-1e 0 0 55m agl030519-vplxk-worker-us-east-1f 0 0 55m", "oc get nodes -l node-role.kubernetes.io/edge", "NAME STATUS ROLES AGE VERSION ip-10-0-207-188.ec2.internal Ready edge,worker 172m v1.25.2+d2e245f", "kind: Namespace apiVersion: v1 metadata: name: <local_zone_application_namespace> --- kind: PersistentVolumeClaim apiVersion: v1 metadata: name: <pvc_name> namespace: <local_zone_application_namespace> spec: accessModes: - ReadWriteOnce resources: requests: storage: 10Gi storageClassName: gp2-csi 1 volumeMode: Filesystem --- apiVersion: apps/v1 kind: Deployment 2 metadata: name: <local_zone_application> 3 namespace: <local_zone_application_namespace> 4 spec: selector: matchLabels: app: <local_zone_application> replicas: 1 template: metadata: labels: app: <local_zone_application> zone-group: USD{ZONE_GROUP_NAME} 5 spec: securityContext: seccompProfile: type: RuntimeDefault nodeSelector: 6 machine.openshift.io/zone-group: USD{ZONE_GROUP_NAME} tolerations: 7 - key: \"node-role.kubernetes.io/edge\" operator: \"Equal\" value: \"\" effect: \"NoSchedule\" containers: - image: openshift/origin-node command: - \"/bin/socat\" args: - TCP4-LISTEN:8080,reuseaddr,fork - EXEC:'/bin/bash -c \\\"printf \\\\\\\"HTTP/1.0 200 OK\\r\\n\\r\\n\\\\\\\"; sed -e \\\\\\\"/^\\r/q\\\\\\\"\\\"' imagePullPolicy: Always name: echoserver ports: - containerPort: 8080 volumeMounts: - mountPath: \"/mnt/storage\" name: data volumes: - name: data persistentVolumeClaim: claimName: <pvc_name>", "apiVersion: v1 kind: Service 1 metadata: name: <local_zone_application> namespace: <local_zone_application_namespace> spec: ports: - port: 80 targetPort: 8080 protocol: TCP type: NodePort selector: 2 app: <local_zone_application>" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/installing_on_aws/aws-compute-edge-zone-tasks
1.4. Setting the Wireless Regulatory Domain
1.4. Setting the Wireless Regulatory Domain In Red Hat Enterprise Linux, the crda package contains the Central Regulatory Domain Agent that provides the kernel with the wireless regulatory rules for a given jurisdiction. It is used by certain udev scripts and should not be run manually unless debugging udev scripts. The kernel runs crda by sending a udev event upon a new regulatory domain change. Regulatory domain changes are triggered by the Linux wireless subsystem (IEEE-802.11). This subsystem uses the regulatory.bin file to keep its regulatory database information. The setregdomain utility sets the regulatory domain for your system. Setregdomain takes no arguments and is usually called through a system script such as udev rather than manually by the administrator. If a country code look-up fails, the system administrator can define the COUNTRY environment variable in the /etc/sysconfig/regdomain file. See the following man pages for more information about the regulatory domain: setregdomain(1) man page - Sets regulatory domain based on country code. crda(8) man page - Sends to the kernel a wireless regulatory domain for a given ISO or IEC 3166 alpha2. regulatory.bin(5) man page - Shows the Linux wireless regulatory database. iw(8) man page - Shows or manipulates wireless devices and their configuration.
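For example, to pin the regulatory domain manually when the automatic country code look-up fails, set COUNTRY in the /etc/sysconfig/regdomain file described above and re-run the helper. This is a minimal sketch; the country code DE is only an example value:

~]# cat /etc/sysconfig/regdomain
COUNTRY=DE
~]# setregdomain
~]# iw reg get

The iw reg get output lists the regulatory rules that the kernel currently applies for the configured country.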
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/networking_guide/sec-setting_wireless_regulatory_domain
5.13. Setting and Controlling IP sets using iptables
5.13. Setting and Controlling IP sets using iptables The essential differences between firewalld and the iptables (and ip6tables ) services are: The iptables service stores configuration in /etc/sysconfig/iptables and /etc/sysconfig/ip6tables , while firewalld stores it in various XML files in /usr/lib/firewalld/ and /etc/firewalld/ . Note that the /etc/sysconfig/iptables file does not exist as firewalld is installed by default on Red Hat Enterprise Linux. With the iptables service , every single change means flushing all the old rules and reading all the new rules from /etc/sysconfig/iptables , while with firewalld there is no recreating of all the rules. Only the differences are applied. Consequently, firewalld can change the settings during runtime without existing connections being lost. Both use the iptables tool to talk to the kernel packet filter. To use the iptables and ip6tables services instead of firewalld , first disable firewalld by running the following command as root : Then install the iptables-services package by entering the following command as root : The iptables-services package contains the iptables service and the ip6tables service. Then, to start the iptables and ip6tables services, enter the following commands as root : To enable the services to start on every system start, enter the following commands: The ipset utility is used to administer IP sets in the Linux kernel. An IP set is a framework for storing IP addresses, port numbers, IP and MAC address pairs, or IP address and port number pairs. The sets are indexed in such a way that very fast matching can be made against a set even when the sets are very large. IP sets enable simpler and more manageable configurations as well as providing performance advantages when using iptables . The iptables matches and targets referring to sets create references which protect the given sets in the kernel. A set cannot be destroyed while there is a single reference pointing to it. The use of ipset enables iptables commands, such as those below, to be replaced by a set: The set is created as follows: The set is then referenced in an iptables command as follows: If the set is used more than once, a saving in configuration time is made. If the set contains many entries, a saving in processing time is made.
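Once the set exists, you can inspect it and test candidate addresses against it before relying on the iptables rule, and save it for later reloading. A brief sketch, using the my-block-set name from the example in this section:

~]# ipset list my-block-set
~]# ipset test my-block-set 10.2.3.4
~]# ipset save my-block-set > /root/my-block-set.save

The ipset test command reports whether the address matches the set and sets its exit status accordingly, and the saved file can be loaded again later with ipset restore.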
[ "~]# systemctl disable firewalld ~]# systemctl stop firewalld", "~]# yum install iptables-services", "~]# systemctl start iptables ~]# systemctl start ip6tables", "~]# systemctl enable iptables ~]# systemctl enable ip6tables", "~]# iptables -A INPUT -s 10.0.0.0/8 -j DROP ~]# iptables -A INPUT -s 172.16.0.0/12 -j DROP ~]# iptables -A INPUT -s 192.168.0.0/16 -j DROP", "~]# ipset create my-block-set hash:net ~]# ipset add my-block-set 10.0.0.0/8 ~]# ipset add my-block-set 172.16.0.0/12 ~]# ipset add my-block-set 192.168.0.0/16", "~]# iptables -A INPUT -m set --set my-block-set src -j DROP" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/security_guide/sec-setting_and_controlling_ip_sets_using_iptables
Chapter 8. Message delivery
Chapter 8. Message delivery 8.1. Handling unacknowledged deliveries Messaging systems use message acknowledgment to track if the goal of sending a message is truly accomplished. When a message is sent, there is a period of time after the message is sent and before it is acknowledged (the message is "in flight"). If the network connection is lost during that time, the status of the message delivery is unknown, and the delivery might require special handling in application code to ensure its completion. The sections below describe the conditions for message delivery when connections fail. Non-transacted producer with an unacknowledged delivery If a message is in flight, it is sent again after reconnect, provided a send timeout is not set and has not elapsed. No user action is required. Transacted producer with an uncommitted transaction If a message is in flight, it is sent again after reconnect. If the send is the first in a new transaction, then sending continues as normal after reconnect. If there are sends in the transaction, then the transaction is considered failed, and any subsequent commit operation throws a TransactionRolledBackException . To ensure delivery, the user must resend any messages belonging to a failed transaction. Transacted producer with a pending commit If a commit is in flight, then the transaction is considered failed, and any subsequent commit operation throws a TransactionRolledBackException . To ensure delivery, the user must resend any messages belonging to a failed transaction. Non-transacted consumer with an unacknowledged delivery If a message is received but not yet acknowledged, then acknowledging the message produces no error but results in no action by the client. Because the received message is not acknowledged, the producer might resend it. To avoid duplicates, the user must filter out duplicate messages by message ID. Transacted consumer with an uncommitted transaction If an active transaction is not yet committed, it is considered failed, and any pending acknowledgments are dropped. Any subsequent commit operation throws a TransactionRolledBackException . The producer might resend the messages belonging to the transaction. To avoid duplicates, the user must filter out duplicate messages by message ID. Transacted consumer with a pending commit If a commit is in flight, then the transaction is considered failed. Any subsequent commit operation throws a TransactionRolledBackException . The producer might resend the messages belonging to the transaction. To avoid duplicates, the user must filter out duplicate messages by message ID. 8.2. Extended session acknowledgment modes The client supports two additional session acknowledgement modes beyond those defined in the JMS specification. Individual acknowledge In this mode, messages must be acknowledged individually by the application using the Message.acknowledge() method used when the session is in CLIENT_ACKNOWLEDGE mode. Unlike with CLIENT_ACKNOWLEDGE mode, only the target message is acknowledged. All other delivered messages remain unacknowledged. The integer value used to activate this mode is 101. connection.createSession(false, 101); No acknowledge In this mode, messages are accepted at the server before being dispatched to the client, and no acknowledgment is performed by the client. The client supports two integer values to activate this mode, 100 and 257. connection.createSession(false, 100);
[ "connection.createSession(false, 101);", "connection.createSession(false, 100);" ]
https://docs.redhat.com/en/documentation/red_hat_amq/2021.q3/html/using_the_amq_jms_client/message_delivery
12.8. Using an NPIV Virtual Adapter (vHBA) with SCSI Devices
12.8. Using an NPIV Virtual Adapter (vHBA) with SCSI Devices NPIV (N_Port ID Virtualization) is a software technology that allows sharing of a single physical Fibre Channel host bus adapter (HBA). This allows multiple guests to see the same storage from multiple physical hosts, and thus allows for easier migration paths for the storage. As a result, there is no need for the migration to create or copy storage, as long as the correct storage path is specified. In virtualization, the virtual host bus adapter , or vHBA , controls the LUNs for virtual machines. Each vHBA is identified by its own WWNN (World Wide Node Name) and WWPN (World Wide Port Name). The path to the storage is determined by the WWNN and WWPN values. This section provides instructions for configuring a vHBA on a virtual machine. Note that Red Hat Enterprise Linux 6 does not support persistent vHBA configuration across host reboots; verify any vHBA-related settings following a host reboot. 12.8.1. Creating a vHBA Procedure 12.6. Creating a vHBA Locate HBAs on the host system To locate the HBAs on your host system, examine the SCSI devices on the host system to locate a scsi_host with vport capability. Run the following command to retrieve a scsi_host list: For each scsi_host , run the following command to examine the device XML for the line <capability type='vport_ops'> , which indicates a scsi_host with vport capability. Check the HBA's details Use the virsh nodedev-dumpxml HBA_device command to see the HBA's details. The XML output from the virsh nodedev-dumpxml command will list the fields <name> , <wwnn> , and <wwpn> , which are used to create a vHBA. The <max_vports> value shows the maximum number of supported vHBAs. # virsh nodedev-dumpxml scsi_host3 <device> <name>scsi_host3</name> <path>/sys/devices/pci0000:00/0000:00:04.0/0000:10:00.0/host3</path> <parent>pci_0000_10_00_0</parent> <capability type='scsi_host'> <host>3</host> <capability type='fc_host'> <wwnn>20000000c9848140</wwnn> <wwpn>10000000c9848140</wwpn> <fabric_wwn>2002000573de9a81</fabric_wwn> </capability> <capability type='vport_ops'> <max_vports>127</max_vports> <vports>0</vports> </capability> </capability> </device> In this example, the <max_vports> value shows there are a total 127 virtual ports available for use in the HBA configuration. The <vports> value shows the number of virtual ports currently being used. These values update after creating a vHBA. Create a vHBA host device Create an XML file similar to the following (in this example, named vhba_host3.xml ) for the vHBA host. # cat vhba_host3.xml <device> <parent>scsi_host3</parent> <capability type='scsi_host'> <capability type='fc_host'> </capability> </capability> </device> The <parent> field specifies the HBA device to associate with this vHBA device. The details in the <device> tag are used in the step to create a new vHBA device for the host. See http://libvirt.org/formatnode.html for more information on the nodedev XML format. 
Create a new vHBA on the vHBA host device To create a vHBA on vhba_host3 , use the virsh nodedev-create command: Verify the vHBA Verify the new vHBA's details ( scsi_host5 ) with the virsh nodedev-dumpxml command: # virsh nodedev-dumpxml scsi_host5 <device> <name>scsi_host5</name> <path>/sys/devices/pci0000:00/0000:00:04.0/0000:10:00.0/host3/vport-3:0-0/host5</path> <parent>scsi_host3</parent> <capability type='scsi_host'> <host>5</host> <capability type='fc_host'> <wwnn>5001a4a93526d0a1</wwnn> <wwpn>5001a4ace3ee047d</wwpn> <fabric_wwn>2002000573de9a81</fabric_wwn> </capability> </capability> </device>
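As a follow-up check, you can list the SCSI hosts again to confirm that the new vHBA appears alongside its parent, and remove it with virsh when it is no longer needed. A short sketch based on the scsi_host5 device created above:

# virsh nodedev-list --cap scsi_host
# virsh nodedev-destroy scsi_host5

Because Red Hat Enterprise Linux 6 does not persist vHBA configuration across host reboots, recreate the vHBA from its XML definition after a reboot if it is still required.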
[ "virsh nodedev-list --cap scsi_host scsi_host0 scsi_host1 scsi_host2 scsi_host3 scsi_host4", "virsh nodedev-dumpxml scsi_hostN", "virsh nodedev-dumpxml scsi_host3 <device> <name>scsi_host3</name> <path>/sys/devices/pci0000:00/0000:00:04.0/0000:10:00.0/host3</path> <parent>pci_0000_10_00_0</parent> <capability type='scsi_host'> <host>3</host> <capability type='fc_host'> <wwnn>20000000c9848140</wwnn> <wwpn>10000000c9848140</wwpn> <fabric_wwn>2002000573de9a81</fabric_wwn> </capability> <capability type='vport_ops'> <max_vports>127</max_vports> <vports>0</vports> </capability> </capability> </device>", "cat vhba_host3.xml <device> <parent>scsi_host3</parent> <capability type='scsi_host'> <capability type='fc_host'> </capability> </capability> </device>", "virsh nodedev-create vhba_host3.xml Node device scsi_host5 created from vhba_host3.xml", "virsh nodedev-dumpxml scsi_host5 <device> <name>scsi_host5</name> <path>/sys/devices/pci0000:00/0000:00:04.0/0000:10:00.0/host3/vport-3:0-0/host5</path> <parent>scsi_host3</parent> <capability type='scsi_host'> <host>5</host> <capability type='fc_host'> <wwnn>5001a4a93526d0a1</wwnn> <wwpn>5001a4ace3ee047d</wwpn> <fabric_wwn>2002000573de9a81</fabric_wwn> </capability> </capability> </device>" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/sect-npiv_storage
Part V. Part V: References
Part V. Part V: References
null
https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/administration_guide_common_criteria_edition/part_v_references
6.11. Cluster Resources Cleanup
6.11. Cluster Resources Cleanup If a resource has failed, a failure message appears when you display the cluster status. If you resolve that resource, you can clear that failure status with the pcs resource cleanup command. This command resets the resource status and failcount , telling the cluster to forget the operation history of a resource and re-detect its current state. The following command cleans up the resource specified by resource_id . If you do not specify a resource_id , this command resets the resource status and failcount for all resources. As of Red Hat Enterprise Linux 7.5, the pcs resource cleanup command probes only the resources that display as a failed action. To probe all resources on all nodes you can enter the following command: By default, the pcs resource refresh command probes only the nodes where a resource's state is known. To probe all resources even if the state is not known, enter the following command:
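For example, a minimal sketch of the cleanup workflow for a single failed resource, where my_resource is a placeholder resource ID:

# pcs status resources
# pcs resource failcount show my_resource
# pcs resource cleanup my_resource
# pcs resource failcount show my_resource

After the cleanup, the failcount for the resource returns to zero and the failure message no longer appears in the cluster status.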
[ "pcs resource cleanup resource_id", "pcs resource refresh", "pcs resource refresh --full" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/high_availability_add-on_reference/s1-resource_cleanup-HAAR
Chapter 2. Clair concepts
Chapter 2. Clair concepts The following sections provide a conceptual overview of how Clair works. 2.1. Clair in practice A Clair analysis is broken down into three distinct parts: indexing, matching, and notification. 2.1.1. Indexing Clair's indexer service plays a crucial role in understanding the makeup of a container image. In Clair, container image representations called "manifests." Manifests are used to comprehend the contents of the image's layers. To streamline this process, Clair takes advantage of the fact that Open Container Initiative (OCI) manifests and layers are designed for content addressing, reducing repetitive tasks. During indexing, a manifest that represents a container image is taken and broken down into its essential components. The indexer's job is to uncover the image's contained packages, its origin distribution, and the package repositories it relies on. This valuable information is then recorded and stored within Clair's database. The insights gathered during indexing serve as the basis for generating a comprehensive vulnerability report. This report can be seamlessly transferred to a matcher node for further analysis and action, helping users make informed decisions about their container images' security. 2.1.2. Matching With Clair, a matcher node is responsible for matching vulnerabilities to a provided index report. Matchers are responsible for keeping the database of vulnerabilities up to date. Matchers run a set of updaters, which periodically probe their data sources for new content. New vulnerabilities are stored in the database when they are discovered. The matcher API is designed to be used often. It is designed to always provide the most recent vulnerability report when queried. The vulnerability report summarizes both a manifest's content and any vulnerabilities affecting the content. 2.1.3. Notifications Clair uses a notifier service that keeps track of new security database updates and informs users if new or removed vulnerabilities affect an indexed manifest. When the notifier becomes aware of new vulnerabilities affecting a previously indexed manifest, it uses the configured methods in your config.yaml file to issue notifications about the new changes. Returned notifications express the most severe vulnerability discovered because of the change. This avoids creating excessive notifications for the same security database update. When a user receives a notification, it issues a new request against the matcher to receive an up to date vulnerability report. You can subscribe to notifications through the following mechanics: Webhook delivery AMQP delivery STOMP delivery Configuring the notifier is done through the Clair YAML configuration file. 2.2. Clair authentication In its current iteration, Clair v4 (Clair) handles authentication internally. Note versions of Clair used JWT Proxy to gate authentication. Authentication is configured by specifying configuration objects underneath the auth key of the configuration. Multiple authentication configurations might be present, but they are used preferentially in the following order: PSK. With this authentication configuration, Clair implements JWT-based authentication using a pre-shared key. Configuration. For example: auth: psk: key: >- MDQ4ODBlNDAtNDc0ZC00MWUxLThhMzAtOTk0MzEwMGQwYTMxCg== iss: 'issuer' In this configuration the auth field requires two parameters: iss , which is the issuer to validate all incoming requests, and key , which is a base64 coded symmetric key for validating the requests. 2.3. 
Clair updaters Clair uses Go packages called updaters that contain the logic of fetching and parsing different vulnerability databases. Updaters are usually paired with a matcher to interpret if, and how, any vulnerability is related to a package. Administrators might want to update the vulnerability database less frequently, or not import vulnerabilities from databases that they know will not be used. 2.4. Information about Clair updaters The following table provides details about each Clair updater, including the configuration parameter, a brief description, relevant URLs, and the associated components that they interact with. This list is not exhaustive, and some servers might issue redirects, while certain request URLs are dynamically constructed to ensure accurate vulnerability data retrieval. For Clair, each updater is responsible for fetching and parsing vulnerability data related to a specific package type or distribution. For example, the Debian updater focuses on Debian-based Linux distributions, while the AWS updater focuses on vulnerabilities specific to Amazon Web Services' Linux distributions. Understanding the package type is important for vulnerability management because different package types might have unique security concerns and require specific updates and patches. Note If you are using a proxy server in your environment with Clair's updater URLs, you must identify which URL needs to be added to the proxy allowlist to ensure that Clair can access them unimpeded. Use the following table to add updater URLs to your proxy allowlist. Table 2.1. Clair updater information Updater Description URLs Component alpine The Alpine updater is responsible for fetching and parsing vulnerability data related to packages in Alpine Linux distributions. https://secdb.alpinelinux.org/ Alpine Linux SecDB database aws The AWS updater is focused on AWS Linux-based packages, ensuring that vulnerability information specific to Amazon Web Services' custom Linux distributions is kept up-to-date. http://repo.us-west-2.amazonaws.com/2018.03/updates/x86_64/mirror.list https://cdn.amazonlinux.com/2/core/latest/x86_64/mirror.list https://cdn.amazonlinux.com/al2023/core/mirrors/latest/x86_64/mirror.list Amazon Web Services (AWS) UpdateInfo debian The Debian updater is essential for tracking vulnerabilities in packages associated with Debian-based Linux distributions. https://deb.debian.org/ https://security-tracker.debian.org/tracker/data/json Debian Security Tracker clair.cvss The Clair Common Vulnerability Scoring System (CVSS) updater focuses on maintaining data about vulnerabilities and their associated CVSS scores. This is not tied to a specific package type but rather to the severity and risk assessment of vulnerabilities in general. https://nvd.nist.gov/feeds/json/cve/1.1/ National Vulnerability Database (NVD) feed for Common Vulnerabilities and Exposures (CVE) data in JSON format oracle The Oracle updater is dedicated to Oracle Linux packages, maintaining data on vulnerabilities that affect Oracle Linux systems. https://linux.oracle.com/security/oval/com.oracle.elsa-*.xml.bz2 Oracle Oval database photon The Photon updater deals with packages in VMware Photon OS. https://packages.vmware.com/photon/photon_oval_definitions/ VMware Photon OS oval definitions rhel The Red Hat Enterprise Linux (RHEL) updater is responsible for maintaining vulnerability data for packages in Red Hat's Enterprise Linux distribution. 
https://access.redhat.com/security/cve/ https://access.redhat.com/security/data/oval/v2/PULP_MANIFEST Red Hat Enterprise Linux (RHEL) Oval database rhcc The Red Hat Container Catalog (RHCC) updater is connected to Red Hat's container images. This updater ensures that vulnerability information related to Red Hat's containerized software is kept current. https://access.redhat.com/security/data/metrics/cvemap.xml Resource Handler Configuration Controller (RHCC) database suse The SUSE updater manages vulnerability information for packages in the SUSE Linux distribution family, including openSUSE, SUSE Enterprise Linux, and others. https://support.novell.com/security/oval/ SUSE Oval database ubuntu The Ubuntu updater is dedicated to tracking vulnerabilities in packages associated with Ubuntu-based Linux distributions. Ubuntu is a popular distribution in the Linux ecosystem. https://security-metadata.canonical.com/oval/com.ubuntu.*.cve.oval.xml https://api.launchpad.net/1.0/ Ubuntu Oval Database osv The Open Source Vulnerability (OSV) updater specializes in tracking vulnerabilities within open source software components. OSV is a critical resource that provides detailed information about security issues found in various open source projects. https://osv-vulnerabilities.storage.googleapis.com/ Open Source Vulnerabilities database 2.5. Configuring updaters Updaters can be configured by the updaters.sets key in your clair-config.yaml file. Important If the sets field is not populated, it defaults to using all sets. In using all sets, Clair tries to reach the URL or URLs of each updater. If you are using a proxy environment, you must add these URLs to your proxy allowlist. If updaters are being run automatically within the matcher process, which is the default setting, the period for running updaters is configured under the matcher's configuration field. 2.5.1. Selecting specific updater sets Use the following references to select one, or multiple, updaters for your Red Hat Quay deployment. Configuring Clair for multiple updaters Multiple specific updaters #... updaters: sets: - alpine - aws - osv #... Configuring Clair for Alpine Alpine config.yaml example #... updaters: sets: - alpine #... Configuring Clair for AWS AWS config.yaml example #... updaters: sets: - aws #... Configuring Clair for Debian Debian config.yaml example #... updaters: sets: - debian #... Configuring Clair for Clair CVSS Clair CVSS config.yaml example #... updaters: sets: - clair.cvss #... Configuring Clair for Oracle Oracle config.yaml example #... updaters: sets: - oracle #... Configuring Clair for Photon Photon config.yaml example #... updaters: sets: - photon #... Configuring Clair for SUSE SUSE config.yaml example #... updaters: sets: - suse #... Configuring Clair for Ubuntu Ubuntu config.yaml example #... updaters: sets: - ubuntu #... Configuring Clair for OSV OSV config.yaml example #... updaters: sets: - osv #... 2.5.2. Selecting updater sets for full Red Hat Enterprise Linux (RHEL) coverage For full coverage of vulnerabilities in Red Hat Enterprise Linux (RHEL), you must use the following updater sets: rhel . This updater ensures that you have the latest information on the vulnerabilities that affect RHEL. rhcc . This updater keeps track of vulnerabilities related to Red hat's container images. clair.cvss . This updater offers a comprehensive view of the severity and risk assessment of vulnerabilities by providing Common Vulnerabilities and Exposures (CVE) scores. osv . 
This updater focuses on tracking vulnerabilities in open-source software components. This updater is recommended due to how common the use of Java and Go are in RHEL products. RHEL updaters example #... updaters: sets: - rhel - rhcc - clair.cvss - osv #... 2.5.3. Advanced updater configuration In some cases, users might want to configure updaters for specific behavior, for example, if you want to allowlist specific ecosystems for the Open Source Vulnerabilities (OSV) updaters. Advanced updater configuration might be useful for proxy deployments or air gapped deployments. Configuration for specific updaters in these scenarios can be passed by putting a key underneath the config environment variable of the updaters object. Users should examine their Clair logs to double-check names. The following YAML snippets detail the various settings available to some Clair updater Important For more users, advanced updater configuration is unnecessary. Configuring the alpine updater #... updaters: sets: - apline config: alpine: url: https://secdb.alpinelinux.org/ #... Configuring the debian updater #... updaters: sets: - debian config: debian: mirror_url: https://deb.debian.org/ json_url: https://security-tracker.debian.org/tracker/data/json #... Configuring the clair.cvss updater #... updaters: config: clair.cvss: url: https://nvd.nist.gov/feeds/json/cve/1.1/ #... Configuring the oracle updater #... updaters: sets: - oracle config: oracle-2023-updater: url: - https://linux.oracle.com/security/oval/com.oracle.elsa-2023.xml.bz2 oracle-2022-updater: url: - https://linux.oracle.com/security/oval/com.oracle.elsa-2022.xml.bz2 #... Configuring the photon updater #... updaters: sets: - photon config: photon: url: https://packages.vmware.com/photon/photon_oval_definitions/ #... Configuring the rhel updater #... updaters: sets: - rhel config: rhel: url: https://access.redhat.com/security/data/oval/v2/PULP_MANIFEST ignore_unpatched: true 1 #... 1 Boolean. Whether to include information about vulnerabilities that do not have corresponding patches or updates available. Configuring the rhcc updater #... updaters: sets: - rhcc config: rhcc: url: https://access.redhat.com/security/data/metrics/cvemap.xml #... Configuring the suse updater #... updaters: sets: - suse config: suse: url: https://support.novell.com/security/oval/ #... Configuring the ubuntu updater #... updaters: config: ubuntu: url: https://api.launchpad.net/1.0/ name: ubuntu force: 1 - name: focal 2 version: 20.04 3 #... 1 Used to force the inclusion of specific distribution and version details in the resulting UpdaterSet, regardless of their status in the API response. Useful when you want to ensure that particular distributions and versions are consistently included in your updater configuration. 2 Specifies the distribution name that you want to force to be included in the UpdaterSet. 3 Specifies the version of the distribution you want to force into the UpdaterSet. Configuring the osv updater #... updaters: sets: - osv config: osv: url: https://osv-vulnerabilities.storage.googleapis.com/ allowlist: 1 - npm - pypi #... 1 The list of ecosystems to allow. When left unset, all ecosystems are allowed. Must be lowercase. For a list of supported ecosystems, see the documentation for defined ecosystems . 2.5.4. Disabling the Clair Updater component In some scenarios, users might want to disable the Clair updater component. Disabling updaters is required when running Red Hat Quay in a disconnected environment. 
In the following example, Clair updaters are disabled: #... matcher: disable_updaters: true #... 2.6. CVE ratings from the National Vulnerability Database As of Clair v4.2, Common Vulnerability Scoring System (CVSS) enrichment data is now viewable in the Red Hat Quay UI. Additionally, Clair v4.2 adds CVSS scores from the National Vulnerability Database for detected vulnerabilities. With this change, if the vulnerability has a CVSS score that is within 2 levels of the distribution score, the Red Hat Quay UI presents the distribution's score by default. For example: This differs from the previous interface, which would only display the following information: 2.7. Federal Information Processing Standard (FIPS) readiness and compliance The Federal Information Processing Standard (FIPS) developed by the National Institute of Standards and Technology (NIST) is widely regarded as the standard for securing and encrypting sensitive data, notably in highly regulated areas such as banking, healthcare, and the public sector. Red Hat Enterprise Linux (RHEL) and OpenShift Container Platform support FIPS by providing a FIPS mode , in which the system only allows usage of specific FIPS-validated cryptographic modules like openssl . This ensures FIPS compliance. 2.7.1. Enabling FIPS compliance Use the following procedure to enable FIPS compliance on your Red Hat Quay deployment. Prerequisites If you are running a standalone deployment of Red Hat Quay, your Red Hat Enterprise Linux (RHEL) deployment is version 8 or later and FIPS-enabled. If you are using the Red Hat Quay Operator, OpenShift Container Platform is version 4.10 or later. Your Red Hat Quay version is 3.5.0 or later. You have administrative privileges for your Red Hat Quay deployment. Procedure In your Red Hat Quay config.yaml file, set the FEATURE_FIPS configuration field to true . For example: --- FEATURE_FIPS = true --- With FEATURE_FIPS set to true , Red Hat Quay runs using FIPS-compliant hash functions.
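Before setting FEATURE_FIPS , it can be helpful to confirm that the underlying Red Hat Enterprise Linux host (or OpenShift node) is actually running in FIPS mode, because Red Hat Quay relies on the host's FIPS-validated cryptographic modules. The following check is a minimal sketch using standard RHEL 8+ tooling; it is not specific to Red Hat Quay, and the expected output is illustrative.
# Returns crypto.fips_enabled = 1 when the kernel is running in FIPS mode
sysctl crypto.fips_enabled
# Reports whether FIPS mode is enabled on RHEL 8 or later
fips-mode-setup --check
If either check reports that FIPS mode is disabled, enable it on the host, or deploy on a FIPS-enabled OpenShift Container Platform cluster, before setting the FEATURE_FIPS field to true.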
[ "auth: psk: key: >- MDQ4ODBlNDAtNDc0ZC00MWUxLThhMzAtOTk0MzEwMGQwYTMxCg== iss: 'issuer'", "# updaters: sets: - alpine - aws - osv #", "# updaters: sets: - alpine #", "# updaters: sets: - aws #", "# updaters: sets: - debian #", "# updaters: sets: - clair.cvss #", "# updaters: sets: - oracle #", "# updaters: sets: - photon #", "# updaters: sets: - suse #", "# updaters: sets: - ubuntu #", "# updaters: sets: - osv #", "# updaters: sets: - rhel - rhcc - clair.cvss - osv #", "# updaters: sets: - apline config: alpine: url: https://secdb.alpinelinux.org/ #", "# updaters: sets: - debian config: debian: mirror_url: https://deb.debian.org/ json_url: https://security-tracker.debian.org/tracker/data/json #", "# updaters: config: clair.cvss: url: https://nvd.nist.gov/feeds/json/cve/1.1/ #", "# updaters: sets: - oracle config: oracle-2023-updater: url: - https://linux.oracle.com/security/oval/com.oracle.elsa-2023.xml.bz2 oracle-2022-updater: url: - https://linux.oracle.com/security/oval/com.oracle.elsa-2022.xml.bz2 #", "# updaters: sets: - photon config: photon: url: https://packages.vmware.com/photon/photon_oval_definitions/ #", "# updaters: sets: - rhel config: rhel: url: https://access.redhat.com/security/data/oval/v2/PULP_MANIFEST ignore_unpatched: true 1 #", "# updaters: sets: - rhcc config: rhcc: url: https://access.redhat.com/security/data/metrics/cvemap.xml #", "# updaters: sets: - suse config: suse: url: https://support.novell.com/security/oval/ #", "# updaters: config: ubuntu: url: https://api.launchpad.net/1.0/ name: ubuntu force: 1 - name: focal 2 version: 20.04 3 #", "# updaters: sets: - osv config: osv: url: https://osv-vulnerabilities.storage.googleapis.com/ allowlist: 1 - npm - pypi #", "# matcher: disable_updaters: true #", "--- FEATURE_FIPS = true ---" ]
https://docs.redhat.com/en/documentation/red_hat_quay/3.9/html/vulnerability_reporting_with_clair_on_red_hat_quay/clair-concepts
Chapter 5. Clair security scanner
Chapter 5. Clair security scanner 5.1. Clair configuration overview Clair is configured by a structured YAML file. Each Clair node needs to specify what mode it will run in and a path to a configuration file through CLI flags or environment variables. For example: USD clair -conf ./path/to/config.yaml -mode indexer or USD clair -conf ./path/to/config.yaml -mode matcher The aforementioned commands each start two Clair nodes using the same configuration file. One runs the indexing facilities, while other runs the matching facilities. If you are running Clair in combo mode, you must supply the indexer, matcher, and notifier configuration blocks in the configuration. 5.1.1. Information about using Clair in a proxy environment Environment variables respected by the Go standard library can be specified if needed, for example: HTTP_PROXY USD export HTTP_PROXY=http://<user_name>:<password>@<proxy_host>:<proxy_port> HTTPS_PROXY . USD export HTTPS_PROXY=https://<user_name>:<password>@<proxy_host>:<proxy_port> SSL_CERT_DIR USD export SSL_CERT_DIR=/<path>/<to>/<ssl>/<certificates> NO_PROXY USD export NO_PROXY=<comma_separated_list_of_hosts_and_domains> If you are using a proxy server in your environment with Clair's updater URLs, you must identify which URL needs to be added to the proxy allowlist to ensure that Clair can access them unimpeded. For example, the osv updater requires access to https://osv-vulnerabilities.storage.googleapis.com to fetch ecosystem data dumps. In this scenario, the URL must be added to the proxy allowlist. For a full list of updater URLs, see "Clair updater URLs". You must also ensure that the standard Clair URLs are added to the proxy allowlist: https://search.maven.org/solrsearch/select https://catalog.redhat.com/api/containers/ https://access.redhat.com/security/data/metrics/repository-to-cpe.json https://access.redhat.com/security/data/metrics/container-name-repos-map.json When configuring the proxy server, take into account any authentication requirements or specific proxy settings needed to enable seamless communication between Clair and these URLs. By thoroughly documenting and addressing these considerations, you can ensure that Clair functions effectively while routing its updater traffic through the proxy. 5.1.2. Clair configuration reference The following YAML shows an example Clair configuration: http_listen_addr: "" introspection_addr: "" log_level: "" tls: {} indexer: connstring: "" scanlock_retry: 0 layer_scan_concurrency: 5 migrations: false scanner: {} airgap: false matcher: connstring: "" indexer_addr: "" migrations: false period: "" disable_updaters: false update_retention: 2 matchers: names: nil config: nil updaters: sets: nil config: nil notifier: connstring: "" migrations: false indexer_addr: "" matcher_addr: "" poll_interval: "" delivery_interval: "" disable_summary: false webhook: null amqp: null stomp: null auth: psk: nil trace: name: "" probability: null jaeger: agent: endpoint: "" collector: endpoint: "" username: null password: null service_name: "" tags: nil buffer_max: 0 metrics: name: "" prometheus: endpoint: null dogstatsd: url: "" Note The above YAML file lists every key for completeness. Using this configuration file as-is will result in some options not having their defaults set normally. 5.1.3. Clair general fields The following table describes the general configuration fields available for a Clair deployment. Field Typhttp_listen_ae Description http_listen_addr String Configures where the HTTP API is exposed. 
Default: :6060 introspection_addr String Configures where Clair's metrics and health endpoints are exposed. log_level String Sets the logging level. Requires one of the following strings: debug-color , debug , info , warn , error , fatal , panic tls String A map containing the configuration for serving the HTTP API of TLS/SSL and HTTP/2. .cert String The TLS certificate to be used. Must be a full-chain certificate. Example configuration for general Clair fields The following example shows a Clair configuration. Example configuration for general Clair fields # ... http_listen_addr: 0.0.0.0:6060 introspection_addr: 0.0.0.0:8089 log_level: info # ... 5.1.4. Clair indexer configuration fields The following table describes the configuration fields for Clair's indexer component. Field Type Description indexer Object Provides Clair indexer node configuration. .airgap Boolean Disables HTTP access to the internet for indexers and fetchers. Private IPv4 and IPv6 addresses are allowed. Database connections are unaffected. .connstring String A Postgres connection string. Accepts format as a URL or libpq connection string. .index_report_request_concurrency Integer Rate limits the number of index report creation requests. Setting this to 0 attemps to auto-size this value. Setting a negative value means unlimited. The auto-sizing is a multiple of the number of available cores. The API returns a 429 status code if concurrency is exceeded. .scanlock_retry Integer A positive integer representing seconds. Concurrent indexers lock on manifest scans to avoid clobbering. This value tunes how often a waiting indexer polls for the lock. .layer_scan_concurrency Integer Positive integer limiting the number of concurrent layer scans. Indexers will match a manifest's layer concurrently. This value tunes the number of layers an indexer scans in parallel. .migrations Boolean Whether indexer nodes handle migrations to their database. .scanner String Indexer configuration. Scanner allows for passing configuration options to layer scanners. The scanner will have this configuration pass to it on construction if designed to do so. .scanner.dist String A map with the name of a particular scanner and arbitrary YAML as a value. .scanner.package String A map with the name of a particular scanner and arbitrary YAML as a value. .scanner.repo String A map with the name of a particular scanner and arbitrary YAML as a value. Example indexer configuration The following example shows a hypothetical indexer configuration for Clair. Example indexer configuration # ... indexer: connstring: host=quay-server.example.com port=5433 dbname=clair user=clairuser password=clairpass sslmode=disable scanlock_retry: 10 layer_scan_concurrency: 5 migrations: true # ... 5.1.5. Clair matcher configuration fields The following table describes the configuration fields for Clair's matcher component. Note Differs from matchers configuration fields. Field Type Description matcher Object Provides Clair matcher node configuration. .cache_age String Controls how long users should be hinted to cache responses for. .connstring String A Postgres connection string. Accepts format as a URL or libpq connection string. .max_conn_pool Integer Limits the database connection pool size. Clair allows for a custom connection pool size. This number directly sets how many active database connections are allowed concurrently. This parameter will be ignored in a future version. Users should configure this through the connection string. 
.indexer_addr String A matcher contacts an indexer to create a vulnerability report. The location of this indexer is required. Defaults to 30m . .migrations Boolean Whether matcher nodes handle migrations to their databases. .period String Determines how often updates for new security advisories take place. Defaults to 6h . .disable_updaters Boolean Whether to run background updates or not. Default: False .update_retention Integer Sets the number of update operations to retain between garbage collection cycles. This should be set to a safe MAX value based on database size constraints. Defaults to 10m . If a value of less than 0 is provided, garbage collection is disabled. 2 is the minimum value to ensure updates can be compared to notifications. Example matcher configuration Example matcher configuration # ... matcher: connstring: >- host=<DB_HOST> port=5432 dbname=<matcher> user=<DB_USER> password=D<B_PASS> sslmode=verify-ca sslcert=/etc/clair/ssl/cert.pem sslkey=/etc/clair/ssl/key.pem sslrootcert=/etc/clair/ssl/ca.pem indexer_addr: http://clair-v4/ disable_updaters: false migrations: true period: 6h update_retention: 2 # ... 5.1.6. Clair matchers configuration fields The following table describes the configuration fields for Clair's matchers component. Note Differs from matcher configuration fields. Table 5.1. Matchers configuration fields Field Type Description matchers Array of strings Provides configuration for the in-tree matchers . .names String A list of string values informing the matcher factory about enabled matchers. If value is set to null , the default list of matchers run. The following strings are accepted: alpine-matcher , aws-matcher , debian-matcher , gobin , java-maven , oracle , photon , python , rhel , rhel-container-matcher , ruby , suse , ubuntu-matcher .config String Provides configuration to a specific matcher. A map keyed by the name of the matcher containing a sub-object which will be provided to the matchers factory constructor. For example: Example matchers configuration The following example shows a hypothetical Clair deployment that only requires only the alpine , aws , debian , oracle matchers. Example matchers configuration # ... matchers: names: - "alpine-matcher" - "aws" - "debian" - "oracle" # ... 5.1.7. Clair updaters configuration fields The following table describes the configuration fields for Clair's updaters component. Table 5.2. Updaters configuration fields Field Type Description updaters Object Provides configuration for the matcher's update manager. .sets String A list of values informing the update manager which updaters to run. If value is set to null , the default set of updaters runs the following: alpine , aws , clair.cvss , debian , oracle , photon , osv , rhel , rhcc suse , ubuntu If left blank, zero updaters run. .config String Provides configuration to specific updater sets. A map keyed by the name of the updater set containing a sub-object which will be provided to the updater set's constructor. For a list of the sub-objects for each updater, see "Advanced updater configuration". Example updaters configuration In the following configuration, only the rhel set is configured. The ignore_unpatched variable, which is specific to the rhel updater, is also defined. Example updaters configuration # ... updaters: sets: - rhel config: rhel: ignore_unpatched: false # ... 5.1.8. Clair notifier configuration fields The general notifier configuration fields for Clair are listed below. 
Field Type Description notifier Object Provides Clair notifier node configuration. .connstring String Postgres connection string. Accepts format as URL, or libpq connection string. .migrations Boolean Whether notifier nodes handle migrations to their database. .indexer_addr String A notifier contacts an indexer to create or obtain manifests affected by vulnerabilities. The location of this indexer is required. .matcher_addr String A notifier contacts a matcher to list update operations and acquire diffs. The location of this matcher is required. .poll_interval String The frequency at which the notifier will query a matcher for update operations. .delivery_interval String The frequency at which the notifier attempts delivery of created, or previously failed, notifications. .disable_summary Boolean Controls whether notifications should be summarized to one per manifest. Example notifier configuration The following notifier snippet is for a minimal configuration. Example notifier configuration # ... notifier: connstring: >- host=DB_HOST port=5432 dbname=notifier user=DB_USER password=DB_PASS sslmode=verify-ca sslcert=/etc/clair/ssl/cert.pem sslkey=/etc/clair/ssl/key.pem sslrootcert=/etc/clair/ssl/ca.pem indexer_addr: http://clair-v4/ matcher_addr: http://clair-v4/ delivery_interval: 5s migrations: true poll_interval: 15s webhook: target: "http://webhook/" callback: "http://clair-notifier/notifier/api/v1/notifications" headers: "" amqp: null stomp: null # ... 5.1.8.1. Clair webhook configuration fields The following webhook fields are available for the Clair notifier environment. Table 5.3. Clair webhook fields .webhook Object Configures the notifier for webhook delivery. .webhook.target String URL where the webhook will be delivered. .webhook.callback String The callback URL where notifications can be retrieved. The notification ID will be appended to this URL. This will typically be where the Clair notifier is hosted. .webhook.headers String A map associating a header name to a list of values. Example webhook configuration Example webhook configuration # ... notifier: # ... webhook: target: "http://webhook/" callback: "http://clair-notifier/notifier/api/v1/notifications" # ... 5.1.8.2. Clair amqp configuration fields The following Advanced Message Queuing Protocol (AMQP) fields are available for the Clair notifier environment. .amqp Object Configures the notifier for AMQP delivery. [NOTE] ==== Clair does not declare any AMQP components on its own. All attempts to use an exchange or queue are passive only and will fail. Broker administrators should setup exchanges and queues ahead of time. ==== .amqp.direct Boolean If true , the notifier will deliver individual notifications (not a callback) to the configured AMQP broker. .amqp.rollup Integer When amqp.direct is set to true , this value informs the notifier of how many notifications to send in a direct delivery. For example, if direct is set to true , and amqp.rollup is set to 5 , the notifier delivers no more than 5 notifications in a single JSON payload to the broker. Setting the value to 0 effectively sets it to 1 . .amqp.exchange Object The AMQP exchange to connect to. .amqp.exchange.name String The name of the exchange to connect to. .amqp.exchange.type String The type of the exchange. Typically one of the following: direct , fanout , topic , headers . .amqp.exchange.durability Boolean Whether the configured queue is durable. .amqp.exchange.auto_delete Boolean Whether the configured queue uses an auto_delete_policy . 
.amqp.routing_key String The name of the routing key each notification is sent with. .amqp.callback String If amqp.direct is set to false , this URL is provided in the notification callback sent to the broker. This URL should point to Clair's notification API endpoint. .amqp.uris String A list of one or more AMQP brokers to connect to, in priority order. .amqp.tls Object Configures TLS/SSL connection to an AMQP broker. .amqp.tls.root_ca String The filesystem path where a root CA can be read. .amqp.tls.cert String The filesystem path where a TLS/SSL certificate can be read. [NOTE] ==== Clair also allows SSL_CERT_DIR , as documented for the Go crypto/x509 package. ==== .amqp.tls.key String The filesystem path where a TLS/SSL private key can be read. Example AMQP configuration The following example shows a hypothetical AMQP configuration for Clair. Example AMQP configuration # ... notifier: # ... amqp: exchange: name: "" type: "direct" durable: true auto_delete: false uris: ["amqp://user:pass@host:10000/vhost"] direct: false routing_key: "notifications" callback: "http://clair-notifier/notifier/api/v1/notifications" tls: root_ca: "optional/path/to/rootca" cert: "madatory/path/to/cert" key: "madatory/path/to/key" # ... 5.1.8.3. Clair STOMP configuration fields The following Simple Text Oriented Message Protocol (STOMP) fields are available for the Clair notifier environment. .stomp Object Configures the notifier for STOMP delivery. .stomp.direct Boolean If true , the notifier delivers individual notifications (not a callback) to the configured STOMP broker. .stomp.rollup Integer If stomp.direct is set to true , this value limits the number of notifications sent in a single direct delivery. For example, if direct is set to true , and rollup is set to 5 , the notifier delivers no more than 5 notifications in a single JSON payload to the broker. Setting the value to 0 effectively sets it to 1 . .stomp.callback String If stomp.callback is set to false , the provided URL in the notification callback is sent to the broker. This URL should point to Clair's notification API endpoint. .stomp.destination String The STOMP destination to deliver notifications to. .stomp.uris String A list of one or more STOMP brokers to connect to in priority order. .stomp.tls Object Configured TLS/SSL connection to STOMP broker. .stomp.tls.root_ca String The filesystem path where a root CA can be read. [NOTE] ==== Clair also respects SSL_CERT_DIR , as documented for the Go crypto/x509 package. ==== .stomp.tls.cert String The filesystem path where a TLS/SSL certificate can be read. .stomp.tls.key String The filesystem path where a TLS/SSL private key can be read. .stomp.user String Configures login details for the STOMP broker. .stomp.user.login String The STOMP login to connect with. .stomp.user.passcode String The STOMP passcode to connect with. Example STOMP configuration The following example shows a hypothetical STOMP configuration for Clair. Example STOMP configuration # ... notifier: # ... stomp: desitnation: "notifications" direct: false callback: "http://clair-notifier/notifier/api/v1/notifications" login: login: "username" passcode: "passcode" tls: root_ca: "optional/path/to/rootca" cert: "madatory/path/to/cert" key: "madatory/path/to/key" # ... 5.1.9. Clair authorization configuration fields The following authorization configuration fields are available for Clair. Field Type Description auth Object Defines Clair's external and intra-service JWT based authentication. 
If multiple auth mechanisms are defined, Clair picks one. Currently, multiple mechanisms are unsupported. .psk String Defines pre-shared key authentication. .psk.key String A shared base64 encoded key distributed between all parties signing and verifying JWTs. .psk.iss String A list of JWT issuers to verify. An empty list accepts any issuer in a JWT claim. Example authorization configuration The following authorization snippet is for a minimal configuration. Example authorization configuration # ... auth: psk: key: MTU5YzA4Y2ZkNzJoMQ== 1 iss: ["quay"] # ... 5.1.10. Clair trace configuration fields The following trace configuration fields are available for Clair. Field Type Description trace Object Defines distributed tracing configuration based on OpenTelemetry. .name String The name of the application traces will belong to. .probability Integer The probability a trace will occur. .jaeger Object Defines values for Jaeger tracing. .jaeger.agent Object Defines values for configuring delivery to a Jaeger agent. .jaeger.agent.endpoint String An address in the <host>:<post> syntax where traces can be submitted. .jaeger.collector Object Defines values for configuring delivery to a Jaeger collector. .jaeger.collector.endpoint String An address in the <host>:<post> syntax where traces can be submitted. .jaeger.collector.username String A Jaeger username. .jaeger.collector.password String A Jaeger password. .jaeger.service_name String The service name registered in Jaeger. .jaeger.tags String Key-value pairs to provide additional metadata. .jaeger.buffer_max Integer The maximum number of spans that can be buffered in memory before they are sent to the Jaeger backend for storage and analysis. Example trace configuration The following example shows a hypothetical trace configuration for Clair. Example trace configuration # ... trace: name: "jaeger" probability: 1 jaeger: agent: endpoint: "localhost:6831" service_name: "clair" # ... 5.1.11. Clair metrics configuration fields The following metrics configuration fields are available for Clair. Field Type Description metrics Object Defines distributed tracing configuration based on OpenTelemetry. .name String The name of the metrics in use. .prometheus String Configuration for a Prometheus metrics exporter. .prometheus.endpoint String Defines the path where metrics are served. Example metrics configuration The following example shows a hypothetical metrics configuration for Clair. Example metrics configuration # ... metrics: name: "prometheus" prometheus: endpoint: "/metricsz" # ...
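The reference configuration and per-block examples above show every field in isolation. In practice, a much smaller file is enough to start a pair of Clair nodes. The following sketch combines only fields documented in this section; the database connection strings, host names, and pre-shared key are placeholders and must be replaced with values from your own environment.
http_listen_addr: 0.0.0.0:6060
log_level: info
indexer:
  connstring: host=clair-db port=5432 dbname=clair user=clair password=CHANGE_ME sslmode=disable
  migrations: true
matcher:
  connstring: host=clair-db port=5432 dbname=clair user=clair password=CHANGE_ME sslmode=disable
  indexer_addr: http://clair-indexer:6060/
  migrations: true
updaters:
  sets:
    - rhel
    - rhcc
    - clair.cvss
    - osv
auth:
  psk:
    key: "<base64-encoded-shared-key>"
    iss: ["quay"]
metrics:
  name: "prometheus"
Both nodes can read the same file: one is started with clair -conf ./config.yaml -mode indexer and the other with -mode matcher , as shown at the beginning of this section.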
[ "clair -conf ./path/to/config.yaml -mode indexer", "clair -conf ./path/to/config.yaml -mode matcher", "export HTTP_PROXY=http://<user_name>:<password>@<proxy_host>:<proxy_port>", "export HTTPS_PROXY=https://<user_name>:<password>@<proxy_host>:<proxy_port>", "export SSL_CERT_DIR=/<path>/<to>/<ssl>/<certificates>", "export NO_PROXY=<comma_separated_list_of_hosts_and_domains>", "http_listen_addr: \"\" introspection_addr: \"\" log_level: \"\" tls: {} indexer: connstring: \"\" scanlock_retry: 0 layer_scan_concurrency: 5 migrations: false scanner: {} airgap: false matcher: connstring: \"\" indexer_addr: \"\" migrations: false period: \"\" disable_updaters: false update_retention: 2 matchers: names: nil config: nil updaters: sets: nil config: nil notifier: connstring: \"\" migrations: false indexer_addr: \"\" matcher_addr: \"\" poll_interval: \"\" delivery_interval: \"\" disable_summary: false webhook: null amqp: null stomp: null auth: psk: nil trace: name: \"\" probability: null jaeger: agent: endpoint: \"\" collector: endpoint: \"\" username: null password: null service_name: \"\" tags: nil buffer_max: 0 metrics: name: \"\" prometheus: endpoint: null dogstatsd: url: \"\"", "http_listen_addr: 0.0.0.0:6060 introspection_addr: 0.0.0.0:8089 log_level: info", "indexer: connstring: host=quay-server.example.com port=5433 dbname=clair user=clairuser password=clairpass sslmode=disable scanlock_retry: 10 layer_scan_concurrency: 5 migrations: true", "matcher: connstring: >- host=<DB_HOST> port=5432 dbname=<matcher> user=<DB_USER> password=D<B_PASS> sslmode=verify-ca sslcert=/etc/clair/ssl/cert.pem sslkey=/etc/clair/ssl/key.pem sslrootcert=/etc/clair/ssl/ca.pem indexer_addr: http://clair-v4/ disable_updaters: false migrations: true period: 6h update_retention: 2", "matchers: names: - \"alpine-matcher\" - \"aws\" - \"debian\" - \"oracle\"", "updaters: sets: - rhel config: rhel: ignore_unpatched: false", "notifier: connstring: >- host=DB_HOST port=5432 dbname=notifier user=DB_USER password=DB_PASS sslmode=verify-ca sslcert=/etc/clair/ssl/cert.pem sslkey=/etc/clair/ssl/key.pem sslrootcert=/etc/clair/ssl/ca.pem indexer_addr: http://clair-v4/ matcher_addr: http://clair-v4/ delivery_interval: 5s migrations: true poll_interval: 15s webhook: target: \"http://webhook/\" callback: \"http://clair-notifier/notifier/api/v1/notifications\" headers: \"\" amqp: null stomp: null", "notifier: webhook: target: \"http://webhook/\" callback: \"http://clair-notifier/notifier/api/v1/notifications\"", "notifier: amqp: exchange: name: \"\" type: \"direct\" durable: true auto_delete: false uris: [\"amqp://user:pass@host:10000/vhost\"] direct: false routing_key: \"notifications\" callback: \"http://clair-notifier/notifier/api/v1/notifications\" tls: root_ca: \"optional/path/to/rootca\" cert: \"madatory/path/to/cert\" key: \"madatory/path/to/key\"", "notifier: stomp: desitnation: \"notifications\" direct: false callback: \"http://clair-notifier/notifier/api/v1/notifications\" login: login: \"username\" passcode: \"passcode\" tls: root_ca: \"optional/path/to/rootca\" cert: \"madatory/path/to/cert\" key: \"madatory/path/to/key\"", "auth: psk: key: MTU5YzA4Y2ZkNzJoMQ== 1 iss: [\"quay\"]", "trace: name: \"jaeger\" probability: 1 jaeger: agent: endpoint: \"localhost:6831\" service_name: \"clair\"", "metrics: name: \"prometheus\" prometheus: endpoint: \"/metricsz\"" ]
https://docs.redhat.com/en/documentation/red_hat_quay/3/html/configure_red_hat_quay/clair-vulnerability-scanner
4.2. Disk
4.2. Disk The following sections showcase scripts that monitor disk and I/O activity. 4.2.1. Summarizing Disk Read/Write Traffic This section describes how to identify which processes are performing the heaviest disk reads/writes to the system. disktop.stp disktop.stp outputs the top ten processes responsible for the heaviest reads/writes to disk. Example 4.5, "disktop.stp Sample Output" displays a sample output for this script, and includes the following data per listed process: UID - user ID. A user ID of 0 refers to the root user. PID - the ID of the listed process. PPID - the process ID of the listed process's parent process. CMD - the name of the listed process. DEVICE - which storage device the listed process is reading from or writing to. T - the type of action performed by the listed process; W refers to write, while R refers to read. BYTES - the amount of data read from or written to disk. The time and date in the output of disktop.stp are returned by the functions ctime() and gettimeofday_s() . ctime() derives calendar time in terms of seconds passed since the Unix epoch (January 1, 1970). gettimeofday_s() counts the actual number of seconds since the Unix epoch, which gives a fairly accurate human-readable timestamp for the output. In this script, USDreturn is a local variable that stores the actual number of bytes each process reads from or writes to the virtual file system. USDreturn can only be used in return probes (for example, vfs.read.return and vfs.write.return ); a minimal standalone illustration of this mechanism follows the script listing below. Example 4.5. disktop.stp Sample Output
[ "#!/usr/bin/stap # Copyright (C) 2007 Oracle Corp. # Get the status of reading/writing disk every 5 seconds, output top ten entries # This is free software,GNU General Public License (GPL); either version 2, or (at your option) any later version. # Usage: ./disktop.stp # global io_stat,device global read_bytes,write_bytes probe vfs.read.return { if (USDreturn>0) { if (devname!=\"N/A\") {/*skip read from cache*/ io_stat[pid(),execname(),uid(),ppid(),\"R\"] += USDreturn device[pid(),execname(),uid(),ppid(),\"R\"] = devname read_bytes += USDreturn } } } probe vfs.write.return { if (USDreturn>0) { if (devname!=\"N/A\") { /*skip update cache*/ io_stat[pid(),execname(),uid(),ppid(),\"W\"] += USDreturn device[pid(),execname(),uid(),ppid(),\"W\"] = devname write_bytes += USDreturn } } } probe timer.ms(5000) { /* skip non-read/write disk */ if (read_bytes+write_bytes) { printf(\"\\n%-25s, %-8s%4dKb/sec, %-7s%6dKb, %-7s%6dKb\\n\\n\", ctime(gettimeofday_s()), \"Average:\", ((read_bytes+write_bytes)/1024)/5, \"Read:\",read_bytes/1024, \"Write:\",write_bytes/1024) /* print header */ printf(\"%8s %8s %8s %25s %8s %4s %12s\\n\", \"UID\",\"PID\",\"PPID\",\"CMD\",\"DEVICE\",\"T\",\"BYTES\") } /* print top ten I/O */ foreach ([process,cmd,userid,parent,action] in io_stat- limit 10) printf(\"%8d %8d %8d %25s %8s %4s %12d\\n\", userid,process,parent,cmd, device[process,cmd,userid,parent,action], action,io_stat[process,cmd,userid,parent,action]) /* clear data */ delete io_stat delete device read_bytes = 0 write_bytes = 0 } probe end{ delete io_stat delete device delete read_bytes delete write_bytes }", "[...] Mon Sep 29 03:38:28 2008 , Average: 19Kb/sec, Read: 7Kb, Write: 89Kb UID PID PPID CMD DEVICE T BYTES 0 26319 26294 firefox sda5 W 90229 0 2758 2757 pam_timestamp_c sda5 R 8064 0 2885 1 cupsd sda5 W 1678 Mon Sep 29 03:38:38 2008 , Average: 1Kb/sec, Read: 7Kb, Write: 1Kb UID PID PPID CMD DEVICE T BYTES 0 2758 2757 pam_timestamp_c sda5 R 8064 0 2885 1 cupsd sda5 W 1678" ]
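The role of the return value in return probes can be seen in isolation before working through the full listing above. The following one-probe sketch is illustrative only and is not part of disktop.stp: it prints the process name and the number of bytes reported by each successful VFS read, then exits after ten seconds. Run it as root with the stap command, for example stap read-bytes.stp (the file name is arbitrary).
probe vfs.read.return {
  if ($return > 0)
    printf("%s (pid %d) read %d bytes\n", execname(), pid(), $return)
}
probe timer.s(10) { exit() }
Because the probe fires on every read, the output is verbose; the aggregation arrays and five-second timer in disktop.stp exist precisely to summarize this volume of events.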
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/systemtap_beginners_guide/mainsect-disk
Installing on IBM Z and IBM LinuxONE
Installing on IBM Z and IBM LinuxONE OpenShift Container Platform 4.16 Installing OpenShift Container Platform on IBM Z and IBM LinuxONE Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.16/html/installing_on_ibm_z_and_ibm_linuxone/index
Chapter 3. Commonly required logs for troubleshooting
Chapter 3. Commonly required logs for troubleshooting Some of the commonly used logs for troubleshooting OpenShift Data Foundation are listed, along with the commands to generate them. Generating logs for a specific pod: Generating logs for Ceph or OpenShift Data Foundation cluster: Important Currently, the rook-ceph-operator logs do not provide any information about the failure and this acts as a limitation in troubleshooting issues, see Enabling and disabling debug logs for rook-ceph-operator . Generating logs for plugin pods like cephfs or rbd to detect any problem in the PVC mount of the app-pod: To generate logs for all the containers in the CSI pod: Generating logs for cephfs or rbd provisioner pods to detect problems if PVC is not in BOUND state: To generate logs for all the containers in the CSI pod: Generating OpenShift Data Foundation logs using cluster-info command: When using Local Storage Operator, generating logs can be done using cluster-info command: Check the OpenShift Data Foundation operator logs and events. To check the operator logs : <ocs-operator> To check the operator events : Get the OpenShift Data Foundation operator version and channel. Example output : Example output : Confirm that the installplan is created. Verify the image of the components post updating OpenShift Data Foundation. Check the node on which the pod of the component you want to verify the image is running. For Example : Example output: dell-r440-12.gsslab.pnq2.redhat.com is the node-name . Check the image ID. <node-name> Is the name of the node on which the pod of the component you want to verify the image is running. For Example : Take a note of the IMAGEID and map it to the Digest ID on the Rook Ceph Operator page. Additional resources Using must-gather 3.1. Adjusting verbosity level of logs The amount of space consumed by debugging logs can become a significant issue. Red Hat OpenShift Data Foundation offers a method to adjust, and therefore control, the amount of storage to be consumed by debugging logs. In order to adjust the verbosity levels of debugging logs, you can tune the log levels of the containers responsible for container storage interface (CSI) operations. In the container's yaml file, adjust the following parameters to set the logging levels: CSI_LOG_LEVEL - defaults to 5 CSI_SIDECAR_LOG_LEVEL - defaults to 1 The supported values are 0 through 5 . Use 0 for general useful logs, and 5 for trace level verbosity.
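As a rough illustration of these settings, the environment variables could look as follows in the container specification that carries them. This is a sketch only; the exact resource that exposes CSI_LOG_LEVEL and CSI_SIDECAR_LOG_LEVEL (a Deployment, ConfigMap, or operator configuration) depends on your OpenShift Data Foundation version and deployment method.
env:
  - name: CSI_LOG_LEVEL
    value: "5"   # trace-level verbosity; use "0" for general logs
  - name: CSI_SIDECAR_LOG_LEVEL
    value: "1"   # default sidecar verbosity
Because higher levels consume noticeably more storage for logs, revert to the defaults after the issue has been reproduced and the required logs have been collected.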
[ "oc logs <pod-name> -n <namespace>", "oc logs rook-ceph-operator-<ID> -n openshift-storage", "oc logs csi-cephfsplugin-<ID> -n openshift-storage -c csi-cephfsplugin", "oc logs csi-rbdplugin-<ID> -n openshift-storage -c csi-rbdplugin", "oc logs csi-cephfsplugin-<ID> -n openshift-storage --all-containers", "oc logs csi-rbdplugin-<ID> -n openshift-storage --all-containers", "oc logs csi-cephfsplugin-provisioner-<ID> -n openshift-storage -c csi-cephfsplugin", "oc logs csi-rbdplugin-provisioner-<ID> -n openshift-storage -c csi-rbdplugin", "oc logs csi-cephfsplugin-provisioner-<ID> -n openshift-storage --all-containers", "oc logs csi-rbdplugin-provisioner-<ID> -n openshift-storage --all-containers", "oc cluster-info dump -n openshift-storage --output-directory=<directory-name>", "oc cluster-info dump -n openshift-local-storage --output-directory=<directory-name>", "oc logs <ocs-operator> -n openshift-storage", "oc get pods -n openshift-storage | grep -i \"ocs-operator\" | awk '{print USD1}'", "oc get events --sort-by=metadata.creationTimestamp -n openshift-storage", "oc get csv -n openshift-storage", "NAME DISPLAY VERSION REPLACES PHASE mcg-operator.v4.14.0 NooBaa Operator 4.14.0 Succeeded ocs-operator.v4.14.0 OpenShift Container Storage 4.14.0 Succeeded odf-csi-addons-operator.v4.14.0 CSI Addons 4.14.0 Succeeded odf-operator.v4.14.0 OpenShift Data Foundation 4.14.0 Succeeded", "oc get subs -n openshift-storage", "NAME PACKAGE SOURCE CHANNEL mcg-operator-stable-4.14-redhat-operators-openshift-marketplace mcg-operator redhat-operators stable-4.14 ocs-operator-stable-4.14-redhat-operators-openshift-marketplace ocs-operator redhat-operators stable-4.14 odf-csi-addons-operator odf-csi-addons-operator redhat-operators stable-4.14 odf-operator odf-operator redhat-operators stable-4.14", "oc get installplan -n openshift-storage", "oc get pods -o wide | grep <component-name>", "oc get pods -o wide | grep rook-ceph-operator", "rook-ceph-operator-566cc677fd-bjqnb 1/1 Running 20 4h6m 10.128.2.5 rook-ceph-operator-566cc677fd-bjqnb 1/1 Running 20 4h6m 10.128.2.5 dell-r440-12.gsslab.pnq2.redhat.com <none> <none> <none> <none>", "oc debug node/<node name>", "chroot /host", "crictl images | grep <component>", "crictl images | grep rook-ceph" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.14/html/troubleshooting_openshift_data_foundation/commonly-required-logs_rhodf
probe::socket.readv
probe::socket.readv Name probe::socket.readv - Receiving a message via sock_readv Synopsis Values protocol Protocol value flags Socket flags value name Name of this probe state Socket state value size Message size in bytes type Socket type value family Protocol family value Context The message receiver Description Fires at the beginning of receiving a message on a socket via the sock_readv function
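The variables listed above can be printed directly from a short script. The following sketch is illustrative, assuming SystemTap is installed and the script is run as root; it reports the calling process together with several of the documented probe variables each time a message is received through sock_readv.
probe socket.readv {
  printf("%s (pid %d): size=%d family=%d type=%d state=%d\n",
         execname(), pid(), size, family, type, state)
}
On a busy system this probe fires frequently, so consider filtering on execname() or pid() when tracing a specific application.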
[ "socket.readv" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/systemtap_tapset_reference/api-socket-readv
Chapter 4. Unsupported and deprecated features
Chapter 4. Unsupported and deprecated features Cryostat 2.3 removes some features because of their high maintenance costs, low community interest, and better alternative solutions. Removed static Kubernetes environment variable-based target discovery In Cryostat 2.3, the io.cryostat.platform.internal.KubeEnvPlatformStrategy value is removed as an option for the CRYOSTAT_PLATFORM environment variable.
null
https://docs.redhat.com/en/documentation/red_hat_build_of_cryostat/2/html/release_notes_for_the_red_hat_build_of_cryostat_2.3/unsupported-deprecated-features_cryostat
Administration Guide
Administration Guide Red Hat Directory Server 11 Basic and advanced administration of Directory Server
null
https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/administration_guide/index
Storage APIs
Storage APIs OpenShift Container Platform 4.14 Reference guide for storage APIs Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/storage_apis/index
Chapter 29. Service Discovery
Chapter 29. Service Discovery With the Service Discovery feature provided by 3scale, you can import API services from OpenShift. 29.1. About Service Discovery When Service Discovery is configured, 3scale scans for discoverable API services that are running in the same OpenShift cluster and automatically imports the associated API definitions into 3scale. Additionally, 3scale can update the API integration and its specification, based on OpenAPI Specification (OAS), to resynchronize them with the cluster. Service Discovery offers the following features: Uses the cluster API to query for services that are properly annotated for discovery. Configures 3scale to access the service using an internal endpoint inside the cluster. Imports the API service specification as a 3scale ActiveDocs. Supports OpenShift, Red Hat single sign-on and Red Hat build of Keycloak authorization flows. Works with Red Hat Fuse, starting with Fuse version 7.2. When you import a discoverable service, it keeps its namespace within the project it belongs to. The imported service becomes a new customer-facing API, product, and its corresponding internal API, backend. For 3scale on premises, the 3scale API provider may have its own namespace and services. Discovered services can co-exist with 3scale existing and native services. Fuse discoverable services are deployed to the Fuse production namespace. 29.2. Criteria for a discoverable service If you want to have 3scale discover an API service in an OpenShift (OCP) cluster, said OCP service must meet the criteria for each element below: Content-Type header The API specification's Content-Type header must be one of the following values: application/swagger+json application/vnd.oai.openapi+json application/json OpenShift Service Object YAML definition The OpenShift Service Object YAML definition must include the following metadata: The discovery.3scale.net label: (required) Set to "true". 3scale uses this label when it executes the selector definition to find all services that need discovery. The following annotations: discovery.3scale.net/discovery-version : (optional) The version of the 3scale discovery process. discovery.3scale.net/scheme : (required) The scheme part of the URL where the service is hosted. Possible values are "http" or "https". discovery.3scale.net/port : (required) The port number of the service within the cluster. discovery.3scale.net/path : (optional) The relative base path of the URL where the service is hosted. You can omit this annotation when the path is at root, "/". discovery.3scale.net/description-path : The path to the OpenAPI service description document for the service. For example: If you are an OpenShift user with administration privileges, you can view the API service's YAML file in the OpenShift Console: Select Applications> Services . Select the service, for example i-task-api , to open its Details page. Select Actions> Edit YAML to open the YAML file. When you have finished viewing it, select Cancel . Clusters with the ovs-networkpolicy plugin To allow traffic between the OpenShift and 3scale projects, clusters that have the ovs-networkpolicy plugin require NetworkPolicy objects created within their application project. For more information about configuring a NetworkPolicy object, see About network policy . 29.3. Considerations for configuring OpenShift to enable Service Discovery As a 3scale administrator, you have two options to configure Service Discovery: with or without an Open Authorization (OAuth) server. 
If you configure 3scale Service Discovery with an OAuth server, this is what happens when a user signs in to 3scale: The user is redirected to the OAuth Server. If the user is not already logged in to the OAuth Server, the user is prompted to log in. If it is the first time that the user implements 3scale Service Discovery with single sign-on (SSO), the OAuth server prompts for authorization to perform the relevant actions. The user is redirected back to 3scale. To configure Service Discovery with an OAuth server, you have the following options: Configuring Service Discovery with an OpenShift OAuth server Configuring Service Discovery with a Red Hat single sign-on server (Keycloak) If you configure Service Discovery without an OAuth server , when a user signs in to 3scale, the user is not redirected. Instead, the 3scale Single Service Account provides a seamless authentication to the cluster for the Service Discovery. All 3scale tenant administration users have the same access level to the cluster while discovering API services through 3scale. 29.4. Configuring Service Discovery with an OpenShift OAuth server As a 3scale system administrator, allow users to individually authenticate and authorize 3scale to discover APIs by using OpenShift built-in OAuth server. Prerequisites You must deploy 3scale 2.15 to an OpenShift Container Platform (OCP) 4.x cluster. 3scale users that want to use Service Discovery in 3scale must have access to the OpenShift cluster. Procedure Create an OpenShift OAuth client for 3scale. For more details, see the OpenShift Authentication documentation . In the following example, replace <provide-a-client-secret> with a secret that you generate and replace <3scale-master-domain-route> with the URL to access the 3scale Master Admin Portal: USD oc project default USD cat <<-EOF | oc create -f - kind: OAuthClient apiVersion: v1 metadata: name: 3scale secret: "<provide-a-client-secret>" redirectURIs: - "<3scale-master-domain-route>" grantMethod: prompt EOF Open the 3scale Service Discovery settings file: USD oc project <3scale-project> USD oc edit configmap system Configure the following settings: service_discovery.yml: production: enabled: true authentication_method: oauth oauth_server_type: builtin client_id: '3scale' client_secret: '<choose-a-client-secret>' Ensure that users have proper permissions to view cluster projects containing discoverable services. To give an administrator user, represented by <user> , the view permission for the <namespace> project containing a service to be discovered, use this command: USD oc adm policy add-role-to-user view <user> -n <namespace> After modifying configmap , you must redeploy the system-app and system-sidekiq pods to apply the changes: USD oc rollout restart deployment/system-app USD oc rollout restart deployment/system-sidekiq Check the status of the rollout to ensure it has finished: USD oc rollout status deployment/system-app USD oc rollout status deployment/system-sidekiq Additional resources For more information about OpenShift OAuth tokens, see Configuring the internal OAuth server . 29.5. Configuring Service Discovery with a Red Hat single sign-on server (Keycloak) As a system administrator, allow users to individually authenticate and authorize 3scale to discover services by using Red Hat single sign-on for OpenShift . For an example about configuring OpenShift to use the Red Hat single sign-on deployment as the authorization gateway for OpenShift, you can refer to this workflow . 
Prerequisites You must deploy 3scale 2.15 to an OpenShift Container Platform (OCP) 4.x cluster. 3scale users that want to use Service Discovery in 3scale must have access to the OpenShift cluster. Procedure Create an OAuth client for 3scale in Red Hat OAuth server (Keycloak). Note In the client configuration, verify that the username maps to preferred_username , so that OpenShift can link accounts. Edit 3scale Service Discovery settings: USD oc project <3scale-project> USD oc edit configmap system Verify that the following settings are configured, where `<the-client-secret-from-Keycloak> is the value that Keycloak generated automatically when you created the OAuth client: service_discovery.yml: production: enabled: true authentication_method: oauth oauth_server_type: rh_sso client_id: '3scale' client_secret: '<the-client-secret-from-Keycloak>' Make sure that users have proper permissions to view cluster projects containing discoverable services. For example, to give <user> view permission for the <namespace> project, use this command: USD oc adm policy add-role-to-user view <user> -n <namespace> After modifying configmap , you must redeploy the system-app and system-sidekiq pods to apply the changes. Additional resources Token lifespan: By default, session tokens expire after one minute, as indicated in Keycloak - Session and Token Timeouts . However, it is recommended to set the timeout to an acceptable value of one day. 29.6. Configuring Service Discovery without an OAuth server To configure the 3scale Service Discovery without an OAuth server, you can use 3scale Single Service Account to authenticate to OpenShift API service. Prerequisites You must deploy 3scale 2.15 to an OpenShift Container Platform (OCP) 4.x cluster. 3scale users that want to use Service Discovery in 3scale must have access to the OpenShift cluster. Procedure Verify that the 3scale project is the current project. USD oc project <3scale-project> Open the 3scale Service Discovery settings in an editor. USD oc edit configmap system Verify that the following settings are configured. service_discovery.yml: production: enabled: <%= cluster_token_file_exists = File.exists?(cluster_token_file_path = '/var/run/secrets/kubernetes.io/serviceaccount/token') %> bearer_token: "<%= File.read(cluster_token_file_path) if cluster_token_file_exists %>" authentication_method: service_account Provide the 3scale deployment amp service account with the relevant permissions to view projects containing discoverable services by following one of these options: Grant the 3scale deployment amp service account with view cluster level permission. USD oc adm policy add-cluster-role-to-user view system:serviceaccount:<3scale-project>:amp Apply a more restrictive policy as described in OpenShift - Service Accounts . 29.7. Importing discovered services From the OpenShift cluster, import a new API service that conforms to the OpenAPI Specification. This API is managed with 3scale. Prerequisites The OpenShift administrator has configured Service Discovery for the OpenShift cluster. For example, the OpenShift administrator must have enabled 3scale discovery by editing the Fuse Online custom resource to specify the URL for their 3scale user interface. The 3scale administrator has configured the 3scale deployment for Service Discovery as described in About Service Discovery . 
The 3scale administrator has granted your 3scale user or service account (depending on the configured authentication mode) the necessary privileges to view the API service and its namespace. For more details, see Authorizing 3scale access to an OpenShift project . The API has the correct annotations that enable Service Discovery, as described in Criteria for a discoverable service . The API service is deployed on the same OpenShift cluster where 3scale is installed. You know the API's service name and its namespace (OpenShift project). Procedure Log in to the 3scale Admin Portal. From APIs on the Admin Portal Dashboard, click Create Product . Click Import from OpenShift . If the OAuth token is not valid, the OpenShift project administrator should authorize access to the 3scale user as described in Authorizing 3scale access to an OpenShift project . In the Namespace field, specify or select the OpenShift project that contains the API, for example fuse . In the Name field, type or select the name of an OpenShift service within that namespace, for example i-task-api . Click Create Product . Wait for the new API service to be asynchronously imported into 3scale. A message appears in the upper right section of the Admin Portal: The service will be imported shortly. You will receive a notification when it is done. Additional resources See the Red Hat 3scale API Management documentation for information about managing the API. 29.8. Authorizing 3scale access to an OpenShift project As an OpenShift project administrator, you can authorize a 3scale user to access a namespace when the OAuth token is not valid. Prerequisites You have the credentials of an OpenShift project administrator. The OpenShift administrator has configured Service Discovery for the OpenShift cluster. For example, for Fuse Online APIs, the OpenShift administrator must set the Fuse Online service's CONTROLLERS_EXPOSE_VIA3SCALE environment variable to true . The 3scale administrator has configured the 3scale deployment for Service Discovery as described in Criteria for a discoverable service . You know the API service name and its namespace (OpenShift project). The API service is deployed on the same OpenShift cluster where 3scale is installed. The API has the correct annotations that enable Service Discovery, as described in Criteria for a discoverable service . Procedure Click the Authenticate to enable this option link. Log in to OpenShift using the namespace administrator credentials. Authorize access to the 3scale user by clicking Allow selected permissions . Additional resources See the Red Hat 3scale API Management documentation for information about managing the API. 29.9. Updating services You can update (refresh) an existing API service in 3scale with the current definitions for the service in the cluster. Prerequisites The service was previously imported from the cluster, as described in Importing discovered services . Procedure Log in to the 3scale Admin Portal. Navigate to the Overview page of the API product. Click the Refresh link next to Source: OpenShift . Wait for the new API service to be asynchronously imported into 3scale.
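The import and refresh procedures above both rely on the discovery metadata described in Criteria for a discoverable service. If an existing OpenShift service does not yet carry that metadata, it can be labeled and annotated in place rather than by editing its YAML definition. The following commands are a sketch: the service name i-task-api, the fuse namespace, and the scheme, port, and path values are taken from the earlier example and must be adapted to your own API.
oc label service i-task-api discovery.3scale.net=true -n fuse
oc annotate service i-task-api -n fuse \
    discovery.3scale.net/scheme=https \
    discovery.3scale.net/port=8081 \
    discovery.3scale.net/path=/api \
    discovery.3scale.net/description-path=/api/openapi/json
After the label and annotations are applied, the service should be selectable in the Namespace and Name fields when you click Import from OpenShift.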
[ "metadata: annotations: discovery.3scale.net/scheme: \"https\" discovery.3scale.net/port: '8081' discovery.3scale.net/path: \"/api\" discovery.3scale.net/description-path: \"/api/openapi/json\" labels: discovery.3scale.net: \"true\" name: i-task-api namespace: fuse", "oc project default USD cat <<-EOF | oc create -f - kind: OAuthClient apiVersion: v1 metadata: name: 3scale secret: \"<provide-a-client-secret>\" redirectURIs: - \"<3scale-master-domain-route>\" grantMethod: prompt EOF", "oc project <3scale-project> oc edit configmap system", "service_discovery.yml: production: enabled: true authentication_method: oauth oauth_server_type: builtin client_id: '3scale' client_secret: '<choose-a-client-secret>'", "oc adm policy add-role-to-user view <user> -n <namespace>", "oc rollout restart deployment/system-app oc rollout restart deployment/system-sidekiq", "oc rollout status deployment/system-app oc rollout status deployment/system-sidekiq", "oc project <3scale-project> oc edit configmap system", "service_discovery.yml: production: enabled: true authentication_method: oauth oauth_server_type: rh_sso client_id: '3scale' client_secret: '<the-client-secret-from-Keycloak>'", "oc adm policy add-role-to-user view <user> -n <namespace>", "oc project <3scale-project>", "oc edit configmap system", "service_discovery.yml: production: enabled: <%= cluster_token_file_exists = File.exists?(cluster_token_file_path = '/var/run/secrets/kubernetes.io/serviceaccount/token') %> bearer_token: \"<%= File.read(cluster_token_file_path) if cluster_token_file_exists %>\" authentication_method: service_account", "oc adm policy add-cluster-role-to-user view system:serviceaccount:<3scale-project>:amp" ]
https://docs.redhat.com/en/documentation/red_hat_3scale_api_management/2.15/html/admin_portal_guide/service-discovery_service-discovery
13.2.3. Keeping Quotas Accurate
13.2.3. Keeping Quotas Accurate Whenever a file system is not unmounted cleanly (due to a system crash, for example), it is necessary to run quotacheck . However, quotacheck can be run on a regular basis, even if the system has not crashed. Running the following command periodically keeps the quotas more accurate (the options used have been described in Section 13.1.1, "Enabling Quotas" ): The easiest way to run it periodically is to use cron . As root, either use the crontab -e command to schedule a periodic quotacheck or place a script that runs quotacheck in any one of the following directories (using whichever interval best matches your needs): /etc/cron.hourly /etc/cron.daily /etc/cron.weekly /etc/cron.monthly The most accurate quota statistics can be obtained when the file system(s) analyzed are not in active use. Thus, the cron task should be scheduled during a time when the file system(s) are used the least. If this time varies for different file systems with quotas, run quotacheck for each file system at different times with multiple cron tasks. Refer to Chapter 34, Automated Tasks for more information about configuring cron .
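As a concrete illustration of the cron-based approach, the following is a minimal sketch of a script that could be placed in /etc/cron.weekly ; the directory and file name are examples, so pick whichever interval suits your systems.

#!/bin/bash
# Sketch: periodic quota consistency check, saved for example as
# /etc/cron.weekly/run-quotacheck and made executable with chmod +x.
# -a  check all quota-enabled file systems listed in /etc/mtab
# -v  verbose reporting
# -u  check user quotas
# -g  check group quotas
/sbin/quotacheck -avug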
[ "quotacheck -avug" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/managing_disk_quotas-keeping_quotas_accurate
Chapter 5. Setting the number of Directory Server threads
Chapter 5. Setting the number of Directory Server threads The number of threads Directory Server uses to handle simultaneous connections affects the performance of the server. For example, if all threads are busy handling time-consuming tasks, such as add operations, new incoming connections are queued until a free thread can process the request. If the server provides a low number of CPU threads, configuring a higher number of threads can increase the performance. However, on a server with many CPU threads, setting too high a value does not further increase the performance. By default, Directory Server uses an auto-tuning setting that calculates the number of threads. This number is based on the hardware resources of the server when the instance starts. Warning Avoid setting the number of threads manually. Use the auto-tuning setting instead. With automatic thread tuning enabled, Directory Server uses the following optimized number of threads: CPU threads number Directory Server threads number 1-16 16 17-512 The Directory Server thread number matches the CPU thread number in the system. For example, if your system has 24 CPU threads, Directory Server uses 24 threads. The maximum number of Directory Server threads is 512. 512 and more 512. Directory Server applies the recommended maximum number of threads. 5.1. Enabling automatic thread tuning using the command line By default, Directory Server automatically sets the number of threads based on the available hardware. However, in certain cases, you can manually enable this auto-tuning feature by using the command line. Procedure To enable the auto-tuning feature, set the nsslapd-threadnumber attribute value to -1 by running the following command: Verification Verify the number of threads that Directory Server now uses by running the following command: Note The command retrieves the number of threads that Directory Server calculated based on the correct hardware resources. Additional resources The nsslapd-threadnumber attribute description . 5.2. Enabling automatic thread tuning using the web console By default, Directory Server automatically sets the number of threads based on the available hardware. However, in certain cases, you can manually enable this auto-tuning feature by using the web console. Prerequisites You are logged in to the instance in the web console. For more details, see Logging in to the Directory Server by using the web console . Procedure Navigate to Server Tuning & Limits . In the Number Of Worker Threads field, set the number of threads to -1 . Click Save Settings . Additional resources The nsslapd-threadnumber attribute description . 5.3. Manually setting the number of threads using the command line In certain situations, it is necessary to manually set a fixed number of Directory Server threads. For example, if you do not use the auto-tuning setting and change the number of CPU cores in a virtual machine, adjusting the number of Directory Server threads can improve the performance. You can also use this procedure to re-enable the auto-tuning setting if you set a specific number of threads earlier. Procedure Set the number of threads Directory Server should use: # dsconf -D " cn=Directory Manager " ldap://server.example.com config replace nsslapd-threadnumber=" 64 " Successfully replaced "nsslapd-threadnumber" Set the nsslapd-threadnumber parameter to -1 to enable the auto-tuning setting. 5.4. Manually setting the number of threads using the web console In certain situations, it is necessary to manually set a fixed number of Directory Server threads.
For example, if you do not use the auto-tuning setting and change the number of CPU cores in a virtual machine, adjusting the number of Directory Server threads can improve the performance. Note that you can use the web console to re-enable the auto-tuning setting if you set a specific number of threads earlier. Prerequisites You are logged in to the instance in the web console. Procedure Navigate to Server Tuning & Limits . In the Number Of Worker Threads field, set the number of threads. Click Save Settings .
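The following sketch combines the commands above into a quick check: it prints the number of CPU threads on the host, shows the thread count the instance currently uses, and optionally switches back to auto-tuning. The server URL and bind DN are the example values used in this chapter.

#!/usr/bin/env bash
# Sketch: compare host CPU threads with the Directory Server thread setting.
# URL and bind DN are example values; dsconf prompts for the password.
set -euo pipefail

URL="ldap://server.example.com"
BIND_DN="cn=Directory Manager"

echo "CPU threads on this host: $(nproc)"

# Show the thread count the instance currently uses.
dsconf -D "$BIND_DN" "$URL" config get nsslapd-threadnumber

# Uncomment to switch back to automatic thread tuning.
# dsconf -D "$BIND_DN" "$URL" config replace nsslapd-threadnumber="-1"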
[ "dsconf -D \"cn=Directory Manager\" ldap:// server.example.com config replace nsslapd-threadnumber=\"-1\" Successfully replaced \"nsslapd-threadnumber\"", "dsconf -D \"cn=Directory Manager\" ldap:// server.example.com config get nsslapd-threadnumber nsslapd-threadnumber: 16", "dsconf -D \" cn=Directory Manager \" ldap://server.example.com config replace nsslapd-threadnumber=\" 64 \" Successfully replaced \"nsslapd-threadnumber\"" ]
https://docs.redhat.com/en/documentation/red_hat_directory_server/12/html/tuning_the_performance_of_red_hat_directory_server/assembly_setting-the-number-of-directory-server-threads_assembly_improving-the-performance-of-views
13.2. Setting the NIS Port for Identity Management
13.2. Setting the NIS Port for Identity Management The IdM server binds to its NIS services over a random port that is selected when the server starts. It sends that port assignment to the portmapper so that NIS clients know what port to use to contact the IdM server. Administrators may need to open a firewall for NIS clients or may have other services that need to know the port number in advance and need that port number to remain the same. In that case, an administrator can specify the port to use. Note Any available port number below 1024 can be used for the NIS Plug-in setting. The NIS configuration is in the NIS Plug-in in Identity Management's internal Directory Server instance. To specify the port: Enable the NIS listener and compatibility plug-ins: Edit the plug-in configuration and add the port number as an argument. For example, to set the port to 514: Restart the Directory Server to load the new plug-in configuration.
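A consolidated sketch of the procedure is shown below. The port number 514 matches the example above; the final rpcinfo check is an assumption about one way to confirm that the NIS listener registered the expected port with the portmapper and may need adjusting for your environment.

#!/usr/bin/env bash
# Sketch: enable the NIS plug-ins, pin the NIS listener to port 514,
# and restart Directory Server. Run as root on the IdM server.
set -euo pipefail

ipa-nis-manage enable
ipa-compat-manage enable

# Add the port as a plug-in argument (prompts for the Directory Manager password).
ldapmodify -x -D 'cn=directory manager' -W <<'EOF'
dn: cn=NIS Server,cn=plugins,cn=config
changetype: modify
add: nsslapd-pluginarg0
nsslapd-pluginarg0: 514
EOF

# Restart Directory Server to load the new plug-in configuration.
service dirsrv restart

# Optional check: list the port the NIS service registered with the portmapper.
rpcinfo -p localhost | grep -i ypserv || true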
[ "ipa-nis-manage enable ipa-compat-manage enable", "ldapmodify -x -D 'cn=directory manager' -w secret dn: cn=NIS Server,cn=plugins,cn=config changetype: modify add: nsslapd-pluginarg0 nsslapd-pluginarg0: 514 modifying entry \"cn=NIS Server,cn=plugins,cn=config\"", "service dirsrv restart" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/identity_management_guide/nis-port
Chapter 17. Using the Red Hat Marketplace
Chapter 17. Using the Red Hat Marketplace The Red Hat Marketplace is an open cloud marketplace that makes it easy to discover and access certified software for container-based environments that run on public clouds and on-premises. 17.1. Red Hat Marketplace features Cluster administrators can use the Red Hat Marketplace to manage software on OpenShift Container Platform, give developers self-service access to deploy application instances, and correlate application usage against a quota. 17.1.1. Connect OpenShift Container Platform clusters to the Marketplace Cluster administrators can install a common set of applications on OpenShift Container Platform clusters that connect to the Marketplace. They can also use the Marketplace to track cluster usage against subscriptions or quotas. Users that they add by using the Marketplace have their product usage tracked and billed to their organization. During the cluster connection process , a Marketplace Operator is installed that updates the image registry secret, manages the catalog, and reports application usage. 17.1.2. Install applications Cluster administrators can install Marketplace applications from within OperatorHub in OpenShift Container Platform, or from the Marketplace web application . You can access installed applications from the web console by clicking Operators > Installed Operators . 17.1.3. Deploy applications from different perspectives You can deploy Marketplace applications from the web console's Administrator and Developer perspectives. The Developer perspective Developers can access newly installed capabilities by using the Developer perspective. For example, after a database Operator is installed, a developer can create an instance from the catalog within their project. Database usage is aggregated and reported to the cluster administrator. This perspective does not include Operator installation and application usage tracking. The Administrator perspective Cluster administrators can access Operator installation and application usage information from the Administrator perspective. They can also launch application instances by browsing custom resource definitions (CRDs) in the Installed Operators list.
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/building_applications/red-hat-marketplace
Chapter 2. Installing OpenShift on a single node
Chapter 2. Installing OpenShift on a single node You can install single-node OpenShift by using either the web-based Assisted Installer or the coreos-installer tool to generate a discovery ISO image. The discovery ISO image writes the Red Hat Enterprise Linux CoreOS (RHCOS) system configuration to the target installation disk, so that you can run a single-cluster node to meet your needs. Consider using single-node OpenShift when you want to run a cluster in a low-resource or an isolated environment for testing, troubleshooting, training, or small-scale project purposes. 2.1. Installing single-node OpenShift using the Assisted Installer To install OpenShift Container Platform on a single node, use the web-based Assisted Installer wizard to guide you through the process and manage the installation. 2.1.1. Generating the discovery ISO with the Assisted Installer Installing OpenShift Container Platform on a single node requires a discovery ISO, which the Assisted Installer can generate. Procedure On the administration host, open a browser and navigate to Red Hat OpenShift Cluster Manager . Click Create New Cluster to create a new cluster. In the Cluster name field, enter a name for the cluster. In the Base domain field, enter a base domain. For example: All DNS records must be subdomains of this base domain and include the cluster name, for example: Note You cannot change the base domain or cluster name after cluster installation. Select Install single node OpenShift (SNO) and complete the rest of the wizard steps. Download the discovery ISO. Complete the remaining Assisted Installer wizard steps. Important Ensure that you take note of the discovery ISO URL for installing with virtual media. If you enable OpenShift Virtualization during this process, you must have a second local storage device of at least 50GiB for your virtual machines. Additional resources Persistent storage using logical volume manager storage What you can do with OpenShift Virtualization 2.1.2. Installing single-node OpenShift with the Assisted Installer Use the Assisted Installer to install the single-node cluster. Prerequisites Ensure that the boot drive order in the server BIOS settings defaults to booting the server from the target installation disk. Procedure Attach the discovery ISO image to the target host. Boot the server from the discovery ISO image. The discovery ISO image writes the system configuration to the target installation disk and automatically triggers a server restart. On the administration host, return to the browser. Wait for the host to appear in the list of discovered hosts. If necessary, reload the Assisted Clusters page and select the cluster name. Complete the install wizard steps. Add networking details, including a subnet from the available subnets. Add the SSH public key if necessary. Monitor the installation's progress. Watch the cluster events. After the installation process finishes writing the operating system image to the server's hard disk, the server restarts. Optional: Remove the discovery ISO image. The server restarts several times automatically, deploying the control plane. Additional resources Creating a bootable ISO image on a USB drive Booting from an HTTP-hosted ISO image using the Redfish API Adding worker nodes to single-node OpenShift clusters 2.2. Installing single-node OpenShift manually To install OpenShift Container Platform on a single node, first generate the installation ISO, and then boot the server from the ISO. 
You can monitor the installation using the openshift-install installation program. 2.2.1. Generating the installation ISO with coreos-installer Installing OpenShift Container Platform on a single node requires an installation ISO, which you can generate with the following procedure. Prerequisites Install podman . Procedure Set the OpenShift Container Platform version: USD export OCP_VERSION=<ocp_version> 1 1 Replace <ocp_version> with the current version, for example, latest-4.12 Set the host architecture: USD export ARCH=<architecture> 1 1 Replace <architecture> with the target host architecture, for example, aarch64 or x86_64 . Download the OpenShift Container Platform client ( oc ) and make it available for use by entering the following commands: USD curl -k https://mirror.openshift.com/pub/openshift-v4/clients/ocp/USDOCP_VERSION/openshift-client-linux.tar.gz -o oc.tar.gz USD tar zxf oc.tar.gz USD chmod +x oc Download the OpenShift Container Platform installer and make it available for use by entering the following commands: USD curl -k https://mirror.openshift.com/pub/openshift-v4/clients/ocp/USDOCP_VERSION/openshift-install-linux.tar.gz -o openshift-install-linux.tar.gz USD tar zxvf openshift-install-linux.tar.gz USD chmod +x openshift-install Retrieve the RHCOS ISO URL by running the following command: USD export ISO_URL=USD(./openshift-install coreos print-stream-json | grep location | grep USDARCH | grep iso | cut -d\" -f4) Download the RHCOS ISO: USD curl -L USDISO_URL -o rhcos-live.iso Prepare the install-config.yaml file: apiVersion: v1 baseDomain: <domain> 1 compute: - architecture: amd64 2 name: worker replicas: 0 3 controlPlane: architecture: amd64 name: master replicas: 1 4 metadata: name: <name> 5 networking: 6 clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 7 networkType: OVNKubernetes serviceNetwork: - 172.30.0.0/16 platform: none: {} bootstrapInPlace: installationDisk: /dev/disk/by-id/<disk_id> 8 pullSecret: '<pull_secret>' 9 sshKey: | <ssh_key> 10 1 Add the cluster domain name. 2 Set the architecture to arm64 for 64-bit ARM or amd64 for 64-bit x86 architectures. This needs to be set explicitly to the target host architecture. 3 Set the compute replicas to 0 . This makes the control plane node schedulable. 4 Set the controlPlane replicas to 1 . In conjunction with the compute setting, this setting ensures the cluster runs on a single node. 5 Set the metadata name to the cluster name. 6 Set the networking details. OVN-Kubernetes is the only allowed network plugin type for single-node clusters. 7 Set the cidr value to match the subnet of the single-node OpenShift cluster. 8 Set the path to the installation disk drive, for example, /dev/disk/by-id/wwn-0x64cd98f04fde100024684cf3034da5c2 . 9 Copy the pull secret from the Red Hat OpenShift Cluster Manager and add the contents to this configuration setting. 10 Add the public SSH key from the administration host so that you can log in to the cluster after installation. 
Generate OpenShift Container Platform assets by running the following commands: USD mkdir ocp USD cp install-config.yaml ocp USD ./openshift-install --dir=ocp create single-node-ignition-config Embed the ignition data into the RHCOS ISO by running the following commands: USD alias coreos-installer='podman run --privileged --pull always --rm \ -v /dev:/dev -v /run/udev:/run/udev -v USDPWD:/data \ -w /data quay.io/coreos/coreos-installer:release' USD coreos-installer iso ignition embed -fi ocp/bootstrap-in-place-for-live-iso.ign rhcos-live.iso Additional resources See Enabling cluster capabilities for more information about enabling cluster capabilities that were disabled prior to installation. See Optional cluster capabilities in OpenShift Container Platform OpenShift Container Platform 4.12 for more information about the features provided by each capability. 2.2.2. Monitoring the cluster installation using openshift-install Use openshift-install to monitor the progress of the single-node cluster installation. Prerequisites Ensure that the boot drive order in the server BIOS settings defaults to booting the server from the target installation disk. Procedure Attach the discovery ISO image to the target host. Boot the server from the discovery ISO image. The discovery ISO image writes the system configuration to the target installation disk and automatically triggers a server restart. On the administration host, monitor the installation by running the following command: USD ./openshift-install --dir=ocp wait-for install-complete The server restarts several times while deploying the control plane. Verification After the installation is complete, check the environment by running the following command: USD export KUBECONFIG=ocp/auth/kubeconfig USD oc get nodes Example output NAME STATUS ROLES AGE VERSION control-plane.example.com Ready master,worker 10m v1.25.0 Additional resources Creating a bootable ISO image on a USB drive Booting from an HTTP-hosted ISO image using the Redfish API Adding worker nodes to single-node OpenShift clusters 2.3. Creating a bootable ISO image on a USB drive You can install software using a bootable USB drive that contains an ISO image. Booting the server with the USB drive prepares the server for the software installation. Procedure On the administration host, insert a USB drive into a USB port. Create a bootable USB drive, for example: # dd if=<path_to_iso> of=<path_to_usb> status=progress where: <path_to_iso> is the relative path to the downloaded ISO file, for example, rhcos-live.iso . <path_to_usb> is the location of the connected USB drive, for example, /dev/sdb . After the ISO is copied to the USB drive, you can use the USB drive to install software on the server. 2.4. Booting from an HTTP-hosted ISO image using the Redfish API You can provision hosts in your network using ISOs that you install using the Redfish Baseboard Management Controller (BMC) API. Prerequisites Download the installation Red Hat Enterprise Linux CoreOS (RHCOS) ISO. Procedure Copy the ISO file to an HTTP server accessible in your network. 
Boot the host from the hosted ISO file, for example: Call the redfish API to set the hosted ISO as the VirtualMedia boot media by running the following command: USD curl -k -u <bmc_username>:<bmc_password> -d '{"Image":"<hosted_iso_file>", "Inserted": true}' -H "Content-Type: application/json" -X POST <host_bmc_address>/redfish/v1/Managers/iDRAC.Embedded.1/VirtualMedia/CD/Actions/VirtualMedia.InsertMedia Where: <bmc_username>:<bmc_password> Is the username and password for the target host BMC. <hosted_iso_file> Is the URL for the hosted installation ISO, for example: http://webserver.example.com/rhcos-live-minimal.iso . The ISO must be accessible from the target host machine. <host_bmc_address> Is the BMC IP address of the target host machine. Set the host to boot from the VirtualMedia device by running the following command: USD curl -k -u <bmc_username>:<bmc_password> -X PATCH -H 'Content-Type: application/json' -d '{"Boot": {"BootSourceOverrideTarget": "Cd", "BootSourceOverrideMode": "UEFI", "BootSourceOverrideEnabled": "Once"}}' <host_bmc_address>/redfish/v1/Systems/System.Embedded.1 Reboot the host: USD curl -k -u <bmc_username>:<bmc_password> -d '{"ResetType": "ForceRestart"}' -H 'Content-type: application/json' -X POST <host_bmc_address>/redfish/v1/Systems/System.Embedded.1/Actions/ComputerSystem.Reset Optional: If the host is powered off, you can boot it using the {"ResetType": "On"} switch. Run the following command: USD curl -k -u <bmc_username>:<bmc_password> -d '{"ResetType": "On"}' -H 'Content-type: application/json' -X POST <host_bmc_address>/redfish/v1/Systems/System.Embedded.1/Actions/ComputerSystem.Reset 2.5. Creating a custom live RHCOS ISO for remote server access In some cases, you cannot attach an external disk drive to a server, however, you need to access the server remotely to provision a node. It is recommended to enable SSH access to the server. You can create a live RHCOS ISO with SSHd enabled and with predefined credentials so that you can access the server after it boots. Prerequisites You installed the butane utility. Procedure Download the coreos-installer binary from the coreos-installer image mirror page. Download the latest live RHCOS ISO from mirror.openshift.com . Create the embedded.yaml file that the butane utility uses to create the Ignition file: variant: openshift version: 4.12.0 metadata: name: sshd labels: machineconfiguration.openshift.io/role: worker passwd: users: - name: core 1 ssh_authorized_keys: - '<ssh_key>' 1 The core user has sudo privileges. 
Run the butane utility to create the Ignition file using the following command: USD butane -pr embedded.yaml -o embedded.ign After the Ignition file is created, you can include the configuration in a new live RHCOS ISO, which is named rhcos-sshd-4.12.0-x86_64-live.x86_64.iso , with the coreos-installer utility: USD coreos-installer iso ignition embed -i embedded.ign rhcos-4.12.0-x86_64-live.x86_64.iso -o rhcos-sshd-4.12.0-x86_64-live.x86_64.iso Verification Check that the custom live ISO can be used to boot the server by running the following command: # coreos-installer iso ignition show rhcos-sshd-4.12.0-x86_64-live.x86_64.iso Example output { "ignition": { "version": "3.2.0" }, "passwd": { "users": [ { "name": "core", "sshAuthorizedKeys": [ "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCZnG8AIzlDAhpyENpK2qKiTT8EbRWOrz7NXjRzopbPu215mocaJgjjwJjh1cYhgPhpAp6M/ttTk7I4OI7g4588Apx4bwJep6oWTU35LkY8ZxkGVPAJL8kVlTdKQviDv3XX12l4QfnDom4tm4gVbRH0gNT1wzhnLP+LKYm2Ohr9D7p9NBnAdro6k++XWgkDeijLRUTwdEyWunIdW1f8G0Mg8Y1Xzr13BUo3+8aey7HLKJMDtobkz/C8ESYA/f7HJc5FxF0XbapWWovSSDJrr9OmlL9f4TfE+cQk3s+eoKiz2bgNPRgEEwihVbGsCN4grA+RzLCAOpec+2dTJrQvFqsD [email protected]" ] } ] } }
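The steps in "Generating the installation ISO with coreos-installer" can be collected into a single script. The sketch below assumes install-config.yaml already exists in the working directory and that podman is available; the version and architecture values are examples.

#!/usr/bin/env bash
# Sketch: automate single-node ISO preparation (download tools, fetch the
# RHCOS live ISO, generate the Ignition config, embed it into the ISO).
set -euo pipefail

export OCP_VERSION="latest-4.12"   # example version
export ARCH="x86_64"               # example architecture

# Fetch the client and installer binaries.
curl -k "https://mirror.openshift.com/pub/openshift-v4/clients/ocp/${OCP_VERSION}/openshift-client-linux.tar.gz" -o oc.tar.gz
tar zxf oc.tar.gz && chmod +x oc
curl -k "https://mirror.openshift.com/pub/openshift-v4/clients/ocp/${OCP_VERSION}/openshift-install-linux.tar.gz" -o openshift-install-linux.tar.gz
tar zxvf openshift-install-linux.tar.gz && chmod +x openshift-install

# Download the matching RHCOS live ISO.
ISO_URL=$(./openshift-install coreos print-stream-json | grep location | grep "$ARCH" | grep iso | cut -d\" -f4)
curl -L "$ISO_URL" -o rhcos-live.iso

# Generate the single-node Ignition config from install-config.yaml.
mkdir -p ocp
cp install-config.yaml ocp/
./openshift-install --dir=ocp create single-node-ignition-config

# Embed the Ignition config into the live ISO using the containerized coreos-installer.
coreos_installer() {
  podman run --privileged --pull always --rm \
    -v /dev:/dev -v /run/udev:/run/udev -v "$PWD:/data" \
    -w /data quay.io/coreos/coreos-installer:release "$@"
}
coreos_installer iso ignition embed -fi ocp/bootstrap-in-place-for-live-iso.ign rhcos-live.iso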
[ "example.com", "<cluster_name>.example.com", "export OCP_VERSION=<ocp_version> 1", "export ARCH=<architecture> 1", "curl -k https://mirror.openshift.com/pub/openshift-v4/clients/ocp/USDOCP_VERSION/openshift-client-linux.tar.gz -o oc.tar.gz", "tar zxf oc.tar.gz", "chmod +x oc", "curl -k https://mirror.openshift.com/pub/openshift-v4/clients/ocp/USDOCP_VERSION/openshift-install-linux.tar.gz -o openshift-install-linux.tar.gz", "tar zxvf openshift-install-linux.tar.gz", "chmod +x openshift-install", "export ISO_URL=USD(./openshift-install coreos print-stream-json | grep location | grep USDARCH | grep iso | cut -d\\\" -f4)", "curl -L USDISO_URL -o rhcos-live.iso", "apiVersion: v1 baseDomain: <domain> 1 compute: - architecture: amd64 2 name: worker replicas: 0 3 controlPlane: architecture: amd64 name: master replicas: 1 4 metadata: name: <name> 5 networking: 6 clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 7 networkType: OVNKubernetes serviceNetwork: - 172.30.0.0/16 platform: none: {} bootstrapInPlace: installationDisk: /dev/disk/by-id/<disk_id> 8 pullSecret: '<pull_secret>' 9 sshKey: | <ssh_key> 10", "mkdir ocp", "cp install-config.yaml ocp", "./openshift-install --dir=ocp create single-node-ignition-config", "alias coreos-installer='podman run --privileged --pull always --rm -v /dev:/dev -v /run/udev:/run/udev -v USDPWD:/data -w /data quay.io/coreos/coreos-installer:release'", "coreos-installer iso ignition embed -fi ocp/bootstrap-in-place-for-live-iso.ign rhcos-live.iso", "./openshift-install --dir=ocp wait-for install-complete", "export KUBECONFIG=ocp/auth/kubeconfig", "oc get nodes", "NAME STATUS ROLES AGE VERSION control-plane.example.com Ready master,worker 10m v1.25.0", "dd if=<path_to_iso> of=<path_to_usb> status=progress", "curl -k -u <bmc_username>:<bmc_password> -d '{\"Image\":\"<hosted_iso_file>\", \"Inserted\": true}' -H \"Content-Type: application/json\" -X POST <host_bmc_address>/redfish/v1/Managers/iDRAC.Embedded.1/VirtualMedia/CD/Actions/VirtualMedia.InsertMedia", "curl -k -u <bmc_username>:<bmc_password> -X PATCH -H 'Content-Type: application/json' -d '{\"Boot\": {\"BootSourceOverrideTarget\": \"Cd\", \"BootSourceOverrideMode\": \"UEFI\", \"BootSourceOverrideEnabled\": \"Once\"}}' <host_bmc_address>/redfish/v1/Systems/System.Embedded.1", "curl -k -u <bmc_username>:<bmc_password> -d '{\"ResetType\": \"ForceRestart\"}' -H 'Content-type: application/json' -X POST <host_bmc_address>/redfish/v1/Systems/System.Embedded.1/Actions/ComputerSystem.Reset", "curl -k -u <bmc_username>:<bmc_password> -d '{\"ResetType\": \"On\"}' -H 'Content-type: application/json' -X POST <host_bmc_address>/redfish/v1/Systems/System.Embedded.1/Actions/ComputerSystem.Reset", "variant: openshift version: 4.12.0 metadata: name: sshd labels: machineconfiguration.openshift.io/role: worker passwd: users: - name: core 1 ssh_authorized_keys: - '<ssh_key>'", "butane -pr embedded.yaml -o embedded.ign", "coreos-installer iso ignition embed -i embedded.ign rhcos-4.12.0-x86_64-live.x86_64.iso -o rhcos-sshd-4.12.0-x86_64-live.x86_64.iso", "coreos-installer iso ignition show rhcos-sshd-4.12.0-x86_64-live.x86_64.iso", "{ \"ignition\": { \"version\": \"3.2.0\" }, \"passwd\": { \"users\": [ { \"name\": \"core\", \"sshAuthorizedKeys\": [ \"ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABAQCZnG8AIzlDAhpyENpK2qKiTT8EbRWOrz7NXjRzopbPu215mocaJgjjwJjh1cYhgPhpAp6M/ttTk7I4OI7g4588Apx4bwJep6oWTU35LkY8ZxkGVPAJL8kVlTdKQviDv3XX12l4QfnDom4tm4gVbRH0gNT1wzhnLP+LKYm2Ohr9D7p9NBnAdro6k++XWgkDeijLRUTwdEyWunIdW1f8G0Mg8Y1Xzr13BUo3+8aey7HLKJMDtobkz/C8ESYA/f7HJc5FxF0XbapWWovSSDJrr9OmlL9f4TfE+cQk3s+eoKiz2bgNPRgEEwihVbGsCN4grA+RzLCAOpec+2dTJrQvFqsD [email protected]\" ] } ] } }" ]
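For the Redfish-based boot described in "Booting from an HTTP-hosted ISO image using the Redfish API", the three curl calls can be wrapped as follows. The BMC address, credentials, ISO URL, and the iDRAC-style resource paths are example values; other BMC implementations may expose different manager and system paths.

#!/usr/bin/env bash
# Sketch: attach an HTTP-hosted ISO as virtual media, set a one-time boot
# override, and restart the host through the Redfish API.
set -euo pipefail

BMC_USER="admin"                                    # example credentials
BMC_PASS="password"
BMC="https://192.0.2.10"                            # <host_bmc_address>
ISO="http://webserver.example.com/rhcos-live-minimal.iso"

# Attach the hosted ISO as virtual media.
curl -k -u "${BMC_USER}:${BMC_PASS}" \
  -H "Content-Type: application/json" \
  -d "{\"Image\":\"${ISO}\", \"Inserted\": true}" \
  -X POST "${BMC}/redfish/v1/Managers/iDRAC.Embedded.1/VirtualMedia/CD/Actions/VirtualMedia.InsertMedia"

# Boot once from the virtual CD device in UEFI mode.
curl -k -u "${BMC_USER}:${BMC_PASS}" \
  -H "Content-Type: application/json" \
  -d '{"Boot": {"BootSourceOverrideTarget": "Cd", "BootSourceOverrideMode": "UEFI", "BootSourceOverrideEnabled": "Once"}}' \
  -X PATCH "${BMC}/redfish/v1/Systems/System.Embedded.1"

# Restart the host so the boot override takes effect.
curl -k -u "${BMC_USER}:${BMC_PASS}" \
  -H "Content-Type: application/json" \
  -d '{"ResetType": "ForceRestart"}' \
  -X POST "${BMC}/redfish/v1/Systems/System.Embedded.1/Actions/ComputerSystem.Reset"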
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/installing_on_a_single_node/install-sno-installing-sno
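Related to "Creating a custom live RHCOS ISO for remote server access" in the single-node installation chapter above, the following minimal sketch renders the Butane config, embeds it, and verifies the result. It assumes butane and coreos-installer are installed and that the base live ISO and embedded.yaml are already present in the working directory.

#!/usr/bin/env bash
# Sketch: build an SSH-enabled live RHCOS ISO from a Butane config.
set -euo pipefail

BASE_ISO="rhcos-4.12.0-x86_64-live.x86_64.iso"
SSHD_ISO="rhcos-sshd-4.12.0-x86_64-live.x86_64.iso"

# Render the Butane config into an Ignition file.
butane -pr embedded.yaml -o embedded.ign

# Embed the Ignition file into a new live ISO.
coreos-installer iso ignition embed -i embedded.ign "$BASE_ISO" -o "$SSHD_ISO"

# Inspect the embedded Ignition config to confirm the SSH key is present.
coreos-installer iso ignition show "$SSHD_ISO"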
Chapter 4. Viewing application composition using the Topology view
Chapter 4. Viewing application composition using the Topology view The Topology view in the Developer perspective of the web console provides a visual representation of all the applications within a project, their build status, and the components and services associated with them. 4.1. Prerequisites To view your applications in the Topology view and interact with them, ensure that: You have logged in to the web console . You have the appropriate roles and permissions in a project to create applications and other workloads in OpenShift Container Platform. You have created and deployed an application on OpenShift Container Platform using the Developer perspective . You are in the Developer perspective . 4.2. Viewing the topology of your application You can navigate to the Topology view using the left navigation panel in the Developer perspective. After you deploy an application, you are directed automatically to the Graph view where you can see the status of the application pods, quickly access the application on a public URL, access the source code to modify it, and see the status of your last build. You can zoom in and out to see more details for a particular application. The Topology view provides you the option to monitor your applications using the List view. Use the List view icon ( ) to see a list of all your applications and use the Graph view icon ( ) to switch back to the graph view. You can customize the views as required using the following: Use the Find by name field to find the required components. Search results may appear outside of the visible area; click Fit to Screen from the lower-left toolbar to resize the Topology view to show all components. Use the Display Options drop-down list to configure the Topology view of the various application groupings. The options are available depending on the types of components deployed in the project: Mode ( Connectivity or Consumption ) Connectivity: Select to show all the connections between the different nodes in the topology. Consumption: Select to show the resource consumption for all nodes in the topology. Expand group Virtual Machines: Toggle to show or hide the virtual machines. Application Groupings: Clear to condense the application groups into cards with an overview of an application group and alerts associated with it. Helm Releases: Clear to condense the components deployed as Helm Release into cards with an overview of a given release. Knative Services: Clear to condense the Knative Service components into cards with an overview of a given component. Operator Groupings: Clear to condense the components deployed with an Operator into cards with an overview of the given group. Show elements based on Pod Count or Labels Pod Count: Select to show the number of pods of a component in the component icon. Labels: Toggle to show or hide the component labels. The Topology view also provides you the Export application option to download your application in the ZIP file format. You can then import the downloaded application to another project or cluster. For more details, see Exporting an application to another project or cluster in the Additional resources section. 4.3. Interacting with applications and components The Topology view in the Developer perspective of the web console provides the following options to interact with applications and components: Click Open URL ( ) to see your application exposed by the route on a public URL. Click Edit Source code to access your source code and modify it. 
Note This feature is available only when you create applications using the From Git , From Catalog , and the From Dockerfile options. Hover your cursor over the lower left icon on the pod to see the name of the latest build and its status. The status of the application build is indicated as New ( ), Pending ( ), Running ( ), Completed ( ), Failed ( ), and Canceled ( ). The status or phase of the pod is indicated by different colors and tooltips as: Running ( ): The pod is bound to a node and all of the containers are created. At least one container is still running or is in the process of starting or restarting. Not Ready ( ): For pods that run multiple containers, not all of the containers are ready. Warning ( ): Containers in pods are being terminated, however termination did not succeed. Some containers might be in other states. Failed ( ): All containers in the pod have terminated, but at least one container terminated in failure. That is, the container either exited with non-zero status or was terminated by the system. Pending ( ): The pod is accepted by the Kubernetes cluster, but one or more of the containers has not been set up and made ready to run. This includes the time a pod spends waiting to be scheduled as well as the time spent downloading container images over the network. Succeeded ( ): All containers in the pod terminated successfully and will not be restarted. Terminating ( ): When a pod is being deleted, it is shown as Terminating by some kubectl commands. Terminating status is not one of the pod phases. A pod is granted a graceful termination period, which defaults to 30 seconds. Unknown ( ): The state of the pod could not be obtained. This phase typically occurs due to an error in communicating with the node where the pod should be running. After you create an application and an image is deployed, the status is shown as Pending . After the application is built, it is displayed as Running . Figure 4.1. Application topology The application resource name is appended with indicators for the different types of resource objects as follows: CJ : CronJob D : Deployment DC : DeploymentConfig DS : DaemonSet J : Job P : Pod SS : StatefulSet (Knative): A serverless application Note Serverless applications take some time to load and display on the Graph view . When you deploy a serverless application, it first creates a service resource and then a revision. After that, it is deployed and displayed on the Graph view . If it is the only workload, you might be redirected to the Add page. After the revision is deployed, the serverless application is displayed on the Graph view . 4.4. Scaling application pods and checking builds and routes The Topology view provides the details of the deployed components in the Overview panel. You can use the Overview and Resources tabs to scale the application pods, check build status, services, and routes as follows: Click on the component node to see the Overview panel to the right. Use the Overview tab to: Scale your pods using the up and down arrows to increase or decrease the number of instances of the application manually. For serverless applications, the pods are automatically scaled down to zero when idle and scaled up depending on the channel traffic. Check the Labels , Annotations , and Status of the application. Click the Resources tab to: See the list of all the pods, view their status, access logs, and click on the pod to see the pod details. See the builds, their status, access logs, and start a new build if needed.
See the services and routes used by the component. For serverless applications, the Resources tab provides information on the revision, routes, and the configurations used for that component. 4.5. Adding components to an existing project Procedure Click Add to Project ( ) to left navigation pane or press Ctrl + Space Search for the component and select Create or press Enter to add the component to the project and see it in the topology Graph view . Figure 4.2. Adding component via quick search Alternatively, you can also use the Import from Git , Container Image , Database , From Catalog , Operator Backed , Helm Charts , Samples , or Upload JAR file options in the context menu by right-clicking in the topology Graph view to add a component to your project. Figure 4.3. Context menu to add services 4.6. Grouping multiple components within an application You can use the +Add view to add multiple components or services to your project and use the topology Graph view to group applications and resources within an application group. Prerequisites You have created and deployed minimum two or more components on OpenShift Container Platform using the Developer perspective. Procedure To add a service to the existing application group, press Shift + drag it to the existing application group. Dragging a component and adding it to an application group adds the required labels to the component. Figure 4.4. Application grouping Alternatively, you can also add the component to an application as follows: Click the service pod to see the Overview panel to the right. Click the Actions drop-down menu and select Edit Application Grouping . In the Edit Application Grouping dialog box, click the Application drop-down list, and select an appropriate application group. Click Save to add the service to the application group. You can remove a component from an application group by selecting the component and using Shift + drag to drag it out of the application group. 4.7. Adding services to your application To add a service to your application use the +Add actions using the context menu in the topology Graph view . Note In addition to the context menu, you can add services by using the sidebar or hovering and dragging the dangling arrow from the application group. Procedure Right-click an application group in the topology Graph view to display the context menu. Figure 4.5. Add resource context menu Use Add to Application to select a method for adding a service to the application group, such as From Git , Container Image , From Dockerfile , From Devfile , Upload JAR file , Event Source , Channel , or Broker . Complete the form for the method you choose and click Create . For example, to add a service based on the source code in your Git repository, choose the From Git method, fill in the Import from Git form, and click Create . 4.8. Removing services from your application In the topology Graph view remove a service from your application using the context menu. Procedure Right-click on a service in an application group in the topology Graph view to display the context menu. Select Delete Deployment to delete the service. Figure 4.6. Deleting deployment option 4.9. Labels and annotations used for the Topology view The Topology view uses the following labels and annotations: Icon displayed in the node Icons in the node are defined by looking for matching icons using the app.openshift.io/runtime label, followed by the app.kubernetes.io/name label. This matching is done using a predefined set of icons. 
Link to the source code editor or the source The app.openshift.io/vcs-uri annotation is used to create links to the source code editor. Node Connector The app.openshift.io/connects-to annotation is used to connect the nodes. App grouping The app.kubernetes.io/part-of=<appname> label is used to group the applications, services, and components. For detailed information on the labels and annotations OpenShift Container Platform applications must use, see Guidelines for labels and annotations for OpenShift applications . 4.10. Additional resources See Importing a codebase from Git to create an application for more information on creating an application from Git. See Connecting an application to a service using the Developer perspective . See Exporting applications
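To illustrate the labels and annotations listed above, the following sketch applies them to an existing deployment from the command line. The deployment name, application name, Git URL, and target workload are example values, and the exact value format accepted by app.openshift.io/connects-to can vary between versions.

#!/usr/bin/env bash
# Sketch: add Topology view metadata to an existing deployment so the
# web console shows a runtime icon, groups it into an application, links
# to its source repository, and draws a connector to another workload.
set -euo pipefail

NS="my-project"   # example namespace

# Runtime icon, workload name, and application grouping.
oc label deployment nodejs-frontend -n "$NS" --overwrite \
  app.kubernetes.io/name=nodejs-frontend \
  app.openshift.io/runtime=nodejs \
  app.kubernetes.io/part-of=shop

# Source code link and a connector arrow to another component.
oc annotate deployment nodejs-frontend -n "$NS" --overwrite \
  app.openshift.io/vcs-uri='https://github.com/example/frontend.git' \
  app.openshift.io/connects-to='orders-backend'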
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html/building_applications/odc-viewing-application-composition-using-topology-view
Chapter 2. Known and fixed issues
Chapter 2. Known and fixed issues Learn about known issues for Data Grid Operator and find out which issues are fixed. 2.1. Known issues with Data Grid Operator deployments This release does not include any known issues that affect Data Grid clusters that you manage with Data Grid Operator. For complete details about Data Grid, see the Data Grid 8.4 release notes . 2.2. Fixed in Data Grid Operator 8.4.14 Data Grid Operator 8.4.14 includes the following notable fixes: JDG-6904 Unreadable log timestamps JDG-6451 Updates to ADDITIONAL_VARS are not applied to existing clusters JDG-6760 Backup CR does not work when xsite replication is enabled JDG-6860 [Operator] Indexed HotRod Rolling Upgrades fail when cache name is sorted before ___protobuf_metadata cache 2.3. Fixed in Data Grid Operator 8.4.12 Data Grid Operator 8.4.12 includes the following notable fixes: JDG-6544 Changing the service type from Cache to DataGrid does not result in automatic reconciliation JDG-6574 Setting up a custom NodePort value for cross-site replication has no effect JDG-6573 Deadlock occurs when creating caches with a zero-capacity node and a single stateful node JDG-6425 Updates to podTargetLabel are not reflected in pods 2.4. Fixed in Data Grid Operator 8.4.11 Data Grid Operator 8.4.11 includes the following notable fixes: JDG-6584 Infinispan CR should fail if Encryption keystore/certs missing in Secret 2.5. Fixed in Data Grid Operator 8.4.9 Data Grid Operator 8.4.9 includes the following notable fixes: JDG-6549 Data Grid Server HEAD requests failing with End Of File (EOF) 2.6. Fixed in Data Grid Operator 8.4.8 Data Grid Operator 8.4.8 includes the following notable fixes: JDG-6412 Data Grid Operator crashes when Cache CR is missing the template definition JDG-6373 FileAlreadyExistsException during dependency extraction after container restart JDG-6304 Data Grid webhook allows incompatible TLS configuration 2.7. Fixed in Data Grid Operator 8.4.6 Data Grid Operator 8.4.6 includes the following notable fixes: JDG-6128 Data Grid Operator logs error message with stacktrace multiple times after cluster restart JDG-6127 Data Grid Operator repeatedly logs error messages indicating missing secrets while waiting for OpenShift to create a secret JDG-6207 spec.Image field is overwritten by Operand image for CVE releases JDG-6204 status.Operand.Image field not updated when defining spec.Image JDG-6107 Creating caches with authorization fails and produces output issues JDG-6055 Changing memory or cpu values for spec.configListener has no effect on the ConfigListener deployment JDG-5835 Scaling cluster down and up with purge-on-startup=false with one or more file-stores might result in stale entries 2.8. Fixed in Data Grid Operator 8.4.5 Data Grid Operator 8.4.5 includes the following notable fixes: JDG-6063 ConfigListener ignores the Cache CR metadata name when discovering existing Cache CRs JDG-5986 DNS discovery fails when pods are not marked as ready JDG-5935 Outdated names for metering labels JDG-5936 and JDG-5931 Metering labels are not updated after Data Grid Operator upgrade JDG-5939 Removal of the GracefulShutdownTask from Data Grid Operator 2.9. 
Fixed in Data Grid Operator 8.4.2 Data Grid Operator 8.4.2 includes the following notable fixes: JDG-5623 Gossip router fails to start with TLS configured on FIPS enabled OpenShift JDG-5681 Incorrect Cache CR status when immutable fields are modified JDG-5577 Cache CR created by listener is not removed upon disabling the listener JDG-5756 Users cannot configure LDAP because the authentication mechanisms of the default endpoint cannot be modified JDG-5789 Infinispan.status.podStatus field shows incorrect pod names for existing deployment topology JDG-5791 Data Grid Operator modifies the content of spec.template in the Cache CR after its creation JDG-5794 ConfigListener log level cannot be modified JDG-5818 OpenShift rolling upgrades with RollingMigration strategy result in Infinispan CR status as Pending JDG-5820 Data Grid Operator fails when upgrading Data Grid Server from 8.3.1-1 to 8.4.0-x using the Hot Rod rolling migration strategy JDG-5836 ConfigListener stale CR check only compares resource names 2.10. Fixed in Data Grid Operator 8.4.0 Data Grid Operator 8.4.0 includes the following notable fixes: JDG-5680 Default Anti-affinity strategy configuration with the Data Grid Operator is not valid JDG-5650 configListener breaks with non-yaml Cache CRs template and large strings JDG-5461 Server image does not enable Garbage collection (GC) logging by default JDG-5459 Zero controller execute can hang indefinitely if Zero Pod is not immediately ready
null
https://docs.redhat.com/en/documentation/red_hat_data_grid/8.4/html/data_grid_operator_8.4_release_notes/rhdg-operator-issues
Chapter 12. Diverting messages and splitting message flows
Chapter 12. Diverting messages and splitting message flows In AMQ Broker, you can configure objects called diverts that enable you to transparently divert messages from one address to another address, without changing any client application logic. You can also configure a divert to forward a copy of a message to a specified forwarding address, effectively splitting the message flow. 12.1. How message diverts work Diverts enable you to transparently divert messages routed to one address to some other address, without changing any client application logic. Think of the set of diverts on a broker server as a type of routing table for messages. A divert can be exclusive , meaning that a message is diverted to a specified forwarding address without going to its original address. A divert can also be non-exclusive , meaning that a message continues to go to its original address, while the broker sends a copy of the message to a specified forwarding address. Therefore, you can use non-exclusive diverts for splitting message flows. For example, you might split a message flow if you want to separately monitor every order sent to an order queue. When an address has both exclusive and non-exclusive diverts configured, the broker processes the exclusive diverts first. If a particular message has already been diverted by an exclusive divert, the broker does not process any non-exclusive diverts for that message. In this case, the message never goes to the original address. When a broker diverts a message, the broker assigns a new message ID and sets the message address to the new forwarding address. You can retrieve the original message ID and address values via the _AMQ_ORIG_ADDRESS (string type) and _AMQ_ORIG_MESSAGE_ID (long type) message properties. If you are using the Core API, use the Message.HDR_ORIGINAL_ADDRESS and Message.HDR_ORIG_MESSAGE_ID properties. Note You can divert a message only to an address on the same broker server. If you want to divert to an address on a different server, a common solution is to first divert the message to a local store-and-forward queue. Then, set up a bridge that consumes from that queue and forwards messages to an address on a different broker. Combining diverts with bridges enables you to create a distributed network of routing connections between geographically distributed broker servers. In this way, you can create a global messaging mesh. 12.2. Configuring message diverts To configure a divert in your broker instance, add a divert element within the core element of your broker.xml configuration file. <core> ... <divert name= > <address> </address> <forwarding-address> </forwarding-address> <filter string= > <routing-type> </routing-type> <exclusive> </exclusive> </divert> ... </core> divert Named instance of a divert. You can add multiple divert elements to your broker.xml configuration file, as long as each divert has a unique name. address Address from which to divert messages forwarding-address Address to which to forward messages filter Optional message filter. If you configure a filter, only messages that match the filter string are diverted. If you do not specify a filter, all messages are considered a match by the divert. routing-type Routing type of the diverted message. 
You can configure the divert to: Apply the anycast or multicast routing type to a message Strip (that is, remove) the existing routing type Pass through (that is, preserve) the existing routing type Control of the routing type is useful in situations where the message has its routing type already set, but you want to divert the message to an address that uses a different routing type. For example, the broker cannot route a message with the anycast routing type to a queue that uses multicast unless you set the routing-type parameter of the divert to MULTICAST . Valid values for the routing-type parameter of a divert are ANYCAST , MULTICAST , PASS , and STRIP . The default value is STRIP . exclusive Specify whether the divert is exclusive (set the property to true ) or non- exclusive (set the property to false ). The following subsections show configuration examples for exclusive and non-exclusive diverts. 12.2.1. Exclusive divert example Shown below is an example configuration for an exclusive divert. An exclusive divert diverts all matching messages from the originally-configured address to a new address. Matching messages do not get routed to the original address. <divert name="prices-divert"> <address>priceUpdates</address> <forwarding-address>priceForwarding</forwarding-address> <filter string="office='New York'"/> <exclusive>true</exclusive> </divert> In the preceding example, you define a divert called prices-divert that diverts any messages sent to the address priceUpdates to another local address, priceForwarding . You also specify a message filter string. Only messages with the message property office and the value New York are diverted. All other messages are routed to their original address. Finally, you specify that the divert is exclusive. 12.2.2. Non-exclusive divert example Shown below is an example configuration for a non-exclusive divert. In a non-exclusive divert, a message continues to go to its original address, while the broker also sends a copy of the message to a specified forwarding address. Therefore, a non-exclusive divert is a way to split a message flow. <divert name="order-divert"> <address>orders</address> <forwarding-address>spyTopic</forwarding-address> <exclusive>false</exclusive> </divert> In the preceding example, you define a divert called order-divert that takes a copy of every message sent to the address orders and sends it to a local address called spyTopic . You also specify that the divert is non-exclusive. Additional resources For a detailed example that uses both exclusive and non-exclusive diverts, and a bridge to forward messages to another broker, see Divert Example (external).
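One way to exercise the non-exclusive order-divert example is sketched below using the Artemis CLI shipped with the broker. The broker URL, credentials, and the exact CLI options are assumptions to adapt to your instance; with a non-exclusive divert, both the original address and the forwarding address should receive the test messages.

#!/usr/bin/env bash
# Sketch: send test messages to the "orders" address and check that copies
# also reached "spyTopic". Run from the broker instance's bin directory.
set -euo pipefail

URL="tcp://localhost:61616"   # example connection URL
USER="admin"                  # example credentials
PASS="admin"

# Send a few test messages to the original address.
./artemis producer --url "$URL" --user "$USER" --password "$PASS" \
  --destination orders --message-count 5

# Inspect queue counters; the queues bound to both "orders" and "spyTopic"
# should show the new messages when the divert is non-exclusive.
./artemis queue stat --url "$URL" --user "$USER" --password "$PASS"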
[ "<core> <divert name= > <address> </address> <forwarding-address> </forwarding-address> <filter string= > <routing-type> </routing-type> <exclusive> </exclusive> </divert> </core>", "<divert name=\"prices-divert\"> <address>priceUpdates</address> <forwarding-address>priceForwarding</forwarding-address> <filter string=\"office='New York'\"/> <exclusive>true</exclusive> </divert>", "<divert name=\"order-divert\"> <address>orders</address> <forwarding-address>spyTopic</forwarding-address> <exclusive>false</exclusive> </divert>" ]
https://docs.redhat.com/en/documentation/red_hat_amq_broker/7.10/html/configuring_amq_broker/diverting-messages-configuring
Chapter 22. Red Hat Enterprise Linux Atomic Host 7.5.1
Chapter 22. Red Hat Enterprise Linux Atomic Host 7.5.1 22.1. Atomic Host OStree update : New Tree Version: 7.5.1 (hash: c0211e0b703930dd0f0df8b9f5e731901fce8e15e00b3bc76d3cf00df44eb6e8) Changes since Tree Version 7.5.0 (hash: 5df677dcfef08a87dd0ace55790e184a35716cf11260239216bfeba2eb7c60b0) Updated packages : cockpit-ostree-165-3.el7 22.2. Extras Updated packages : docker-1.13.1-63.git94f4240.el7 buildah-0.16.0-2.git6f7d05b.el7 skopeo-0.1.29-3.dev.git7add6fc.el7 atomic-1.22.1-3.git2fd0860.el7 docker-distribution-2.6.2-2.git48294d9.el7 cockpit-165-3.el7 etcd-3.2.18-1.el7 runc-1.0.0-27.rc5.dev.git4bb1fe4.el7 The asterisk (*) marks packages which are available for Red Hat Enterprise Linux only. New packages : podman-0.4.1-4.gitb51d327.el7 22.2.1. Container Images Updated : Red Hat Enterprise Linux 7 Init Container Image (rhel7/rhel7-init) Red Hat Enterprise Linux 7.5 Container Image (rhel7.5, rhel7, rhel7/rhel, rhel) Red Hat Enterprise Linux Atomic Identity Management Server Container Image (rhel7/ipa-server) Red Hat Enterprise Linux Atomic Image (rhel-atomic, rhel7-atomic, rhel7/rhel-atomic) Red Hat Enterprise Linux Atomic Net-SNMP Container Image (rhel7/net-snmp) Red Hat Enterprise Linux Atomic OpenSCAP Container Image (rhel7/openscap) Red Hat Enterprise Linux Atomic SSSD Container Image (rhel7/sssd) Red Hat Enterprise Linux Atomic Support Tools Container Image (rhel7/support-tools) Red Hat Enterprise Linux Atomic Tools Container Image (rhel7/rhel-tools) Red Hat Enterprise Linux Atomic cockpit-ws Container Image (rhel7/cockpit-ws) Red Hat Enterprise Linux Atomic etcd Container Image (rhel7/etcd) Red Hat Enterprise Linux Atomic flannel Container Image (rhel7/flannel) Red Hat Enterprise Linux Atomic open-vm-tools Container Image (rhel7/open-vm-tools) Red Hat Enterprise Linux Atomic rsyslog Container Image (rhel7/rsyslog) Red Hat Enterprise Linux Atomic sadc Container Image (rhel7/sadc)
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_atomic_host/7/html/release_notes/red_hat_enterprise_linux_atomic_host_7_5_1
Chapter 13. FlowCollector API reference
Chapter 13. FlowCollector API reference FlowCollector is the Schema for the network flows collection API, which pilots and configures the underlying deployments. 13.1. FlowCollector API specifications Description FlowCollector is the schema for the network flows collection API, which pilots and configures the underlying deployments. Type object Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and might reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers might infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata object Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object Defines the desired state of the FlowCollector resource. *: the mention of "unsupported" or "deprecated" for a feature throughout this document means that this feature is not officially supported by Red Hat. It might have been, for example, contributed by the community and accepted without a formal agreement for maintenance. The product maintainers might provide some support for these features as a best effort only. 13.1.1. .metadata Description Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata Type object 13.1.2. .spec Description Defines the desired state of the FlowCollector resource. *: the mention of "unsupported" or "deprecated" for a feature throughout this document means that this feature is not officially supported by Red Hat. It might have been, for example, contributed by the community and accepted without a formal agreement for maintenance. The product maintainers might provide some support for these features as a best effort only. Type object Property Type Description agent object Agent configuration for flows extraction. consolePlugin object consolePlugin defines the settings related to the OpenShift Container Platform Console plugin, when available. deploymentModel string deploymentModel defines the desired type of deployment for flow processing. Possible values are: - Direct (default) to make the flow processor listen directly from the agents. - Kafka to make flows sent to a Kafka pipeline before consumption by the processor. Kafka can provide better scalability, resiliency, and high availability (for more details, see https://www.redhat.com/en/topics/integration/what-is-apache-kafka ). exporters array exporters defines additional optional exporters for custom consumption or storage. kafka object Kafka configuration, allowing to use Kafka as a broker as part of the flow collection pipeline. Available when the spec.deploymentModel is Kafka . loki object loki , the flow store, client settings. namespace string Namespace where Network Observability pods are deployed. networkPolicy object networkPolicy defines ingress network policy settings for Network Observability components isolation. 
processor object processor defines the settings of the component that receives the flows from the agent, enriches them, generates metrics, and forwards them to the Loki persistence layer and/or any available exporter. prometheus object prometheus defines Prometheus settings, such as querier configuration used to fetch metrics from the Console plugin. 13.1.3. .spec.agent Description Agent configuration for flows extraction. Type object Property Type Description ebpf object ebpf describes the settings related to the eBPF-based flow reporter when spec.agent.type is set to eBPF . type string type [deprecated *] selects the flows tracing agent. Previously, this field allowed to select between eBPF or IPFIX . Only eBPF is allowed now, so this field is deprecated and is planned for removal in a future version of the API. 13.1.4. .spec.agent.ebpf Description ebpf describes the settings related to the eBPF-based flow reporter when spec.agent.type is set to eBPF . Type object Property Type Description advanced object advanced allows setting some aspects of the internal configuration of the eBPF agent. This section is aimed mostly for debugging and fine-grained performance optimizations, such as GOGC and GOMAXPROCS env vars. Set these values at your own risk. cacheActiveTimeout string cacheActiveTimeout is the max period during which the reporter aggregates flows before sending. Increasing cacheMaxFlows and cacheActiveTimeout can decrease the network traffic overhead and the CPU load, however you can expect higher memory consumption and an increased latency in the flow collection. cacheMaxFlows integer cacheMaxFlows is the max number of flows in an aggregate; when reached, the reporter sends the flows. Increasing cacheMaxFlows and cacheActiveTimeout can decrease the network traffic overhead and the CPU load, however you can expect higher memory consumption and an increased latency in the flow collection. excludeInterfaces array (string) excludeInterfaces contains the interface names that are excluded from flow tracing. An entry enclosed by slashes, such as /br-/ , is matched as a regular expression. Otherwise it is matched as a case-sensitive string. features array (string) List of additional features to enable. They are all disabled by default. Enabling additional features might have performance impacts. Possible values are: - PacketDrop : Enable the packets drop flows logging feature. This feature requires mounting the kernel debug filesystem, so the eBPF agent pods must run as privileged. If the spec.agent.ebpf.privileged parameter is not set, an error is reported. - DNSTracking : Enable the DNS tracking feature. - FlowRTT : Enable flow latency (sRTT) extraction in the eBPF agent from TCP traffic. - NetworkEvents : Enable the network events monitoring feature, such as correlating flows and network policies. This feature requires mounting the kernel debug filesystem, so the eBPF agent pods must run as privileged. It requires using the OVN-Kubernetes network plugin with the Observability feature. IMPORTANT: This feature is available as a Technology Preview. - PacketTranslation : Enable enriching flows with packet translation information, such as Service NAT. - EbpfManager : Unsupported * . Use eBPF Manager to manage Network Observability eBPF programs. Pre-requisite: the eBPF Manager operator (or upstream bpfman operator) must be installed. - UDNMapping : Unsupported *. Enable interfaces mapping to User Defined Networks (UDN). 
This feature requires mounting the kernel debug filesystem, so the eBPF agent pods must run as privileged. It requires using the OVN-Kubernetes network plugin with the Observability feature. flowFilter object flowFilter defines the eBPF agent configuration regarding flow filtering. imagePullPolicy string imagePullPolicy is the Kubernetes pull policy for the image defined above interfaces array (string) interfaces contains the interface names from where flows are collected. If empty, the agent fetches all the interfaces in the system, excepting the ones listed in excludeInterfaces . An entry enclosed by slashes, such as /br-/ , is matched as a regular expression. Otherwise it is matched as a case-sensitive string. kafkaBatchSize integer kafkaBatchSize limits the maximum size of a request in bytes before being sent to a partition. Ignored when not using Kafka. Default: 1MB. logLevel string logLevel defines the log level for the Network Observability eBPF Agent metrics object metrics defines the eBPF agent configuration regarding metrics. privileged boolean Privileged mode for the eBPF Agent container. When ignored or set to false , the operator sets granular capabilities (BPF, PERFMON, NET_ADMIN, SYS_RESOURCE) to the container. If for some reason these capabilities cannot be set, such as if an old kernel version not knowing CAP_BPF is in use, then you can turn on this mode for more global privileges. Some agent features require the privileged mode, such as packet drops tracking (see features ) and SR-IOV support. resources object resources are the compute resources required by this container. For more information, see https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ sampling integer Sampling rate of the flow reporter. 100 means one flow on 100 is sent. 0 or 1 means all flows are sampled. 13.1.5. .spec.agent.ebpf.advanced Description advanced allows setting some aspects of the internal configuration of the eBPF agent. This section is aimed mostly for debugging and fine-grained performance optimizations, such as GOGC and GOMAXPROCS env vars. Set these values at your own risk. Type object Property Type Description env object (string) env allows passing custom environment variables to underlying components. Useful for passing some very concrete performance-tuning options, such as GOGC and GOMAXPROCS , that should not be publicly exposed as part of the FlowCollector descriptor, as they are only useful in edge debug or support scenarios. scheduling object scheduling controls how the pods are scheduled on nodes. 13.1.6. .spec.agent.ebpf.advanced.scheduling Description scheduling controls how the pods are scheduled on nodes. Type object Property Type Description affinity object If specified, the pod's scheduling constraints. For documentation, refer to https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1/#scheduling . nodeSelector object (string) nodeSelector allows scheduling of pods only onto nodes that have each of the specified labels. For documentation, refer to https://kubernetes.io/docs/concepts/configuration/assign-pod-node/ . priorityClassName string If specified, indicates the pod's priority. For documentation, refer to https://kubernetes.io/docs/concepts/scheduling-eviction/pod-priority-preemption/#how-to-use-priority-and-preemption . If not specified, default priority is used, or zero if there is no default. tolerations array tolerations is a list of tolerations that allow the pod to schedule onto nodes with matching taints. 
For documentation, refer to https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1/#scheduling . 13.1.7. .spec.agent.ebpf.advanced.scheduling.affinity Description If specified, the pod's scheduling constraints. For documentation, refer to https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1/#scheduling . Type object 13.1.8. .spec.agent.ebpf.advanced.scheduling.tolerations Description tolerations is a list of tolerations that allow the pod to schedule onto nodes with matching taints. For documentation, refer to https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1/#scheduling . Type array 13.1.9. .spec.agent.ebpf.flowFilter Description flowFilter defines the eBPF agent configuration regarding flow filtering. Type object Property Type Description action string action defines the action to perform on the flows that match the filter. The available options are Accept , which is the default, and Reject . cidr string cidr defines the IP CIDR to filter flows by. Examples: 10.10.10.0/24 or 100:100:100:100::/64 destPorts integer-or-string destPorts optionally defines the destination ports to filter flows by. To filter a single port, set a single port as an integer value. For example, destPorts: 80 . To filter a range of ports, use a "start-end" range in string format. For example, destPorts: "80-100" . To filter two ports, use a "port1,port2" in string format. For example, ports: "80,100" . direction string direction optionally defines a direction to filter flows by. The available options are Ingress and Egress . enable boolean Set enable to true to enable the eBPF flow filtering feature. icmpCode integer icmpCode , for Internet Control Message Protocol (ICMP) traffic, optionally defines the ICMP code to filter flows by. icmpType integer icmpType , for ICMP traffic, optionally defines the ICMP type to filter flows by. peerCIDR string peerCIDR defines the Peer IP CIDR to filter flows by. Examples: 10.10.10.0/24 or 100:100:100:100::/64 peerIP string peerIP optionally defines the remote IP address to filter flows by. Example: 10.10.10.10 . pktDrops boolean pktDrops optionally filters only flows containing packet drops. ports integer-or-string ports optionally defines the ports to filter flows by. It is used both for source and destination ports. To filter a single port, set a single port as an integer value. For example, ports: 80 . To filter a range of ports, use a "start-end" range in string format. For example, ports: "80-100" . To filter two ports, use a "port1,port2" in string format. For example, ports: "80,100" . protocol string protocol optionally defines a protocol to filter flows by. The available options are TCP , UDP , ICMP , ICMPv6 , and SCTP . rules array rules defines a list of filtering rules on the eBPF Agents. When filtering is enabled, by default, flows that don't match any rule are rejected. To change the default, you can define a rule that accepts everything: { action: "Accept", cidr: "0.0.0.0/0" } , and then refine with rejecting rules. Unsupported *. sampling integer sampling sampling rate for the matched flows, overriding the global sampling defined at spec.agent.ebpf.sampling . sourcePorts integer-or-string sourcePorts optionally defines the source ports to filter flows by. To filter a single port, set a single port as an integer value. For example, sourcePorts: 80 . To filter a range of ports, use a "start-end" range in string format. For example, sourcePorts: "80-100" . 
To filter two ports, use a "port1,port2" in string format. For example, ports: "80,100" . tcpFlags string tcpFlags optionally defines TCP flags to filter flows by. In addition to the standard flags (RFC-9293), you can also filter by one of the three following combinations: SYN-ACK , FIN-ACK , and RST-ACK . 13.1.10. .spec.agent.ebpf.flowFilter.rules Description rules defines a list of filtering rules on the eBPF Agents. When filtering is enabled, by default, flows that don't match any rule are rejected. To change the default, you can define a rule that accepts everything: { action: "Accept", cidr: "0.0.0.0/0" } , and then refine with rejecting rules. Unsupported *. Type array 13.1.11. .spec.agent.ebpf.flowFilter.rules[] Description EBPFFlowFilterRule defines the desired eBPF agent configuration regarding flow filtering rule. Type object Property Type Description action string action defines the action to perform on the flows that match the filter. The available options are Accept , which is the default, and Reject . cidr string cidr defines the IP CIDR to filter flows by. Examples: 10.10.10.0/24 or 100:100:100:100::/64 destPorts integer-or-string destPorts optionally defines the destination ports to filter flows by. To filter a single port, set a single port as an integer value. For example, destPorts: 80 . To filter a range of ports, use a "start-end" range in string format. For example, destPorts: "80-100" . To filter two ports, use a "port1,port2" in string format. For example, ports: "80,100" . direction string direction optionally defines a direction to filter flows by. The available options are Ingress and Egress . icmpCode integer icmpCode , for Internet Control Message Protocol (ICMP) traffic, optionally defines the ICMP code to filter flows by. icmpType integer icmpType , for ICMP traffic, optionally defines the ICMP type to filter flows by. peerCIDR string peerCIDR defines the Peer IP CIDR to filter flows by. Examples: 10.10.10.0/24 or 100:100:100:100::/64 peerIP string peerIP optionally defines the remote IP address to filter flows by. Example: 10.10.10.10 . pktDrops boolean pktDrops optionally filters only flows containing packet drops. ports integer-or-string ports optionally defines the ports to filter flows by. It is used both for source and destination ports. To filter a single port, set a single port as an integer value. For example, ports: 80 . To filter a range of ports, use a "start-end" range in string format. For example, ports: "80-100" . To filter two ports, use a "port1,port2" in string format. For example, ports: "80,100" . protocol string protocol optionally defines a protocol to filter flows by. The available options are TCP , UDP , ICMP , ICMPv6 , and SCTP . sampling integer sampling sampling rate for the matched flows, overriding the global sampling defined at spec.agent.ebpf.sampling . sourcePorts integer-or-string sourcePorts optionally defines the source ports to filter flows by. To filter a single port, set a single port as an integer value. For example, sourcePorts: 80 . To filter a range of ports, use a "start-end" range in string format. For example, sourcePorts: "80-100" . To filter two ports, use a "port1,port2" in string format. For example, ports: "80,100" . tcpFlags string tcpFlags optionally defines TCP flags to filter flows by. In addition to the standard flags (RFC-9293), you can also filter by one of the three following combinations: SYN-ACK , FIN-ACK , and RST-ACK . 13.1.12. 
.spec.agent.ebpf.metrics Description metrics defines the eBPF agent configuration regarding metrics. Type object Property Type Description disableAlerts array (string) disableAlerts is a list of alerts that should be disabled. Possible values are: NetObservDroppedFlows , which is triggered when the eBPF agent is missing packets or flows, such as when the BPF hashmap is busy or full, or the capacity limiter is being triggered. enable boolean Set enable to false to disable eBPF agent metrics collection. It is enabled by default. server object Metrics server endpoint configuration for the Prometheus scraper. 13.1.13. .spec.agent.ebpf.metrics.server Description Metrics server endpoint configuration for the Prometheus scraper. Type object Property Type Description port integer The metrics server HTTP port. tls object TLS configuration. 13.1.14. .spec.agent.ebpf.metrics.server.tls Description TLS configuration. Type object Required type Property Type Description insecureSkipVerify boolean insecureSkipVerify allows skipping client-side verification of the provided certificate. If set to true , the providedCaFile field is ignored. provided object TLS configuration when type is set to Provided . providedCaFile object Reference to the CA file when type is set to Provided . type string Select the type of TLS configuration: - Disabled (default) to not configure TLS for the endpoint. - Provided to manually provide cert file and a key file. Unsupported *. - Auto to use OpenShift Container Platform auto generated certificate using annotations. 13.1.15. .spec.agent.ebpf.metrics.server.tls.provided Description TLS configuration when type is set to Provided . Type object Property Type Description certFile string certFile defines the path to the certificate file name within the config map or secret. certKey string certKey defines the path to the certificate private key file name within the config map or secret. Omit when the key is not necessary. name string Name of the config map or secret containing certificates. namespace string Namespace of the config map or secret containing certificates. If omitted, the default is to use the same namespace as where Network Observability is deployed. If the namespace is different, the config map or the secret is copied so that it can be mounted as required. type string Type for the certificate reference: configmap or secret . 13.1.16. .spec.agent.ebpf.metrics.server.tls.providedCaFile Description Reference to the CA file when type is set to Provided . Type object Property Type Description file string File name within the config map or secret. name string Name of the config map or secret containing the file. namespace string Namespace of the config map or secret containing the file. If omitted, the default is to use the same namespace as where Network Observability is deployed. If the namespace is different, the config map or the secret is copied so that it can be mounted as required. type string Type for the file reference: configmap or secret . 13.1.17. .spec.agent.ebpf.resources Description resources are the compute resources required by this container. For more information, see https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ Type object Property Type Description limits integer-or-string Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ requests integer-or-string Requests describes the minimum amount of compute resources required. 
If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. Requests cannot exceed Limits. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ 13.1.18. .spec.consolePlugin Description consolePlugin defines the settings related to the OpenShift Container Platform Console plugin, when available. Type object Property Type Description advanced object advanced allows setting some aspects of the internal configuration of the console plugin. This section is aimed mostly for debugging and fine-grained performance optimizations, such as GOGC and GOMAXPROCS env vars. Set these values at your own risk. autoscaler object autoscaler spec of a horizontal pod autoscaler to set up for the plugin Deployment. Refer to HorizontalPodAutoscaler documentation (autoscaling/v2). enable boolean Enables the console plugin deployment. imagePullPolicy string imagePullPolicy is the Kubernetes pull policy for the image defined above logLevel string logLevel for the console plugin backend portNaming object portNaming defines the configuration of the port-to-service name translation quickFilters array quickFilters configures quick filter presets for the Console plugin replicas integer replicas defines the number of replicas (pods) to start. resources object resources , in terms of compute resources, required by this container. For more information, see https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ 13.1.19. .spec.consolePlugin.advanced Description advanced allows setting some aspects of the internal configuration of the console plugin. This section is aimed mostly for debugging and fine-grained performance optimizations, such as GOGC and GOMAXPROCS env vars. Set these values at your own risk. Type object Property Type Description args array (string) args allows passing custom arguments to underlying components. Useful for overriding some parameters, such as a URL or a configuration path, that should not be publicly exposed as part of the FlowCollector descriptor, as they are only useful in edge debug or support scenarios. env object (string) env allows passing custom environment variables to underlying components. Useful for passing some very concrete performance-tuning options, such as GOGC and GOMAXPROCS , that should not be publicly exposed as part of the FlowCollector descriptor, as they are only useful in edge debug or support scenarios. port integer port is the plugin service port. Do not use 9002, which is reserved for metrics. register boolean register allows, when set to true , to automatically register the provided console plugin with the OpenShift Container Platform Console operator. When set to false , you can still register it manually by editing console.operator.openshift.io/cluster with the following command: oc patch console.operator.openshift.io cluster --type='json' -p '[{"op": "add", "path": "/spec/plugins/-", "value": "netobserv-plugin"}]' scheduling object scheduling controls how the pods are scheduled on nodes. 13.1.20. .spec.consolePlugin.advanced.scheduling Description scheduling controls how the pods are scheduled on nodes. Type object Property Type Description affinity object If specified, the pod's scheduling constraints. For documentation, refer to https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1/#scheduling . 
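To illustrate how these console plugin settings fit together, here is a hedged fragment of the FlowCollector spec; the values are placeholders, and the remaining scheduling properties (nodeSelector, priorityClassName, tolerations) continue just below.

spec:
  consolePlugin:
    enable: true
    replicas: 2
    logLevel: info
    advanced:
      register: true                           # auto-register the plugin with the Console operator
      port: 9001                               # illustrative service port; 9002 is reserved for metrics
      env:
        GOGC: "400"                            # debug or support tuning only
      scheduling:
        nodeSelector:
          node-role.kubernetes.io/worker: ""   # illustrative placement constraint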
nodeSelector object (string) nodeSelector allows scheduling of pods only onto nodes that have each of the specified labels. For documentation, refer to https://kubernetes.io/docs/concepts/configuration/assign-pod-node/ . priorityClassName string If specified, indicates the pod's priority. For documentation, refer to https://kubernetes.io/docs/concepts/scheduling-eviction/pod-priority-preemption/#how-to-use-priority-and-preemption . If not specified, default priority is used, or zero if there is no default. tolerations array tolerations is a list of tolerations that allow the pod to schedule onto nodes with matching taints. For documentation, refer to https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1/#scheduling . 13.1.21. .spec.consolePlugin.advanced.scheduling.affinity Description If specified, the pod's scheduling constraints. For documentation, refer to https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1/#scheduling . Type object 13.1.22. .spec.consolePlugin.advanced.scheduling.tolerations Description tolerations is a list of tolerations that allow the pod to schedule onto nodes with matching taints. For documentation, refer to https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1/#scheduling . Type array 13.1.23. .spec.consolePlugin.autoscaler Description autoscaler spec of a horizontal pod autoscaler to set up for the plugin Deployment. Refer to HorizontalPodAutoscaler documentation (autoscaling/v2). Type object 13.1.24. .spec.consolePlugin.portNaming Description portNaming defines the configuration of the port-to-service name translation Type object Property Type Description enable boolean Enable the console plugin port-to-service name translation portNames object (string) portNames defines additional port names to use in the console, for example, portNames: {"3100": "loki"} . 13.1.25. .spec.consolePlugin.quickFilters Description quickFilters configures quick filter presets for the Console plugin Type array 13.1.26. .spec.consolePlugin.quickFilters[] Description QuickFilter defines preset configuration for Console's quick filters Type object Required filter name Property Type Description default boolean default defines whether this filter should be active by default or not filter object (string) filter is a set of keys and values to be set when this filter is selected. Each key can relate to a list of values using a coma-separated string, for example, filter: {"src_namespace": "namespace1,namespace2"} . name string Name of the filter, that is displayed in the Console 13.1.27. .spec.consolePlugin.resources Description resources , in terms of compute resources, required by this container. For more information, see https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ Type object Property Type Description limits integer-or-string Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ requests integer-or-string Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. Requests cannot exceed Limits. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ 13.1.28. .spec.exporters Description exporters defines additional optional exporters for custom consumption or storage. Type array 13.1.29. 
.spec.exporters[] Description FlowCollectorExporter defines an additional exporter to send enriched flows to. Type object Required type Property Type Description ipfix object IPFIX configuration, such as the IP address and port to send enriched IPFIX flows to. kafka object Kafka configuration, such as the address and topic, to send enriched flows to. openTelemetry object OpenTelemetry configuration, such as the IP address and port to send enriched logs or metrics to. type string type selects the type of exporters. The available options are Kafka , IPFIX , and OpenTelemetry . 13.1.30. .spec.exporters[].ipfix Description IPFIX configuration, such as the IP address and port to send enriched IPFIX flows to. Type object Required targetHost targetPort Property Type Description targetHost string Address of the IPFIX external receiver. targetPort integer Port for the IPFIX external receiver. transport string Transport protocol ( TCP or UDP ) to be used for the IPFIX connection, defaults to TCP . 13.1.31. .spec.exporters[].kafka Description Kafka configuration, such as the address and topic, to send enriched flows to. Type object Required address topic Property Type Description address string Address of the Kafka server sasl object SASL authentication configuration. Unsupported *. tls object TLS client configuration. When using TLS, verify that the address matches the Kafka port used for TLS, generally 9093. topic string Kafka topic to use. It must exist. Network Observability does not create it. 13.1.32. .spec.exporters[].kafka.sasl Description SASL authentication configuration. Unsupported *. Type object Property Type Description clientIDReference object Reference to the secret or config map containing the client ID clientSecretReference object Reference to the secret or config map containing the client secret type string Type of SASL authentication to use, or Disabled if SASL is not used 13.1.33. .spec.exporters[].kafka.sasl.clientIDReference Description Reference to the secret or config map containing the client ID Type object Property Type Description file string File name within the config map or secret. name string Name of the config map or secret containing the file. namespace string Namespace of the config map or secret containing the file. If omitted, the default is to use the same namespace as where Network Observability is deployed. If the namespace is different, the config map or the secret is copied so that it can be mounted as required. type string Type for the file reference: configmap or secret . 13.1.34. .spec.exporters[].kafka.sasl.clientSecretReference Description Reference to the secret or config map containing the client secret Type object Property Type Description file string File name within the config map or secret. name string Name of the config map or secret containing the file. namespace string Namespace of the config map or secret containing the file. If omitted, the default is to use the same namespace as where Network Observability is deployed. If the namespace is different, the config map or the secret is copied so that it can be mounted as required. type string Type for the file reference: configmap or secret . 13.1.35. .spec.exporters[].kafka.tls Description TLS client configuration. When using TLS, verify that the address matches the Kafka port used for TLS, generally 9093. Type object Property Type Description caCert object caCert defines the reference of the certificate for the Certificate Authority. 
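Combining the exporter properties above, a hypothetical spec.exporters list that sends enriched flows both to Kafka and to an external IPFIX collector could look like the following fragment. The addresses, topic, and secret names are invented for the example, and the remaining TLS client properties are listed just below.

spec:
  exporters:
    - type: Kafka
      kafka:
        address: "kafka-cluster-kafka-bootstrap.netobserv:9093"   # assumed broker address (TLS port 9093)
        topic: network-flows-export                               # the topic must already exist
        tls:
          enable: true
          caCert:
            type: secret
            name: kafka-cluster-cluster-ca-cert                   # assumed secret name
            certFile: ca.crt
    - type: IPFIX
      ipfix:
        targetHost: ipfix-collector.example.com                   # assumed external receiver
        targetPort: 4739
        transport: TCP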
enable boolean Enable TLS insecureSkipVerify boolean insecureSkipVerify allows skipping client-side verification of the server certificate. If set to true , the caCert field is ignored. userCert object userCert defines the user certificate reference and is used for mTLS. When you use one-way TLS, you can ignore this property. 13.1.36. .spec.exporters[].kafka.tls.caCert Description caCert defines the reference of the certificate for the Certificate Authority. Type object Property Type Description certFile string certFile defines the path to the certificate file name within the config map or secret. certKey string certKey defines the path to the certificate private key file name within the config map or secret. Omit when the key is not necessary. name string Name of the config map or secret containing certificates. namespace string Namespace of the config map or secret containing certificates. If omitted, the default is to use the same namespace as where Network Observability is deployed. If the namespace is different, the config map or the secret is copied so that it can be mounted as required. type string Type for the certificate reference: configmap or secret . 13.1.37. .spec.exporters[].kafka.tls.userCert Description userCert defines the user certificate reference and is used for mTLS. When you use one-way TLS, you can ignore this property. Type object Property Type Description certFile string certFile defines the path to the certificate file name within the config map or secret. certKey string certKey defines the path to the certificate private key file name within the config map or secret. Omit when the key is not necessary. name string Name of the config map or secret containing certificates. namespace string Namespace of the config map or secret containing certificates. If omitted, the default is to use the same namespace as where Network Observability is deployed. If the namespace is different, the config map or the secret is copied so that it can be mounted as required. type string Type for the certificate reference: configmap or secret . 13.1.38. .spec.exporters[].openTelemetry Description OpenTelemetry configuration, such as the IP address and port to send enriched logs or metrics to. Type object Required targetHost targetPort Property Type Description fieldsMapping array Custom fields mapping to an OpenTelemetry conformant format. By default, Network Observability format proposal is used: https://github.com/rhobs/observability-data-model/blob/main/network-observability.md#format-proposal . As there is currently no accepted standard for L3 or L4 enriched network logs, you can freely override it with your own. headers object (string) Headers to add to messages (optional) logs object OpenTelemetry configuration for logs. metrics object OpenTelemetry configuration for metrics. protocol string Protocol of the OpenTelemetry connection. The available options are http and grpc . targetHost string Address of the OpenTelemetry receiver. targetPort integer Port for the OpenTelemetry receiver. tls object TLS client configuration. 13.1.39. .spec.exporters[].openTelemetry.fieldsMapping Description Custom fields mapping to an OpenTelemetry conformant format. By default, Network Observability format proposal is used: https://github.com/rhobs/observability-data-model/blob/main/network-observability.md#format-proposal . As there is currently no accepted standard for L3 or L4 enriched network logs, you can freely override it with your own. Type array 13.1.40. 
.spec.exporters[].openTelemetry.fieldsMapping[] Description Type object Property Type Description input string multiplier integer output string 13.1.41. .spec.exporters[].openTelemetry.logs Description OpenTelemetry configuration for logs. Type object Property Type Description enable boolean Set enable to true to send logs to an OpenTelemetry receiver. 13.1.42. .spec.exporters[].openTelemetry.metrics Description OpenTelemetry configuration for metrics. Type object Property Type Description enable boolean Set enable to true to send metrics to an OpenTelemetry receiver. pushTimeInterval string Specify how often metrics are sent to a collector. 13.1.43. .spec.exporters[].openTelemetry.tls Description TLS client configuration. Type object Property Type Description caCert object caCert defines the reference of the certificate for the Certificate Authority. enable boolean Enable TLS insecureSkipVerify boolean insecureSkipVerify allows skipping client-side verification of the server certificate. If set to true , the caCert field is ignored. userCert object userCert defines the user certificate reference and is used for mTLS. When you use one-way TLS, you can ignore this property. 13.1.44. .spec.exporters[].openTelemetry.tls.caCert Description caCert defines the reference of the certificate for the Certificate Authority. Type object Property Type Description certFile string certFile defines the path to the certificate file name within the config map or secret. certKey string certKey defines the path to the certificate private key file name within the config map or secret. Omit when the key is not necessary. name string Name of the config map or secret containing certificates. namespace string Namespace of the config map or secret containing certificates. If omitted, the default is to use the same namespace as where Network Observability is deployed. If the namespace is different, the config map or the secret is copied so that it can be mounted as required. type string Type for the certificate reference: configmap or secret . 13.1.45. .spec.exporters[].openTelemetry.tls.userCert Description userCert defines the user certificate reference and is used for mTLS. When you use one-way TLS, you can ignore this property. Type object Property Type Description certFile string certFile defines the path to the certificate file name within the config map or secret. certKey string certKey defines the path to the certificate private key file name within the config map or secret. Omit when the key is not necessary. name string Name of the config map or secret containing certificates. namespace string Namespace of the config map or secret containing certificates. If omitted, the default is to use the same namespace as where Network Observability is deployed. If the namespace is different, the config map or the secret is copied so that it can be mounted as required. type string Type for the certificate reference: configmap or secret . 13.1.46. .spec.kafka Description Kafka configuration, allowing to use Kafka as a broker as part of the flow collection pipeline. Available when the spec.deploymentModel is Kafka . Type object Required address topic Property Type Description address string Address of the Kafka server sasl object SASL authentication configuration. Unsupported *. tls object TLS client configuration. When using TLS, verify that the address matches the Kafka port used for TLS, generally 9093. topic string Kafka topic to use. It must exist. Network Observability does not create it. 13.1.47. 
.spec.kafka.sasl Description SASL authentication configuration. Unsupported *. Type object Property Type Description clientIDReference object Reference to the secret or config map containing the client ID clientSecretReference object Reference to the secret or config map containing the client secret type string Type of SASL authentication to use, or Disabled if SASL is not used 13.1.48. .spec.kafka.sasl.clientIDReference Description Reference to the secret or config map containing the client ID Type object Property Type Description file string File name within the config map or secret. name string Name of the config map or secret containing the file. namespace string Namespace of the config map or secret containing the file. If omitted, the default is to use the same namespace as where Network Observability is deployed. If the namespace is different, the config map or the secret is copied so that it can be mounted as required. type string Type for the file reference: configmap or secret . 13.1.49. .spec.kafka.sasl.clientSecretReference Description Reference to the secret or config map containing the client secret Type object Property Type Description file string File name within the config map or secret. name string Name of the config map or secret containing the file. namespace string Namespace of the config map or secret containing the file. If omitted, the default is to use the same namespace as where Network Observability is deployed. If the namespace is different, the config map or the secret is copied so that it can be mounted as required. type string Type for the file reference: configmap or secret . 13.1.50. .spec.kafka.tls Description TLS client configuration. When using TLS, verify that the address matches the Kafka port used for TLS, generally 9093. Type object Property Type Description caCert object caCert defines the reference of the certificate for the Certificate Authority. enable boolean Enable TLS insecureSkipVerify boolean insecureSkipVerify allows skipping client-side verification of the server certificate. If set to true , the caCert field is ignored. userCert object userCert defines the user certificate reference and is used for mTLS. When you use one-way TLS, you can ignore this property. 13.1.51. .spec.kafka.tls.caCert Description caCert defines the reference of the certificate for the Certificate Authority. Type object Property Type Description certFile string certFile defines the path to the certificate file name within the config map or secret. certKey string certKey defines the path to the certificate private key file name within the config map or secret. Omit when the key is not necessary. name string Name of the config map or secret containing certificates. namespace string Namespace of the config map or secret containing certificates. If omitted, the default is to use the same namespace as where Network Observability is deployed. If the namespace is different, the config map or the secret is copied so that it can be mounted as required. type string Type for the certificate reference: configmap or secret . 13.1.52. .spec.kafka.tls.userCert Description userCert defines the user certificate reference and is used for mTLS. When you use one-way TLS, you can ignore this property. Type object Property Type Description certFile string certFile defines the path to the certificate file name within the config map or secret. certKey string certKey defines the path to the certificate private key file name within the config map or secret. Omit when the key is not necessary. 
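For the Kafka deployment model, the broker settings documented above combine as in this fragment of the FlowCollector spec. The server address, topic, and secret name are assumptions for illustration, and the remaining certificate reference fields (name, namespace, type) follow just below.

spec:
  deploymentModel: Kafka
  kafka:
    address: "kafka-cluster-kafka-bootstrap.netobserv:9093"   # assumed address; use the TLS port, generally 9093
    topic: network-flows                                      # must exist; Network Observability does not create it
    tls:
      enable: true
      insecureSkipVerify: false
      caCert:
        type: secret
        name: kafka-cluster-cluster-ca-cert                   # assumed secret holding the CA certificate
        certFile: ca.crt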
name string Name of the config map or secret containing certificates. namespace string Namespace of the config map or secret containing certificates. If omitted, the default is to use the same namespace as where Network Observability is deployed. If the namespace is different, the config map or the secret is copied so that it can be mounted as required. type string Type for the certificate reference: configmap or secret . 13.1.53. .spec.loki Description loki , the flow store, client settings. Type object Required mode Property Type Description advanced object advanced allows setting some aspects of the internal configuration of the Loki clients. This section is aimed mostly for debugging and fine-grained performance optimizations. enable boolean Set enable to true to store flows in Loki. The Console plugin can use either Loki or Prometheus as a data source for metrics (see also spec.prometheus.querier ), or both. Not all queries are transposable from Loki to Prometheus. Hence, if Loki is disabled, some features of the plugin are disabled as well, such as getting per-pod information or viewing raw flows. If both Prometheus and Loki are enabled, Prometheus takes precedence and Loki is used as a fallback for queries that Prometheus cannot handle. If they are both disabled, the Console plugin is not deployed. lokiStack object Loki configuration for LokiStack mode. This is useful for an easy Loki Operator configuration. It is ignored for other modes. manual object Loki configuration for Manual mode. This is the most flexible configuration. It is ignored for other modes. microservices object Loki configuration for Microservices mode. Use this option when Loki is installed using the microservices deployment mode ( https://grafana.com/docs/loki/latest/fundamentals/architecture/deployment-modes/#microservices-mode ). It is ignored for other modes. mode string mode must be set according to the installation mode of Loki: - Use LokiStack when Loki is managed using the Loki Operator - Use Monolithic when Loki is installed as a monolithic workload - Use Microservices when Loki is installed as microservices, but without Loki Operator - Use Manual if none of the options above match your setup monolithic object Loki configuration for Monolithic mode. Use this option when Loki is installed using the monolithic deployment mode ( https://grafana.com/docs/loki/latest/fundamentals/architecture/deployment-modes/#monolithic-mode ). It is ignored for other modes. readTimeout string readTimeout is the maximum console plugin loki query total time limit. A timeout of zero means no timeout. writeBatchSize integer writeBatchSize is the maximum batch size (in bytes) of Loki logs to accumulate before sending. writeBatchWait string writeBatchWait is the maximum time to wait before sending a Loki batch. writeTimeout string writeTimeout is the maximum Loki time connection / request limit. A timeout of zero means no timeout. 13.1.54. .spec.loki.advanced Description advanced allows setting some aspects of the internal configuration of the Loki clients. This section is aimed mostly for debugging and fine-grained performance optimizations. Type object Property Type Description staticLabels object (string) staticLabels is a map of common labels to set on each flow in Loki storage. writeMaxBackoff string writeMaxBackoff is the maximum backoff time for Loki client connection between retries. writeMaxRetries integer writeMaxRetries is the maximum number of retries for Loki client connections. 
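A common way to combine the Loki settings above is the LokiStack mode with a few write-tuning fields, as in this assumption-based fragment; the LokiStack sub-fields and the remaining backoff settings are detailed just below.

spec:
  loki:
    enable: true
    mode: LokiStack
    lokiStack:
      name: loki                 # name of an existing LokiStack resource (assumed)
      namespace: netobserv       # defaults to spec.namespace when omitted
    readTimeout: 30s
    writeTimeout: 10s
    writeBatchWait: 1s
    writeBatchSize: 10485760     # about 10 MiB, illustrative
    advanced:
      writeMaxRetries: 2
      writeMaxBackoff: 5s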
writeMinBackoff string writeMinBackoff is the initial backoff time for Loki client connection between retries. 13.1.55. .spec.loki.lokiStack Description Loki configuration for LokiStack mode. This is useful for an easy Loki Operator configuration. It is ignored for other modes. Type object Required name Property Type Description name string Name of an existing LokiStack resource to use. namespace string Namespace where this LokiStack resource is located. If omitted, it is assumed to be the same as spec.namespace . 13.1.56. .spec.loki.manual Description Loki configuration for Manual mode. This is the most flexible configuration. It is ignored for other modes. Type object Property Type Description authToken string authToken describes the way to get a token to authenticate to Loki. - Disabled does not send any token with the request. - Forward forwards the user token for authorization. - Host [deprecated *] - uses the local pod service account to authenticate to Loki. When using the Loki Operator, this must be set to Forward . ingesterUrl string ingesterUrl is the address of an existing Loki ingester service to push the flows to. When using the Loki Operator, set it to the Loki gateway service with the network tenant set in path, for example https://loki-gateway-http.netobserv.svc:8080/api/logs/v1/network . querierUrl string querierUrl specifies the address of the Loki querier service. When using the Loki Operator, set it to the Loki gateway service with the network tenant set in path, for example https://loki-gateway-http.netobserv.svc:8080/api/logs/v1/network . statusTls object TLS client configuration for Loki status URL. statusUrl string statusUrl specifies the address of the Loki /ready , /metrics and /config endpoints, in case it is different from the Loki querier URL. If empty, the querierUrl value is used. This is useful to show error messages and some context in the frontend. When using the Loki Operator, set it to the Loki HTTP query frontend service, for example https://loki-query-frontend-http.netobserv.svc:3100/ . statusTLS configuration is used when statusUrl is set. tenantID string tenantID is the Loki X-Scope-OrgID that identifies the tenant for each request. When using the Loki Operator, set it to network , which corresponds to a special tenant mode. tls object TLS client configuration for Loki URL. 13.1.57. .spec.loki.manual.statusTls Description TLS client configuration for Loki status URL. Type object Property Type Description caCert object caCert defines the reference of the certificate for the Certificate Authority. enable boolean Enable TLS insecureSkipVerify boolean insecureSkipVerify allows skipping client-side verification of the server certificate. If set to true , the caCert field is ignored. userCert object userCert defines the user certificate reference and is used for mTLS. When you use one-way TLS, you can ignore this property. 13.1.58. .spec.loki.manual.statusTls.caCert Description caCert defines the reference of the certificate for the Certificate Authority. Type object Property Type Description certFile string certFile defines the path to the certificate file name within the config map or secret. certKey string certKey defines the path to the certificate private key file name within the config map or secret. Omit when the key is not necessary. name string Name of the config map or secret containing certificates. namespace string Namespace of the config map or secret containing certificates. 
If omitted, the default is to use the same namespace as where Network Observability is deployed. If the namespace is different, the config map or the secret is copied so that it can be mounted as required. type string Type for the certificate reference: configmap or secret . 13.1.59. .spec.loki.manual.statusTls.userCert Description userCert defines the user certificate reference and is used for mTLS. When you use one-way TLS, you can ignore this property. Type object Property Type Description certFile string certFile defines the path to the certificate file name within the config map or secret. certKey string certKey defines the path to the certificate private key file name within the config map or secret. Omit when the key is not necessary. name string Name of the config map or secret containing certificates. namespace string Namespace of the config map or secret containing certificates. If omitted, the default is to use the same namespace as where Network Observability is deployed. If the namespace is different, the config map or the secret is copied so that it can be mounted as required. type string Type for the certificate reference: configmap or secret . 13.1.60. .spec.loki.manual.tls Description TLS client configuration for Loki URL. Type object Property Type Description caCert object caCert defines the reference of the certificate for the Certificate Authority. enable boolean Enable TLS insecureSkipVerify boolean insecureSkipVerify allows skipping client-side verification of the server certificate. If set to true , the caCert field is ignored. userCert object userCert defines the user certificate reference and is used for mTLS. When you use one-way TLS, you can ignore this property. 13.1.61. .spec.loki.manual.tls.caCert Description caCert defines the reference of the certificate for the Certificate Authority. Type object Property Type Description certFile string certFile defines the path to the certificate file name within the config map or secret. certKey string certKey defines the path to the certificate private key file name within the config map or secret. Omit when the key is not necessary. name string Name of the config map or secret containing certificates. namespace string Namespace of the config map or secret containing certificates. If omitted, the default is to use the same namespace as where Network Observability is deployed. If the namespace is different, the config map or the secret is copied so that it can be mounted as required. type string Type for the certificate reference: configmap or secret . 13.1.62. .spec.loki.manual.tls.userCert Description userCert defines the user certificate reference and is used for mTLS. When you use one-way TLS, you can ignore this property. Type object Property Type Description certFile string certFile defines the path to the certificate file name within the config map or secret. certKey string certKey defines the path to the certificate private key file name within the config map or secret. Omit when the key is not necessary. name string Name of the config map or secret containing certificates. namespace string Namespace of the config map or secret containing certificates. If omitted, the default is to use the same namespace as where Network Observability is deployed. If the namespace is different, the config map or the secret is copied so that it can be mounted as required. type string Type for the certificate reference: configmap or secret . 13.1.63. .spec.loki.microservices Description Loki configuration for Microservices mode. 
Use this option when Loki is installed using the microservices deployment mode ( https://grafana.com/docs/loki/latest/fundamentals/architecture/deployment-modes/#microservices-mode ). It is ignored for other modes. Type object Property Type Description ingesterUrl string ingesterUrl is the address of an existing Loki ingester service to push the flows to. querierUrl string querierURL specifies the address of the Loki querier service. tenantID string tenantID is the Loki X-Scope-OrgID header that identifies the tenant for each request. tls object TLS client configuration for Loki URL. 13.1.64. .spec.loki.microservices.tls Description TLS client configuration for Loki URL. Type object Property Type Description caCert object caCert defines the reference of the certificate for the Certificate Authority. enable boolean Enable TLS insecureSkipVerify boolean insecureSkipVerify allows skipping client-side verification of the server certificate. If set to true , the caCert field is ignored. userCert object userCert defines the user certificate reference and is used for mTLS. When you use one-way TLS, you can ignore this property. 13.1.65. .spec.loki.microservices.tls.caCert Description caCert defines the reference of the certificate for the Certificate Authority. Type object Property Type Description certFile string certFile defines the path to the certificate file name within the config map or secret. certKey string certKey defines the path to the certificate private key file name within the config map or secret. Omit when the key is not necessary. name string Name of the config map or secret containing certificates. namespace string Namespace of the config map or secret containing certificates. If omitted, the default is to use the same namespace as where Network Observability is deployed. If the namespace is different, the config map or the secret is copied so that it can be mounted as required. type string Type for the certificate reference: configmap or secret . 13.1.66. .spec.loki.microservices.tls.userCert Description userCert defines the user certificate reference and is used for mTLS. When you use one-way TLS, you can ignore this property. Type object Property Type Description certFile string certFile defines the path to the certificate file name within the config map or secret. certKey string certKey defines the path to the certificate private key file name within the config map or secret. Omit when the key is not necessary. name string Name of the config map or secret containing certificates. namespace string Namespace of the config map or secret containing certificates. If omitted, the default is to use the same namespace as where Network Observability is deployed. If the namespace is different, the config map or the secret is copied so that it can be mounted as required. type string Type for the certificate reference: configmap or secret . 13.1.67. .spec.loki.monolithic Description Loki configuration for Monolithic mode. Use this option when Loki is installed using the monolithic deployment mode ( https://grafana.com/docs/loki/latest/fundamentals/architecture/deployment-modes/#monolithic-mode ). It is ignored for other modes. Type object Property Type Description tenantID string tenantID is the Loki X-Scope-OrgID header that identifies the tenant for each request. tls object TLS client configuration for Loki URL. url string url is the unique address of an existing Loki service that points to both the ingester and the querier. 13.1.68. 
.spec.loki.monolithic.tls Description TLS client configuration for Loki URL. Type object Property Type Description caCert object caCert defines the reference of the certificate for the Certificate Authority. enable boolean Enable TLS insecureSkipVerify boolean insecureSkipVerify allows skipping client-side verification of the server certificate. If set to true , the caCert field is ignored. userCert object userCert defines the user certificate reference and is used for mTLS. When you use one-way TLS, you can ignore this property. 13.1.69. .spec.loki.monolithic.tls.caCert Description caCert defines the reference of the certificate for the Certificate Authority. Type object Property Type Description certFile string certFile defines the path to the certificate file name within the config map or secret. certKey string certKey defines the path to the certificate private key file name within the config map or secret. Omit when the key is not necessary. name string Name of the config map or secret containing certificates. namespace string Namespace of the config map or secret containing certificates. If omitted, the default is to use the same namespace as where Network Observability is deployed. If the namespace is different, the config map or the secret is copied so that it can be mounted as required. type string Type for the certificate reference: configmap or secret . 13.1.70. .spec.loki.monolithic.tls.userCert Description userCert defines the user certificate reference and is used for mTLS. When you use one-way TLS, you can ignore this property. Type object Property Type Description certFile string certFile defines the path to the certificate file name within the config map or secret. certKey string certKey defines the path to the certificate private key file name within the config map or secret. Omit when the key is not necessary. name string Name of the config map or secret containing certificates. namespace string Namespace of the config map or secret containing certificates. If omitted, the default is to use the same namespace as where Network Observability is deployed. If the namespace is different, the config map or the secret is copied so that it can be mounted as required. type string Type for the certificate reference: configmap or secret . 13.1.71. .spec.networkPolicy Description networkPolicy defines ingress network policy settings for Network Observability components isolation. Type object Property Type Description additionalNamespaces array (string) additionalNamespaces contains additional namespaces allowed to connect to the Network Observability namespace. It provides flexibility in the network policy configuration, but if you need a more specific configuration, you can disable it and install your own instead. enable boolean Set enable to true to deploy network policies on the namespaces used by Network Observability (main and privileged). It is disabled by default. These network policies better isolate the Network Observability components to prevent undesired connections to them. To increase the security of connections, enable this option or create your own network policy. 13.1.72. .spec.processor Description processor defines the settings of the component that receives the flows from the agent, enriches them, generates metrics, and forwards them to the Loki persistence layer and/or any available exporter. Type object Property Type Description addZone boolean addZone allows availability zone awareness by labelling flows with their source and destination zones. 
This feature requires the "topology.kubernetes.io/zone" label to be set on nodes. advanced object advanced allows setting some aspects of the internal configuration of the flow processor. This section is aimed mostly for debugging and fine-grained performance optimizations, such as GOGC and GOMAXPROCS env vars. Set these values at your own risk. clusterName string clusterName is the name of the cluster to appear in the flows data. This is useful in a multi-cluster context. When using OpenShift Container Platform, leave empty to make it automatically determined. deduper object deduper allows you to sample or drop flows identified as duplicates, in order to save on resource usage. Unsupported *. filters array filters lets you define custom filters to limit the amount of generated flows. These filters provide more flexibility than the eBPF Agent filters (in spec.agent.ebpf.flowFilter ), such as allowing to filter by Kubernetes namespace, but with a lesser improvement in performance. Unsupported *. imagePullPolicy string imagePullPolicy is the Kubernetes pull policy for the image defined above kafkaConsumerAutoscaler object kafkaConsumerAutoscaler is the spec of a horizontal pod autoscaler to set up for flowlogs-pipeline-transformer , which consumes Kafka messages. This setting is ignored when Kafka is disabled. Refer to HorizontalPodAutoscaler documentation (autoscaling/v2). kafkaConsumerBatchSize integer kafkaConsumerBatchSize indicates to the broker the maximum batch size, in bytes, that the consumer accepts. Ignored when not using Kafka. Default: 10MB. kafkaConsumerQueueCapacity integer kafkaConsumerQueueCapacity defines the capacity of the internal message queue used in the Kafka consumer client. Ignored when not using Kafka. kafkaConsumerReplicas integer kafkaConsumerReplicas defines the number of replicas (pods) to start for flowlogs-pipeline-transformer , which consumes Kafka messages. This setting is ignored when Kafka is disabled. logLevel string logLevel of the processor runtime logTypes string logTypes defines the desired record types to generate. Possible values are: - Flows to export regular network flows. This is the default. - Conversations to generate events for started conversations, ended conversations as well as periodic "tick" updates. - EndedConversations to generate only ended conversations events. - All to generate both network flows and all conversations events. It is not recommended due to the impact on resources footprint. metrics object Metrics define the processor configuration regarding metrics multiClusterDeployment boolean Set multiClusterDeployment to true to enable multi clusters feature. This adds clusterName label to flows data resources object resources are the compute resources required by this container. For more information, see https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ subnetLabels object subnetLabels allows to define custom labels on subnets and IPs or to enable automatic labelling of recognized subnets in OpenShift Container Platform, which is used to identify cluster external traffic. When a subnet matches the source or destination IP of a flow, a corresponding field is added: SrcSubnetLabel or DstSubnetLabel . 13.1.73. .spec.processor.advanced Description advanced allows setting some aspects of the internal configuration of the flow processor. This section is aimed mostly for debugging and fine-grained performance optimizations, such as GOGC and GOMAXPROCS env vars. Set these values at your own risk. 
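Tying these processor options together, the following hedged fragment shows a possible spec.processor configuration; all values are illustrative, and the deduper, filters, and advanced sub-fields are detailed in the sections that follow.

spec:
  processor:
    logTypes: Flows                  # or Conversations / EndedConversations / All
    addZone: true                    # requires the topology.kubernetes.io/zone label on nodes
    multiClusterDeployment: false
    kafkaConsumerReplicas: 3         # only relevant with the Kafka deployment model
    deduper:                         # unsupported feature, shown for illustration only
      mode: Sample
      sampling: 50
    advanced:
      conversationEndTimeout: 10s    # only meaningful when conversation tracking is enabled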
Type object Property Type Description conversationEndTimeout string conversationEndTimeout is the time to wait after a network flow is received, to consider the conversation ended. This delay is ignored when a FIN packet is collected for TCP flows (see conversationTerminatingTimeout instead). conversationHeartbeatInterval string conversationHeartbeatInterval is the time to wait between "tick" events of a conversation conversationTerminatingTimeout string conversationTerminatingTimeout is the time to wait from detected FIN flag to end a conversation. Only relevant for TCP flows. dropUnusedFields boolean dropUnusedFields [deprecated *] this setting is not used anymore. enableKubeProbes boolean enableKubeProbes is a flag to enable or disable Kubernetes liveness and readiness probes env object (string) env allows passing custom environment variables to underlying components. Useful for passing some very concrete performance-tuning options, such as GOGC and GOMAXPROCS , that should not be publicly exposed as part of the FlowCollector descriptor, as they are only useful in edge debug or support scenarios. healthPort integer healthPort is a collector HTTP port in the Pod that exposes the health check API port integer Port of the flow collector (host port). By convention, some values are forbidden. It must be greater than 1024 and different from 4500, 4789 and 6081. profilePort integer profilePort allows setting up a Go pprof profiler listening to this port scheduling object scheduling controls how the pods are scheduled on nodes. secondaryNetworks array Defines secondary networks to be checked for resources identification. To guarantee a correct identification, indexed values must form an unique identifier across the cluster. If the same index is used by several resources, those resources might be incorrectly labeled. 13.1.74. .spec.processor.advanced.scheduling Description scheduling controls how the pods are scheduled on nodes. Type object Property Type Description affinity object If specified, the pod's scheduling constraints. For documentation, refer to https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1/#scheduling . nodeSelector object (string) nodeSelector allows scheduling of pods only onto nodes that have each of the specified labels. For documentation, refer to https://kubernetes.io/docs/concepts/configuration/assign-pod-node/ . priorityClassName string If specified, indicates the pod's priority. For documentation, refer to https://kubernetes.io/docs/concepts/scheduling-eviction/pod-priority-preemption/#how-to-use-priority-and-preemption . If not specified, default priority is used, or zero if there is no default. tolerations array tolerations is a list of tolerations that allow the pod to schedule onto nodes with matching taints. For documentation, refer to https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1/#scheduling . 13.1.75. .spec.processor.advanced.scheduling.affinity Description If specified, the pod's scheduling constraints. For documentation, refer to https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1/#scheduling . Type object 13.1.76. .spec.processor.advanced.scheduling.tolerations Description tolerations is a list of tolerations that allow the pod to schedule onto nodes with matching taints. For documentation, refer to https://kubernetes.io/docs/reference/kubernetes-api/workload-resources/pod-v1/#scheduling . Type array 13.1.77. 
.spec.processor.advanced.secondaryNetworks Description Defines secondary networks to be checked for resources identification. To guarantee a correct identification, indexed values must form an unique identifier across the cluster. If the same index is used by several resources, those resources might be incorrectly labeled. Type array 13.1.78. .spec.processor.advanced.secondaryNetworks[] Description Type object Required index name Property Type Description index array (string) index is a list of fields to use for indexing the pods. They should form a unique Pod identifier across the cluster. Can be any of: MAC , IP , Interface . Fields absent from the 'k8s.v1.cni.cncf.io/network-status' annotation must not be added to the index. name string name should match the network name as visible in the pods annotation 'k8s.v1.cni.cncf.io/network-status'. 13.1.79. .spec.processor.deduper Description deduper allows you to sample or drop flows identified as duplicates, in order to save on resource usage. Unsupported *. Type object Property Type Description mode string Set the Processor de-duplication mode. It comes in addition to the Agent-based deduplication because the Agent cannot de-duplicate same flows reported from different nodes. - Use Drop to drop every flow considered as duplicates, allowing saving more on resource usage but potentially losing some information such as the network interfaces used from peer, or network events. - Use Sample to randomly keep only one flow on 50, which is the default, among the ones considered as duplicates. This is a compromise between dropping every duplicate or keeping every duplicate. This sampling action comes in addition to the Agent-based sampling. If both Agent and Processor sampling values are 50 , the combined sampling is 1:2500. - Use Disabled to turn off Processor-based de-duplication. sampling integer sampling is the sampling rate when deduper mode is Sample . 13.1.80. .spec.processor.filters Description filters lets you define custom filters to limit the amount of generated flows. These filters provide more flexibility than the eBPF Agent filters (in spec.agent.ebpf.flowFilter ), such as allowing to filter by Kubernetes namespace, but with a lesser improvement in performance. Unsupported *. Type array 13.1.81. .spec.processor.filters[] Description FLPFilterSet defines the desired configuration for FLP-based filtering satisfying all conditions. Type object Property Type Description allOf array filters is a list of matches that must be all satisfied in order to remove a flow. outputTarget string If specified, these filters only target a single output: Loki , Metrics or Exporters . By default, all outputs are targeted. sampling integer sampling is an optional sampling rate to apply to this filter. 13.1.82. .spec.processor.filters[].allOf Description filters is a list of matches that must be all satisfied in order to remove a flow. Type array 13.1.83. .spec.processor.filters[].allOf[] Description FLPSingleFilter defines the desired configuration for a single FLP-based filter. Type object Required field matchType Property Type Description field string Name of the field to filter on. Refer to the documentation for the list of available fields: https://github.com/netobserv/network-observability-operator/blob/main/docs/flows-format.adoc . matchType string Type of matching to apply. value string Value to filter on. When matchType is Equal or NotEqual , you can use field injection with USD(SomeField) to refer to any other field of the flow. 13.1.84. 
.spec.processor.kafkaConsumerAutoscaler Description kafkaConsumerAutoscaler is the spec of a horizontal pod autoscaler to set up for flowlogs-pipeline-transformer , which consumes Kafka messages. This setting is ignored when Kafka is disabled. Refer to HorizontalPodAutoscaler documentation (autoscaling/v2). Type object 13.1.85. .spec.processor.metrics Description Metrics define the processor configuration regarding metrics Type object Property Type Description disableAlerts array (string) disableAlerts is a list of alerts that should be disabled. Possible values are: NetObservNoFlows , which is triggered when no flows are being observed for a certain period. NetObservLokiError , which is triggered when flows are being dropped due to Loki errors. includeList array (string) includeList is a list of metric names to specify which ones to generate. The names correspond to the names in Prometheus without the prefix. For example, namespace_egress_packets_total shows up as netobserv_namespace_egress_packets_total in Prometheus. Note that the more metrics you add, the bigger is the impact on Prometheus workload resources. Metrics enabled by default are: namespace_flows_total , node_ingress_bytes_total , node_egress_bytes_total , workload_ingress_bytes_total , workload_egress_bytes_total , namespace_drop_packets_total (when PacketDrop feature is enabled), namespace_rtt_seconds (when FlowRTT feature is enabled), namespace_dns_latency_seconds (when DNSTracking feature is enabled), namespace_network_policy_events_total (when NetworkEvents feature is enabled). More information, with full list of available metrics: https://github.com/netobserv/network-observability-operator/blob/main/docs/Metrics.md server object Metrics server endpoint configuration for Prometheus scraper 13.1.86. .spec.processor.metrics.server Description Metrics server endpoint configuration for Prometheus scraper Type object Property Type Description port integer The metrics server HTTP port. tls object TLS configuration. 13.1.87. .spec.processor.metrics.server.tls Description TLS configuration. Type object Required type Property Type Description insecureSkipVerify boolean insecureSkipVerify allows skipping client-side verification of the provided certificate. If set to true , the providedCaFile field is ignored. provided object TLS configuration when type is set to Provided . providedCaFile object Reference to the CA file when type is set to Provided . type string Select the type of TLS configuration: - Disabled (default) to not configure TLS for the endpoint. - Provided to manually provide cert file and a key file. Unsupported *. - Auto to use OpenShift Container Platform auto generated certificate using annotations. 13.1.88. .spec.processor.metrics.server.tls.provided Description TLS configuration when type is set to Provided . Type object Property Type Description certFile string certFile defines the path to the certificate file name within the config map or secret. certKey string certKey defines the path to the certificate private key file name within the config map or secret. Omit when the key is not necessary. name string Name of the config map or secret containing certificates. namespace string Namespace of the config map or secret containing certificates. If omitted, the default is to use the same namespace as where Network Observability is deployed. If the namespace is different, the config map or the secret is copied so that it can be mounted as required. 
type string Type for the certificate reference: configmap or secret . 13.1.89. .spec.processor.metrics.server.tls.providedCaFile Description Reference to the CA file when type is set to Provided . Type object Property Type Description file string File name within the config map or secret. name string Name of the config map or secret containing the file. namespace string Namespace of the config map or secret containing the file. If omitted, the default is to use the same namespace as where Network Observability is deployed. If the namespace is different, the config map or the secret is copied so that it can be mounted as required. type string Type for the file reference: configmap or secret . 13.1.90. .spec.processor.resources Description resources are the compute resources required by this container. For more information, see https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ Type object Property Type Description limits integer-or-string Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ requests integer-or-string Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. Requests cannot exceed Limits. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ 13.1.91. .spec.processor.subnetLabels Description subnetLabels allows to define custom labels on subnets and IPs or to enable automatic labelling of recognized subnets in OpenShift Container Platform, which is used to identify cluster external traffic. When a subnet matches the source or destination IP of a flow, a corresponding field is added: SrcSubnetLabel or DstSubnetLabel . Type object Property Type Description customLabels array customLabels allows to customize subnets and IPs labelling, such as to identify cluster-external workloads or web services. If you enable openShiftAutoDetect , customLabels can override the detected subnets in case they overlap. openShiftAutoDetect boolean openShiftAutoDetect allows, when set to true , to detect automatically the machines, pods and services subnets based on the OpenShift Container Platform install configuration and the Cluster Network Operator configuration. Indirectly, this is a way to accurately detect external traffic: flows that are not labeled for those subnets are external to the cluster. Enabled by default on OpenShift Container Platform. 13.1.92. .spec.processor.subnetLabels.customLabels Description customLabels allows to customize subnets and IPs labelling, such as to identify cluster-external workloads or web services. If you enable openShiftAutoDetect , customLabels can override the detected subnets in case they overlap. Type array 13.1.93. .spec.processor.subnetLabels.customLabels[] Description SubnetLabel allows to label subnets and IPs, such as to identify cluster-external workloads or web services. Type object Required cidrs name Property Type Description cidrs array (string) List of CIDRs, such as ["1.2.3.4/32"] . name string Label name, used to flag matching flows. 13.1.94. .spec.prometheus Description prometheus defines Prometheus settings, such as querier configuration used to fetch metrics from the Console plugin. Type object Property Type Description querier object Prometheus querying configuration, such as client settings, used in the Console plugin. 
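To illustrate the subnetLabels fields described in 13.1.91 through 13.1.93, the following sketch patches an existing FlowCollector resource so that flows matching a custom CIDR are labeled. The resource name cluster, the label name, and the CIDR value are assumptions chosen for illustration only; adjust them to your environment.
# Minimal sketch: keep automatic subnet detection and add one custom label.
# Flows whose source or destination IP falls in 10.100.0.0/16 get
# SrcSubnetLabel or DstSubnetLabel set to "corporate-vpn".
# Note that a JSON merge patch replaces the whole customLabels list.
oc patch flowcollector cluster --type=merge -p '
{
  "spec": {
    "processor": {
      "subnetLabels": {
        "openShiftAutoDetect": true,
        "customLabels": [
          { "name": "corporate-vpn", "cidrs": ["10.100.0.0/16"] }
        ]
      }
    }
  }
}'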
13.1.95. .spec.prometheus.querier Description Prometheus querying configuration, such as client settings, used in the Console plugin. Type object Required mode Property Type Description enable boolean When enable is true , the Console plugin queries flow metrics from Prometheus instead of Loki whenever possible. It is enabled by default: set it to false to disable this feature. The Console plugin can use either Loki or Prometheus as a data source for metrics (see also spec.loki ), or both. Not all queries are transposable from Loki to Prometheus. Hence, if Loki is disabled, some features of the plugin are disabled as well, such as getting per-pod information or viewing raw flows. If both Prometheus and Loki are enabled, Prometheus takes precedence and Loki is used as a fallback for queries that Prometheus cannot handle. If they are both disabled, the Console plugin is not deployed. manual object Prometheus configuration for Manual mode. mode string mode must be set according to the type of Prometheus installation that stores Network Observability metrics: - Use Auto to try configuring automatically. In OpenShift Container Platform, it uses the Thanos querier from OpenShift Container Platform Cluster Monitoring - Use Manual for a manual setup timeout string timeout is the read timeout for console plugin queries to Prometheus. A timeout of zero means no timeout. 13.1.96. .spec.prometheus.querier.manual Description Prometheus configuration for Manual mode. Type object Property Type Description forwardUserToken boolean Set true to forward logged in user token in queries to Prometheus tls object TLS client configuration for Prometheus URL. url string url is the address of an existing Prometheus service to use for querying metrics. 13.1.97. .spec.prometheus.querier.manual.tls Description TLS client configuration for Prometheus URL. Type object Property Type Description caCert object caCert defines the reference of the certificate for the Certificate Authority. enable boolean Enable TLS insecureSkipVerify boolean insecureSkipVerify allows skipping client-side verification of the server certificate. If set to true , the caCert field is ignored. userCert object userCert defines the user certificate reference and is used for mTLS. When you use one-way TLS, you can ignore this property. 13.1.98. .spec.prometheus.querier.manual.tls.caCert Description caCert defines the reference of the certificate for the Certificate Authority. Type object Property Type Description certFile string certFile defines the path to the certificate file name within the config map or secret. certKey string certKey defines the path to the certificate private key file name within the config map or secret. Omit when the key is not necessary. name string Name of the config map or secret containing certificates. namespace string Namespace of the config map or secret containing certificates. If omitted, the default is to use the same namespace as where Network Observability is deployed. If the namespace is different, the config map or the secret is copied so that it can be mounted as required. type string Type for the certificate reference: configmap or secret . 13.1.99. .spec.prometheus.querier.manual.tls.userCert Description userCert defines the user certificate reference and is used for mTLS. When you use one-way TLS, you can ignore this property. Type object Property Type Description certFile string certFile defines the path to the certificate file name within the config map or secret.
certKey string certKey defines the path to the certificate private key file name within the config map or secret. Omit when the key is not necessary. name string Name of the config map or secret containing certificates. namespace string Namespace of the config map or secret containing certificates. If omitted, the default is to use the same namespace as where Network Observability is deployed. If the namespace is different, the config map or the secret is copied so that it can be mounted as required. type string Type for the certificate reference: configmap or secret .
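As a hedged example of the manual Prometheus querier settings described above, the following sketch configures the Console plugin to query an existing Prometheus endpoint over TLS. The resource name cluster, the URL, the timeout, and the secret name prom-ca are assumptions for illustration only, not values defined by this API reference.
# Sketch only: switch the querier to Manual mode and trust a CA stored in a secret.
# The CA secret is assumed to contain a key named "ca.crt".
oc patch flowcollector cluster --type=merge -p '
{
  "spec": {
    "prometheus": {
      "querier": {
        "enable": true,
        "mode": "Manual",
        "timeout": "30s",
        "manual": {
          "url": "https://prometheus.example.svc:9090",
          "forwardUserToken": false,
          "tls": {
            "enable": true,
            "insecureSkipVerify": false,
            "caCert": { "type": "secret", "name": "prom-ca", "certFile": "ca.crt" }
          }
        }
      }
    }
  }
}'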
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/network_observability/flowcollector-api
Chapter 2. Accessing the web console
Chapter 2. Accessing the web console The OpenShift Container Platform web console is a user interface accessible from a web browser. Developers can use the web console to visualize, browse, and manage the contents of projects. 2.1. Prerequisites JavaScript must be enabled to use the web console. For the best experience, use a web browser that supports WebSockets . Review the OpenShift Container Platform 4.x Tested Integrations page before you create the supporting infrastructure for your cluster. 2.2. Understanding and accessing the web console The web console runs as a pod on the control plane node. The static assets required to run the web console are served by the pod. After you install OpenShift Container Platform using the openshift-install create cluster command, you can find the web console URL and login credentials for the installed cluster in the CLI output of the installation program. For example: Example output INFO Install complete! INFO Run 'export KUBECONFIG=<your working directory>/auth/kubeconfig' to manage the cluster with 'oc', the OpenShift CLI. INFO The cluster is ready when 'oc login -u kubeadmin -p <provided>' succeeds (wait a few minutes). INFO Access the OpenShift web-console here: https://console-openshift-console.apps.demo1.openshift4-beta-abcorp.com INFO Login to the console with user: kubeadmin, password: <provided> Use those details to log in and access the web console. For existing clusters that you did not install, you can use oc whoami --show-console to see the web console URL. Important The dir parameter specifies the assets directory, which stores the manifest files, the ISO image, and the auth directory. The auth directory stores the kubeadmin-password and kubeconfig files. As a kubeadmin user, you can use the kubeconfig file to access the cluster with the following setting: export KUBECONFIG=<install_directory>/auth/kubeconfig . The kubeconfig is specific to the generated ISO image, so if the kubeconfig is set and the oc command fails, it is possible that the system did not boot with the generated ISO image. To perform debugging, during the bootstrap process, you can log in to the console as the core user by using the contents of the kubeadmin-password file. Additional resources Enabling feature sets using the web console
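Putting the commands mentioned above together, a brief sketch of retrieving the console URL from a terminal follows; <install_directory> is a placeholder for your actual assets directory.
# Point oc at the kubeconfig generated by the installation program,
# then print the web console URL for the cluster.
export KUBECONFIG=<install_directory>/auth/kubeconfig
oc whoami --show-console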
[ "INFO Install complete! INFO Run 'export KUBECONFIG=<your working directory>/auth/kubeconfig' to manage the cluster with 'oc', the OpenShift CLI. INFO The cluster is ready when 'oc login -u kubeadmin -p <provided>' succeeds (wait a few minutes). INFO Access the OpenShift web-console here: https://console-openshift-console.apps.demo1.openshift4-beta-abcorp.com INFO Login to the console with user: kubeadmin, password: <provided>" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/web_console/web-console
Part III. Advanced Clair configuration
Part III. Advanced Clair configuration Use this section to configure advanced Clair features.
null
https://docs.redhat.com/en/documentation/red_hat_quay/3.13/html/vulnerability_reporting_with_clair_on_red_hat_quay/advanced-clair-configuration
Appendix A. Red Hat Customer Portal Labs Relevant to Networking
Appendix A. Red Hat Customer Portal Labs Relevant to Networking Red Hat Customer Portal Labs are tools designed to help you improve performance, troubleshoot issues, identify security problems, and optimize configuration. This appendix provides an overview of Red Hat Customer Portal Labs relevant to networking. All Red Hat Customer Portal Labs are available at https://access.redhat.com/labs/ . Bridge Configuration The Bridge Configuration is designed to configure a bridged network interface for applications such as KVM using Red Hat Enterprise Linux 5.4 or later. Network Bonding Helper The Network Bonding Helper allows administrators to bond multiple Network Interface Controllers together into a single channel using the bonding kernel module and the bonding network interface. Use the Network Bonding Helper to enable two or more network interfaces to act as one bonding interface. Packet capture syntax generator The Packet capture syntax generator helps you to capture network packets. Use the Packet capture syntax generator to generate the tcpdump command that selects an interface and then prints information to the console. You need root access to enter the command.
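For reference, a command generated by the Packet capture syntax generator typically resembles the following sketch. The interface name and host address are assumptions; substitute the values that the lab produces for your case.
# Capture traffic on one interface and print a summary of each packet to the console.
# Run as root. eth0 and 192.0.2.10 are placeholder values.
tcpdump -i eth0 -nn -s 0 host 192.0.2.10
# Use "-w /tmp/capture.pcap" instead to save the packets to a file for later analysis.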
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/networking_guide/appe-customer-portal-labs
Part II. Designing a decision service using PMML models
Part II. Designing a decision service using PMML models As a business rules developer, you can use Predictive Model Markup Language (PMML) to define statistical or data-mining models that you can integrate with your decision services in Red Hat Decision Manager. Red Hat Decision Manager includes consumer conformance support of PMML 4.2.1 for Regression, Scorecard, Tree, and Mining models. Red Hat Decision Manager does not include a built-in PMML model editor, but you can use an XML or PMML-specific authoring tool to create PMML models and then integrate them with your Red Hat Decision Manager projects. For more information about PMML, see the DMG PMML specification . Note You can also design your decision service using Decision Model and Notation (DMN) models and include your PMML models as part of your DMN service. For information about DMN support in Red Hat Decision Manager 7.13, see the following resources: Getting started with decision services (step-by-step tutorial with a DMN decision service example) Designing a decision service using DMN models (overview of DMN support and capabilities in Red Hat Decision Manager)
null
https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/developing_decision_services_in_red_hat_decision_manager/assembly-pmml-models
E.2.16. /proc/locks
E.2.16. /proc/locks This file displays the files currently locked by the kernel. The contents of this file contain internal kernel debugging data and can vary tremendously, depending on the use of the system. A sample /proc/locks file for a lightly loaded system looks similar to the following: Each lock has its own line, which starts with a unique number. The second column refers to the class of lock used, with FLOCK signifying the older-style UNIX file locks from a flock system call and POSIX representing the newer POSIX locks from the lockf system call. The third column can have two values: ADVISORY or MANDATORY . ADVISORY means that the lock does not prevent other people from accessing the data; it only prevents other attempts to lock it. MANDATORY means that no other access to the data is permitted while the lock is held. The fourth column reveals whether the lock is allowing the holder READ or WRITE access to the file. The fifth column shows the ID of the process holding the lock. The sixth column shows the ID of the file being locked, in the format of MAJOR-DEVICE : MINOR-DEVICE : INODE-NUMBER . The seventh and eighth columns show the start and end of the file's locked region.
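To observe one of these entries yourself, you can hold a lock with the flock utility while reading /proc/locks; a small sketch follows, where the lock file path is an arbitrary placeholder.
# Hold an exclusive flock on a scratch file and show the resulting FLOCK entry.
# /tmp/demo.lock is created if it does not already exist.
flock /tmp/demo.lock -c 'grep FLOCK /proc/locks'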
[ "1: POSIX ADVISORY WRITE 3568 fd:00:2531452 0 EOF 2: FLOCK ADVISORY WRITE 3517 fd:00:2531448 0 EOF 3: POSIX ADVISORY WRITE 3452 fd:00:2531442 0 EOF 4: POSIX ADVISORY WRITE 3443 fd:00:2531440 0 EOF 5: POSIX ADVISORY WRITE 3326 fd:00:2531430 0 EOF 6: POSIX ADVISORY WRITE 3175 fd:00:2531425 0 EOF 7: POSIX ADVISORY WRITE 3056 fd:00:2548663 0 EOF" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/s2-proc-locks
10.9. Choose an Installation Boot Method
10.9. Choose an Installation Boot Method You can use several methods to boot the Red Hat Enterprise Linux 7 installation program. The method you choose depends upon your installation media. Note Installation media must remain mounted throughout installation, including during execution of the %post section of a kickstart file. Full installation DVD or USB drive You can create bootable media from the full installation DVD ISO image. In this case, a single DVD or USB drive can be used to complete the entire installation - it will serve both as a boot device and as an installation source for installing software packages. See Chapter 3, Making Media for instructions on how to make a full installation DVD or USB drive. Minimal boot CD, DVD or USB Flash Drive A minimal boot CD, DVD or USB flash drive is created using a small ISO image, which only contains data necessary to boot the system and start the installation. If you use this boot media, you will need an additional installation source from which packages will be installed. See Chapter 3, Making Media for instructions on making boot CDs, DVDs and USB flash drives. PXE Server A preboot execution environment (PXE) server allows the installation program to boot over the network. After you boot the system, you complete the installation from a different installation source, such as a local hard drive or a location on a network. For more information on PXE servers, see Chapter 24, Preparing for a Network Installation .
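As a sketch of how the boot media described above is typically written to a USB drive from an ISO image (the ISO file name and the target device node are assumptions; verify the device carefully, because its contents are overwritten):
# Write the installation ISO to a USB drive as root.
# Replace /dev/sdX with the device node of the USB drive; all data on it is destroyed.
dd if=rhel-server-7.9-x86_64-dvd.iso of=/dev/sdX bs=512k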
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/installation_guide/sect-installation-planning-boot-method-ppc
Chapter 2. Technology overview
Chapter 2. Technology overview We offer a solution for running Red Hat OpenShift Container Platform 4.12 on Red Hat OpenStack Platform 16. Our solution deploys Red Hat OpenShift Container Platform 4.12 to physical servers that run Red Hat OpenStack Platform 16.2. We use Red Hat OpenStack Platform director to perform the initial OpenStack installation and Day 2 operations. 2.1. Relationship between OpenShift and OpenStack The relationship between OpenStack and OpenShift is complementary. OpenStack exposes resources through an application programming interface (API) and OpenShift requests them. OpenStack provides OpenShift with compute, storage, and networking infrastructure, plus additional resources, such as self-service load balancers and encryption. OpenShift runs its containerized applications on the infrastructure provisioned by OpenStack. The products are tightly integrated. OpenShift can consume OpenStack resources on demand and without user intervention. 2.1.1. Red Hat Enterprise Linux CoreOS (RHCOS) Beginning with OpenShift 4, OpenShift nodes now run on Red Hat Enterprise Linux (RHEL) CoreOS (RHCOS). RHEL CoreOS combines the ease of over-the-air updates from Container Linux (formerly known as CoreOS) with the Red Hat Enterprise Linux kernel to deliver a more secure, easily managed container host. In an installer-provisioned infrastructure based deployment, RHCOS is the supported operating system for all the OpenShift Container Platform nodes and is used by default for workers and controllers. It is also an OpenShift requirement that the controller nodes run RHCOS. Currently, RHCOS is only used with OpenShift, it is not provided for use as an independent operating system. For more information see Red Hat Enterprise Linux (RHEL) CoreOS . 2.2. Solution overview Although there are many available options for placing OpenShift on OpenStack, we provide one validated solution to ensure clarity, simplicity, and supportability. The Red Hat Tested Solution represents the components and integrations of this solution, which has been tested by QE and is a starting point for all enterprise deployments. Figure 2.1. Diagram of the Red Hat solution We made these key choices to complete the installation and setup shown in the Diagram of the Red Hat solution : Installation OpenStack is installed using director. OpenStack is installed using external TLS encryption. OpenShift is installed using installer-provisioned infrastructure (IPI). OpenShift is installed from the director host using a non-privileged OpenStack tenant. Storage OpenStack deploys Fileshare-as-a-Service (manila) usable with RWX container workloads. OpenStack deploys the Block Storage service (cinder) usable with RWO container workloads. OpenStack uses local storage for Compute (nova) ephemeral storage. OpenStack uses Red Hat Ceph Storage (RHCS) for Image (glance), Block Storage (cinder), Object (swift), and optionally Compute (nova). OpenStack uses RHCS with Ganesha for Fileshare-as-a-Service (manila). OpenShift uses a Container Storage Interface (CSI) driver to provide access to manila. OpenShift uses Object storage for the internal registry. Compute OpenShift control-plane and worker VMs are deployed using nova availability zones to provide high availability. Networking OpenStack uses Open Virtual Network (OVN) for its SDN. OpenShift networking is managed by OVN-Kubernetes. OpenStack deploys Load-Balancing-as-a-Service (Octavia) for OpenShift load balancing. OpenShift uses the Amphora driver for Octavia to provide load balancing.
null
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/reference_architecture_for_deploying_red_hat_openshift_container_platform_on_red_hat_openstack_platform/technology-overview
Chapter 8. Workflow job templates
Chapter 8. Workflow job templates You can create both Job templates and Workflow job templates from Automation Execution Templates . For Job templates, see Job templates . A workflow job template links together a sequence of disparate resources that tracks the full set of jobs that were part of the release process as a single unit. These resources include the following: Job templates Workflow job templates Project syncs Inventory source syncs The Templates page shows the workflow and job templates that are currently available. The default view is collapsed (Compact), showing the template name, template type, and the statuses of the jobs that have run by using that template. You can click the arrow to each entry to expand and view more information. This list is sorted alphabetically by name, but you can sort by other criteria, or search by various fields and attributes of a template. From this screen you can launch , edit , and copy a workflow job template. Only workflow templates have the workflow visualizer icon as a shortcut for accessing the workflow editor. Note Workflow templates can be used as building blocks for another workflow template. You can enable Prompt on Launch by setting up several settings in a workflow template, which you can edit at the workflow job template level. These do not affect the values assigned at the individual workflow template level. For further instructions, see the Workflow visualizer section. 8.1. Creating a workflow job template To create a new workflow job template, complete the following steps: Important If you set a limit to a workflow template, it is not passed down to the job template unless you check Prompt on launch for the limit. This can lead to playbook failures if the limit is mandatory for the playbook that you are running. Procedure From the navigation panel, select Automation Execution Templates . On the Templates list view, select Create workflow job template from the Create template list. Enter the appropriate details in the following fields: Note If a field has the Prompt on launch checkbox selected, either launching the workflow template, or using the workflow template within another workflow template, you are prompted for the value for that field. Most prompted values override any values set in the job template. Exceptions are noted in the following table. Field Options Prompt on Launch Name Enter a name for the job. N/A Description Enter an arbitrary description as appropriate (optional). N/A Organization Choose the organization to use with this template from the organizations available to the logged in user. N/A Inventory Optionally, select the inventory to use with this template from the inventories available to the logged in user. Yes Limit A host pattern to further constrain the list of hosts managed or affected by the playbook. You can separate many patterns by colons (:). As with core Ansible: a:b means "in group a or b" a:b:&c means "in a or b but must be in c" a:!b means "in a, and definitely not in b" For more information see, Patterns: targeting hosts and groups in the Ansible documentation. Yes If selected, even if a default value is supplied, you are prompted upon launch to select a limit. Source control branch Select a branch for the workflow. This branch is applied to all workflow job template nodes that prompt for a branch. Yes Labels Optionally, supply labels that describe this workflow job template, such as dev or test . Use labels to group and filter workflow job templates and completed jobs in the display. 
Labels are created when they are added to the workflow template. Labels are associated to a single Organization using the Project that is provided in the workflow template. Members of the Organization can create labels on a workflow template if they have edit permissions (such as the admin role). Once you save the job template, the labels appear in the workflow job template Details view. Labels are only applied to the workflow templates not the job template nodes that are used in the workflow. Select beside a label to remove it. When a label is removed, it is no longer associated with that particular Job or Job Template, but it remains associated with any other jobs that reference it. Yes If selected, even if a default value is supplied, you are prompted when launching to supply additional labels, if needed. - You cannot delete existing labels, selecting only removes the newly added labels, not existing default labels. Job tags Type and select the Create drop-down to specify which parts of the playbook should run. For more information and examples see Tags in the Ansible documentation. Yes Skip tags Type and select the Create drop-down to specify certain tasks or parts of the playbook to skip. For more information and examples see Tags in the Ansible documentation. Yes Extra variables Pass extra command line variables to the playbook. This is the "-e" or "-extra-vars" command line parameter for ansible-playbook that is documented in the Ansible documentation at Controlling how Ansible behaves: precedence rules . - Give key or value pairs by using either YAML or JSON. These variables have a maximum value of precedence and overrides other variables specified elsewhere. The following is an example value: git_branch: production release_version: 1.5 Yes If you want to be able to specify extra_vars on a schedule, you must select Prompt on launch for Extra variables on the workflow job template, or enable a survey on the job template. Those answered survey questions become extra_vars . For more information about extra variables, see Extra Variables . Specify the following Options for launching this template, if necessary: Check Enable webhook to turn on the ability to interface with a predefined SCM system web service that is used to launch a workflow job template. GitHub and GitLab are the supported SCM systems. If you enable webhooks, other fields display, prompting for additional information: Webhook service : Select which service to listen for webhooks from. Webhook URL : Automatically populated with the URL for the webhook service to POST requests to. Webhook key : Generated shared secret to be used by the webhook service to sign payloads sent to automation controller. You must configure this in the settings on the webhook service so that webhooks from this service are accepted in automation controller. For additional information about setting up webhooks, see Working with Webhooks . Check Enable concurrent jobs to allow simultaneous runs of this workflow. For more information, see Automation controller capacity determination and job impact . When you have completed configuring the workflow template, click Create workflow job template . Saving the template exits the workflow template page and the workflow visualizer opens where you can build a workflow. For more information, see the Workflow visualizer section. Otherwise, select one of these methods: Close the workflow visualizer to return to the Details tab of the newly saved template. 
There you can complete the following tasks: Review, edit, add permissions, notifications, schedules, and surveys View completed jobs Build a workflow template Click Launch template to start the workflow. Note Save the template before launching, or Launch template remains disabled. The Notifications tab is only present after you save the template. 8.2. Work with permissions Click the Team Access or User Access tab to review, grant, edit, and remove associated permissions for users along with team members. Click Add roles to create new permissions for this workflow template by following the prompts to assign them. 8.3. Work with notifications For information on working with notifications in workflow job templates, see Work with notifications . 8.4. View completed workflow jobs The Jobs tab provides the list of job templates that have run. Click the expand icon next to each job to view the details of each job. From this view, you can click the job ID, name of the workflow job and see its graphical representation. The following example shows the job details of a workflow job: The nodes are marked with labels to help you identify them. For more information, see the legend in the Workflow visualizer section. 8.5. Scheduling a workflow job template Select the Schedules tab to access the schedules for a particular workflow job template. For more information about scheduling a workflow job template run, see the Scheduling job templates section. If a workflow job template used in a nested workflow has a survey, or the Prompt on launch is selected for the inventory option, the PROMPT option displays next to the SAVE and CANCEL options on the schedule form. Click PROMPT to show an optional INVENTORY step where you can give or remove an inventory or skip this step without any changes. 8.6. Surveys in workflow job templates Workflows containing job types of Run or Check provide a way to set up surveys in the workflow job template creation or editing screens. For more information on job surveys, including how to create a survey and optional survey questions in workflow job templates, see the Surveys in job templates section. 8.7. Workflow visualizer The Workflow Visualizer provides a graphical way of linking together job templates, workflow templates, project syncs, and inventory syncs to build a workflow template. Before you build a workflow template, see the Workflows in automation controller section for considerations associated with various scenarios on parent, child, and sibling nodes. 8.7.1. Building a workflow You can set up any combination of two or more of the following node types to build a workflow: Template (Job Template or Workflow Job Template) Project Sync Inventory Sync Approval Each node is represented by a rectangle while the relationships and their associated edge types are represented by a line (or link) that connects them. Procedure To launch the workflow visualizer, use one of these methods: From the navigation panel, select Automation Execution Templates . Select a workflow template and click View workflow visualizer . From the Templates list view, click the icon next to a workflow job template. Click Add step to display a list of nodes to add to your workflow. From the Node type list, select the type of node that you want to add. If you select an Approval node, see Approval nodes for more information. Selecting a node provides the available valid options associated with it.
Note If you select a job template that does not have a default inventory when populating a workflow graph, the inventory of the parent workflow is used. Though a credential is not required in a job template, you cannot select a job template for your workflow if it has a credential that requires a password, unless the credential is replaced by a prompted credential. When you select a node type, the workflow begins to build, and you must specify the type of action to be taken for the selected node. This action is also referred to as edge type. If the node is a root node, the edge type defaults to Always and is non-editable. For subsequent nodes, you can select one of the following scenarios (edge type) to apply to each: Always run : Continue to execute regardless of success or failure. Run on success : After successful completion, execute the template. Run on fail : After failure, execute a different template. Select the behavior of the node if it is a convergent node from the Convergence field: Any is the default behavior, allowing any of the nodes to complete as specified, before triggering the converging node. If the status of one parent meets one of those run conditions, an any child node will run. An any node requires all nodes to complete, but only one node must complete with the expected outcome. Choose All to ensure that all nodes complete as specified, before converging and triggering the node. The purpose of all * nodes is to make sure that every parent meets its expected outcome to run the child node. The workflow checks to make sure every parent behaves as expected to run the child node. Otherwise, it will not run the child node. If selected, the node is labeled as ALL in the graphical view: Note If a node is a root node, or a node that does not have any nodes converging into it, setting the Convergence rule does not apply, as its behavior is dictated by the action that triggers it. If a job template used in the workflow has Prompt on launch selected for any of its parameters, a PROMPT option appears, enabling you to change those values at the node level. Use the wizard to change the values in each of the tabs and click Confirm in the Preview tab. If a workflow template used in the workflow has Prompt on launch selected for the inventory option, use the wizard to supply the inventory at the prompt. If the parent workflow has its own inventory, it overrides any inventory that is supplied here. Note For workflow job templates with required fields that prompt details, but do not have a default, you must give those values when creating a node before the SELECT option is enabled. The following two cases disable the SELECT option until a value is provided by the PROMPT option: When you select the Prompt on launch checkbox in a workflow job template, but do not give a default. When you create a survey question that is required but do not give a default answer. However, this is not the case with credentials. Credentials that require a password on launch are not permitted when creating a workflow node, because everything required to launch the node must be provided when the node is created. If you are prompted for credentials in a workflow job template, it is not possible to select a credential that requires a password in automation controller. You must also click SELECT when the prompt wizard closes, to apply the changes at that node. Otherwise, any changes you make revert back to the values set in the job template. When the node is created, it is labeled with its job type. 
A template that is associated with each workflow node runs based on the selected run scenario as it proceeds. Click Legend to display the legend for each run scenario and their job types. Hover over a node to edit the node, add step and link, or delete the selected node: When you have added or edited a node, click Finish to save any modifications and render it on the graphical view. For possible ways to build your workflow, see Building nodes scenarios . When you have built your workflow job template, click Save to save your entire workflow template and return to the new workflow job template details page. Important Clicking Close does not save your work, but instead, it closes the entire Workflow Visualizer so that you have to start again. 8.7.2. Approval nodes Choosing an Approval node requires your intervention to advance a workflow. This functions as a means to pause the workflow in between playbooks so that you can give approval to continue on to the playbook in the workflow. This gives the user a specified amount of time to intervene, but also enables you to continue as quickly as possible without having to wait on another trigger. The default for the timeout is none, but you can specify the length of time before the request expires and is automatically denied. After you select and supply the information for the approval node, it displays on the graphical view with a pause icon beside it. The approver is anyone who meets the following criteria: A user that can execute the workflow job template containing the approval nodes. A user who has organization administrator or above privileges (for the organization associated with that workflow job template). A user who has the Approve permission explicitly assigned to them within that specific workflow job template. If pending approval nodes are not approved within the specified time limit (if an expiration was assigned) or they are denied, then they are marked as "timed out" or "failed", and move on to the "on fail node" or "always node". If approved, the "on success" path is taken. If you try to POST in the API to a node that has already been approved, denied or timed out, an error message notifies you that this action is redundant, and no further steps are taken. The following table shows the various levels of permissions allowed on approval workflows: 8.7.3. Building nodes scenarios Learn how to manage nodes in the following scenarios. Procedure Click the ( ) icon on the parent node and Add step and link to add a sibling node: Click Add step or Start ( ) and Add step , to add a root node to depict a split scenario. At any node where you want to create a split scenario, hover over the node from which the split scenario begins and click the plus ( ) icon on the parent node and Add step and link . This adds multiple nodes from the same parent node, creating sibling nodes. Refer to the key by clicking Legend to identify the meaning of the symbols and colors associated with the graphical depiction. Note If you remove a node that has a follow-on node attached to it in a workflow with a set of sibling nodes that has varying edge types, the attached node automatically joins the set of sibling nodes and retains its edge type: 8.7.4. Editing a node Procedure Edit a node by using one of these methods: If you want to edit a node, click the icon of the node. The pane displays the current selections, click Edit to change these. Make your changes and click Finish to apply them to the graphical view. 
To edit the edge type for an existing link, ( Run on success , Run on fail , Run always ), click ( ) on the existing status. To remove a link, click ( ) for the link and click Remove link . This option only appears in the pane if the target or child node has more than one parent. All nodes must be linked to at least one other node at all times so you must create a new link before removing an old one. Edit the view of the workflow diagram by using one of these methods: Click the examine icon ( ) to zoom in, the reduce icon ( ) to zoom out, the expand icon ( ) to fit to screen or the reset icon ( ) to reposition the view. Drag the workflow diagram to reposition it on the screen or use the scroll on your mouse to zoom. 8.8. Launching a workflow job template Procedure Launch a workflow job template by using one of these methods: From the navigation panel, select Automation Execution Templates and click the icon to the job template. Click Launch template in the Details tab of the workflow job template that you want to launch. Variables added for a workflow job template are automatically added in automation controller when launching, along with any extra variables set in the workflow job template and survey. Events related to approvals on workflows are displayed in the activity stream ( ) with detailed information about the approval requests, if any. 8.9. Copying a workflow job template With automation controller you can copy a workflow job template. When you copy a workflow job template, it does not copy any associated schedule, notifications, or permissions. Schedules and notifications must be recreated by the user or system administrator creating the copy of the workflow template. The user copying the workflow template is granted the administrator permission, but no permissions are assigned (copied) to the workflow template. Procedure Open the workflow job template that you want to copy by using one of these methods: From the navigation panel, select Automation Execution Templates . In the workflow job template Details view, click to the desired template. Click the copy ( ) icon. The new template with the name of the template from which you copied and a timestamp displays in the list of templates. Select the copied template and click Edit template . Replace the contents of the Name field with a new name, and give or change the entries in the other fields to complete this template. Click Save job template . Note If a resource has a related resource that you do not have the right level of permission to, you cannot copy the resource. For example, in the case where a project uses a credential that a current user only has Read access. However, for a workflow job template, if any of its nodes use an unauthorized job template, inventory, or credential, the workflow template can still be copied. But in the copied workflow job template, the corresponding fields in the workflow template node are absent. 8.10. Workflow job template extra variables For more information see the Extra variables section.
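In addition to launching from the user interface as described above, the controller REST API can launch a workflow job template directly. The following sketch is illustrative only: the controller host name, template ID, credentials, and extra variables are assumptions, and extra variables are accepted only when the template prompts for them or defines them in a survey.
# Launch workflow job template 42 and pass extra variables in the request body.
# Host, ID, user name, and password are placeholder values.
curl -k -u admin:password \
  -X POST \
  -H "Content-Type: application/json" \
  -d '{"extra_vars": {"git_branch": "production", "release_version": "1.5"}}' \
  https://controller.example.com/api/v2/workflow_job_templates/42/launch/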
null
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/using_automation_execution/controller-workflow-job-templates
Chapter 3. New Features
Chapter 3. New Features This chapter documents new features and major enhancements introduced in Red Hat Enterprise Linux 7.9. 3.1. Authentication and Interoperability The Certificate Profiles extension no longer has a maximum number of policies per certificate Previously, administrators could not add more than 20 policies to a certificate because of a hardcoded limit within the Certificate Profiles extension. This update removes the restriction, so you can add an unlimited number of policies to a certificate. In addition, the extension requires at least one policy, otherwise the pkiconsole interface shows an error. If you modify the profile, the extension creates one empty policy. For example: (BZ#1768718) SSSD rebased to version 1.16.5 The sssd packages have been upgraded to upstream version 1.16.5, which provides a number of bug fixes and enhancements over the previous version. ( BZ#1796352 ) 3.2. Clustering pacemaker rebased to version 1.1.23 The Pacemaker cluster resource manager has been upgraded to upstream version 1.1.23, which provides a number of bug fixes. ( BZ#1792492 ) 3.3. Compiler and Tools The per-thread metrics are now available for historical analysis Optionally, enable logging of the per-thread and per-process performance metric values in the Performance Co-Pilot (PCP) using the pcp-zeroconf package and the pmieconf utility. Previously, only the per-process metric values were logged by pmlogger through the pcp-zeroconf package, but some analysis situations also require per-thread values. As a result, the per-thread metrics are now available for historical analysis, after executing the following command: ( BZ#1775373 ) 3.4. Desktop FreeRDP has been updated to 2.1.1 This release updates the FreeRDP implementation of the Remote Desktop Protocol (RDP) from version 2.0.0 to 2.1.1. FreeRDP 2.1.1 supports new RDP options for the current Microsoft Windows terminal server version and fixes several security issues. For detailed information about FreeRDP 2.1.1, see the upstream release notes: https://github.com/FreeRDP/FreeRDP/blob/2.1.1/ChangeLog . ( BZ#1834286 ) 3.5. Kernel Kernel version in RHEL 7.9 Red Hat Enterprise Linux 7.9 is distributed with the kernel version 3.10.0-1160. See also Important Changes to External Kernel Parameters and Device Drivers . ( BZ#1801759 ) A new kernel parameter: page_owner The page owner tracking is a new functionality, which enables users to observe the kernel memory consumption at the page allocator level. Users can employ this functionality to debug kernel memory leaks, or to discover the kernel modules that consume excessive amounts of memory. To enable the feature, add the page_owner=on parameter to the kernel command-line. For more information on how to set the kernel command-line parameters, see Configuring kernel command-line parameters on the Customer Portal. Warning Regardless of the page_owner parameter setting ( on or off ) on the kernel command-line, usage of the page owner tracking adds approximately 2.14% additional memory requirement on RHEL 7.9 systems (impacts the kernel, VM, or cgroup ). For further details on this topic, see the Why Kernel-3.10.0-1160.el7 consumes double amount of memory compared to kernel-3.10.0-1127.el7? Solution. For more information about important changes to kernel parameters, see the New kernel parameters section. (BZ#1781726) EDAC driver support is now added to Intel ICX systems This update adds the Error Detection and Correction (EDAC) driver to Intel ICX systems.
As a result, memory errors can be detected on these systems and reported to the EDAC subsystem. (BZ#1514705) Intel(R) Omni-Path Architecture (OPA) Host Software Intel(R) Omni-Path Architecture (OPA) host software is fully supported in Red Hat Enterprise Linux 7.9. Intel OPA provides Host Fabric Interface (HFI) hardware with initialization and setup for high performance data transfers (high bandwidth, high message rate, low latency) between compute and I/O nodes in a clustered environment. ( BZ#1855010 ) The Mellanox ConnectX-6 Dx network adapter is now fully supported This enhancement adds the PCI IDs of the Mellanox ConnectX-6 Dx network adapter to the mlx5_core driver. On hosts that use this adapter, RHEL loads the mlx5_core driver automatically. This feature, previously available as a technology preview, is now fully supported in RHEL 7.9. (BZ#1829777) 3.6. Real-Time Kernel The kernel-rt source tree now matches the latest RHEL 7 tree The kernel-rt sources have been updated to use the latest RHEL kernel source tree, which provides a number of bug fixes and enhancements over the version. (BZ#1790643) 3.7. Networking Configuring unbound to run inside chroot for systems without SELinux For systems with SELinux enabled and in enforcing mode, SELinux provides significant protection and limits what the unbound service can access. If you cannot configure SELinux in enforcing mode, and you want to increase the protection of the unbound domain name server, use the chroot utility for jailing unbound into a limited chroot environment. Note that the protection by chroot is lower in comparison to SELinux enforcing mode. For configuring unbound to run inside chroot , prepare your environment as described in the following support article Running unbound in chroot . ( BZ#2121623 ) 3.8. Red Hat Enterprise Linux System Roles rhel-system-roles updated The rhel-system-roles package has been updated to provide multiple bug fixes and enhancements. Notable changes include: Support for 802.1X authentication with EAP-TLS was added for the network RHEL System Role when using the NetworkManager provider. As a result, now customers can configure their machines to use 802.1X authentication with EAP-TLS using the network RHEL System Role instead of having to use the nmcli command-line utility. The network RHEL System Role tries to modify a link or network attributes without disrupting the connectivity, when possible. The logging in network module logs has been fixed so that informative messages are no longer printed as warnings, but as debugging information. The network RHEL System Role now uses NetworkManagers capability to revert changes, if an error occurs, when applying the configuration to avoid partial changes. ( BZ#1767177 ) 3.9. Security SCAP Security Guide now provides a profile aligned with the CIS RHEL 7 Benchmark v2.2.0 With this update, the scap-security-guide packages provide a profile aligned with the CIS Red Hat Enterprise Linux 7 Benchmark v2.2.0. The profile enables you to harden the configuration of the system using the guidelines by the Center for Internet Security (CIS). As a result, you can configure and automate compliance of your RHEL 7 systems with CIS by using the CIS Ansible Playbook and the CIS SCAP profile. Note that the rpm_verify_permissions rule in the CIS profile does not work correctly. See the known issue description rpm_verify_permissions fails in the CIS profile . 
( BZ#1821633 ) SCAP Security Guide now correctly disables services With this update, the SCAP Security Guide (SSG) profiles correctly disable and mask services that should not be started. This guarantees that disabled services are not inadvertently started as a dependency of another service. Before this change, the SSG profiles such as the U.S. Government Commercial Cloud Services (C2S) profile only disabled the service. As a result, services disabled by an SSG profile cannot be started unless you unmask them first. ( BZ#1791583 ) The RHEL 7 STIG security profile updated to version V3R1 With the RHBA-2020:5451 advisory, the DISA STIG for Red Hat Enterprise Linux 7 profile in the SCAP Security Guide has been updated to the latest version V3R1 . This update adds more coverage and fixes reference problems. The profile is now also more stable and better aligns with the RHEL7 STIG benchmark provided by the Defense Information Systems Agency (DISA). You should use only the current version of this profile because the older versions of this profile are no longer valid. The OVAL checks for several rules have changed, and scans using the V3R1 version will fail for systems that were hardened using older versions of SCAP Security Guide. You can fix the rules automatically by running the remediation with the new version of SCAP Security Guide. Warning Automatic remediation might render the system non-functional. Run the remediation in a test environment first. The following rules have been changed: CCE-80224-9 The default value of this SSHD configuration has changed from delayed to yes . You must now provide a value according to recommendations. Check the rule description for information about fixing this problem or run the remediation to fix it automatically. CCE-80393-2 xccdf_org.ssgproject.content_rule_audit_rules_execution_chcon CCE-80394-0 xccdf_org.ssgproject.content_rule_audit_rules_execution_restorecon CCE-80391-6 xccdf_org.ssgproject.content_rule_audit_rules_execution_semanage CCE-80660-4 xccdf_org.ssgproject.content_rule_audit_rules_execution_setfiles CCE-80392-4 xccdf_org.ssgproject.content_rule_audit_rules_execution_setsebool CCE-82362-5 xccdf_org.ssgproject.content_rule_audit_rules_execution_seunshare CCE-80398-1 xccdf_org.ssgproject.content_rule_audit_rules_privileged_commands_chage CCE-80404-7 xccdf_org.ssgproject.content_rule_audit_rules_privileged_commands_chsh CCE-80410-4 xccdf_org.ssgproject.content_rule_audit_rules_privileged_commands_crontab CCE-80397-3 xccdf_org.ssgproject.content_rule_audit_rules_privileged_commands_gpasswd CCE-80403-9 xccdf_org.ssgproject.content_rule_audit_rules_privileged_commands_newgrp CCE-80411-2 xccdf_org.ssgproject.content_rule_audit_rules_privileged_commands_pam_timestamp_check CCE-27437-3 xccdf_org.ssgproject.content_rule_audit_rules_privileged_commands CCE-80395-7 xccdf_org.ssgproject.content_rule_audit_rules_privileged_commands_passwd CCE-80406-2 xccdf_org.ssgproject.content_rule_audit_rules_privileged_commands_postdrop CCE-80407-0 xccdf_org.ssgproject.content_rule_audit_rules_privileged_commands_postqueue CCE-80408-8 xccdf_org.ssgproject.content_rule_audit_rules_privileged_commands_ssh_keysign CCE-80402-1 xccdf_org.ssgproject.content_rule_audit_rules_privileged_commands_sudoedit CCE-80401-3 xccdf_org.ssgproject.content_rule_audit_rules_privileged_commands_sudo CCE-80400-5 xccdf_org.ssgproject.content_rule_audit_rules_privileged_commands_su CCE-80405-4 xccdf_org.ssgproject.content_rule_audit_rules_privileged_commands_umount CCE-80396-5 
xccdf_org.ssgproject.content_rule_audit_rules_privileged_commands_unix_chkpwd CCE-80399-9 xccdf_org.ssgproject.content_rule_audit_rules_privileged_commands_userhelper ( BZ#1665233 ) Profiles for DISA STIG version v3r3 The Defense Information Systems Agency (DISA) has published an updated version of the Security Technical Implementation Guide (STIG) for RHEL 7 version 3, release 3. The update, which is available with the RHBA-2021:2803 advisory, does the following: Aligns all rules within the existing xccdf_org.ssgproject.content_profile_stig profile with the latest STIG release. Adds a new profile xccdf_org.ssgproject.content_profile_stig_gui for systems with a graphical user interface (GUI). ( BZ#1958789 , BZ#1970131 ) scap-security-guide now provides an ANSSI-BP-028 High hardening level profile With the release of the RHBA-2021:2803 advisory, the scap-security-guide packages provide an updated profile for ANSSI-BP-028 at the High hardening level. This addition completes the availability of profiles for all ANSSI-BP-028 v1.2 hardening levels. Using the updated profile, you can configure the system to comply with the recommendations from the French National Security Agency (ANSSI) for GNU/Linux Systems at the High hardening level. As a result, you can configure and automate compliance of your RHEL 7 systems according to your required ANSSI hardening level by using the ANSSI Ansible Playbooks and the ANSSI SCAP profiles. The Draft ANSSI High profile provided with the previous versions has been aligned with ANSSI DAT-NT-028. Although the profile names and versions have changed, the IDs of the ANSSI profiles such as xccdf_org.ssgproject.content_profile_anssi_nt28_high remain the same to ensure backward compatibility. WARNING Automatic remediation might render the system non-functional. Red Hat recommends running the remediation in a test environment first. ( BZ#1955180 ) The RHEL 7 STIG profile is now better aligned with the DISA STIG content The DISA STIG for Red Hat Enterprise Linux 7 profile ( xccdf_org.ssgproject.content_profile_stig ) available in the scap-security-guide (SSG) package can be used to evaluate systems according to the Security Technical Implementation Guides (STIG) by the Defense Information Systems Agency (DISA). You can remediate your systems by using the content in SSG, but you might need to evaluate them using DISA STIG automated content. With the release of the RHBA-2022:6576 advisory, the DISA STIG RHEL 7 profile is better aligned with DISA's content. This leads to fewer findings against DISA content after SSG remediation. Note that the evaluations of the following rules still diverge: SV-204511r603261_rule - CCE-80539-0 ( auditd_audispd_disk_full_action ) SV-204597r792834_rule - CCE-27485-2 ( file_permissions_sshd_private_key ) Also, rule SV-204405r603261_rule from DISA's RHEL 7 STIG is not covered in the SSG RHEL 7 STIG profiles. (BZ#1967950) A warning message to configure the Audit log buffer for large systems has been added to the SCAP rule audit_rules_for_ospp The SCAP rule xccdf_org.ssgproject.content_rule_audit_rules_for_ospp now displays a performance warning on large systems where the Audit log buffer configured by this rule might be too small, and can override the custom value. The warning also describes the process to configure a larger Audit log buffer. With the release of the RHBA-2022:6576 advisory, you can keep large systems compliant and correctly set their Audit log buffer. ( BZ#1993822 ) 3.10. 
Servers and Services New package: compat-unixODBC234 for SAP The new compat-unixODBC234 package provides version 2.3.4 of unixODBC , a framework that supports accessing databases through the ODBC protocol. This new package is available in the RHEL 7 for SAP Solutions sap-hana repository to enable streaming backup of an SAP HANA database using the SAP backint interface. For more information, see Overview of the Red Hat Enterprise Linux for SAP Solutions subscription . The compat-unixODBC234 package conflicts with the base RHEL 7 unixODBC package. Therefore, uninstall unixODBC prior to installing compat-unixODBC234 . This package is also available for Red Hat Enterprise Linux 7.4 Update Services for SAP Solutions, Red Hat Enterprise Linux 7.6 Extended Update Support, and Red Hat Enterprise Linux 7.7 Extended Update Support through the RHEA-2020:2178 advisory. See also The compat-unixODBC234 package for SAP requires a symlink to load the unixODBC library . (BZ#1790655) MariaDB rebased to version 5.5.68 With RHEL 7.9, the MariaDB database server has been updated to version 5.5.68. This release provides multiple security and bug fixes from the recent upstream maintenance releases. ( BZ#1834835 ) 3.11. Storage Support for Data Integrity Field/Data Integrity Extension (DIF/DIX) DIF/DIX is supported on configurations where the hardware vendor has qualified it and provides full support for the particular host bus adapter (HBA) and storage array configuration on RHEL. DIF/DIX is not supported on the following configurations: It is not supported for use on the boot device. It is not supported on virtualized guests. Red Hat does not support using the Automatic Storage Management library (ASMLib) when DIF/DIX is enabled. DIF/DIX is enabled or disabled at the storage device, which involves various layers up to (and including) the application. The method for activating the DIF on storage devices is device-dependent. For further information on the DIF/DIX feature, see What is DIF/DIX . (BZ#1649493) 3.12. Atomic Host and Containers Red Hat Enterprise Linux Atomic Host is a secure, lightweight, and minimal-footprint operating system optimized to run Linux containers. Important Red Hat Enterprise Linux Atomic Host is retired as of August 6, 2020 and active support is no longer provided. 3.13. Red Hat Software Collections Red Hat Software Collections (RHSCL) is a Red Hat content set that provides a set of dynamic programming languages, database servers, and related packages that you can install and use on all supported releases of Red Hat Enterprise Linux 7 on AMD64 and Intel 64 architectures, IBM Z, and IBM POWER, little endian. Red Hat Developer Toolset is designed for developers working on the Red Hat Enterprise Linux platform. It provides current versions of the GNU Compiler Collection, GNU Debugger, and other development, debugging, and performance monitoring tools. Red Hat Developer Toolset is included as a separate Software Collection. Dynamic languages, database servers, and other tools distributed with Red Hat Software Collections do not replace the default system tools provided with Red Hat Enterprise Linux, nor are they used in preference to these tools. Red Hat Software Collections uses an alternative packaging mechanism based on the scl utility to provide a parallel set of packages. This set enables optional use of alternative package versions on Red Hat Enterprise Linux. By using the scl utility, users can choose which package version they want to run at any time. 
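For illustration, the following command shows the general scl invocation pattern. The collection name devtoolset-9 is an assumption used only as an example; replace it with a collection that is actually installed on your system:
scl enable devtoolset-9 'gcc --version'
The command runs gcc from the Software Collection in a subshell, while the base Red Hat Enterprise Linux 7 tools remain unchanged.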
Important Red Hat Software Collections has a shorter life cycle and support term than Red Hat Enterprise Linux. For more information, see the Red Hat Software Collections Product Life Cycle . See the Red Hat Software Collections documentation for the components included in the set, system requirements, known problems, usage, and specifics of individual Software Collections. See the Red Hat Developer Toolset documentation for more information about the components included in this Software Collection, installation, usage, known problems, and more.
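Returning to the page_owner kernel parameter described in the Kernel section earlier in this chapter, the following commands sketch one possible way to enable it persistently. The use of grubby here is illustrative only; verify the resulting boot entries on your system:
grubby --update-kernel=ALL --args="page_owner=on"
reboot
After the reboot, page owner tracking is active for the newly booted kernel.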
[ "Identifier: Certificate Policies: - 2.5.29.32 Critical: no Certificate Policies:", "pmieconf -c enable zeroconf.all_threads" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.9_release_notes/new_features
function::symline
function::symline Name function::symline - Return the line number of an address. Synopsis Arguments addr The address to translate. Description Returns the (approximate) line number of the given address, if known. If the line number cannot be found, the hex string representation of the address will be returned.
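As an illustration of how this function might be used, the following SystemTap one-liner prints the source line of a kernel probe point. The probe point kernel.function("vfs_read") is chosen only as an example, and kernel debuginfo is assumed to be installed:
stap -e 'probe kernel.function("vfs_read") { println(symline(addr())); exit() }'
If the line number cannot be resolved, the hex representation of the address is printed instead, as described above.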
[ "symline:string(addr:long)" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-symline
Chapter 18. Running Red Hat Process Automation Manager
Chapter 18. Running Red Hat Process Automation Manager Use this procedure to run Red Hat Process Automation Manager on Red Hat JBoss EAP in standalone mode. Prerequisites Red Hat Process Automation Manager is installed and configured. Note If you changed the default host ( localhost ) or the default port ( 9990 ), then before you run Red Hat Process Automation Manager, you must edit the business-central.war/WEB-INF/classes/datasource-management.properties and business-central.war/WEB-INF/classes/security-management.properties files as described in Solution 3519551 . Procedure In a terminal application, navigate to EAP_HOME /bin . Run the standalone configuration: On Linux or UNIX-based systems: ./standalone.sh -c standalone-full.xml On Windows: standalone.bat -c standalone-full.xml Note If you deployed Business Central without KIE Server, you can start Business Central with the standalone.sh script without specifying the standalone-full.xml file. In this case, ensure that you make any configuration changes to the standalone.xml file before starting Business Central. On Linux or UNIX-based systems: ./standalone.sh On Windows: standalone.bat In a web browser, open the URL localhost:8080/business-central . If you configured Red Hat Process Automation Manager to run from a domain name, replace localhost with the domain name, for example: http://www.example.com:8080/business-central Log in using the credentials of the user that you created for Business Central in Section 14.3, "Creating users" .
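If Business Central must be reachable from other hosts, you can also pass a bind address to the script. The -b option is standard Red Hat JBoss EAP behavior rather than something specific to this procedure, so treat the following line as an illustrative sketch:
./standalone.sh -c standalone-full.xml -b 0.0.0.0
Binding to 0.0.0.0 makes the server listen on all network interfaces; adjust the address to match your network policy.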
[ "./standalone.sh -c standalone-full.xml", "standalone.bat -c standalone-full.xml", "./standalone.sh", "standalone.bat" ]
https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/installing_and_configuring_red_hat_decision_manager/eap-ba-dm-run-proc_install-on-eap
11.2. Performing a Driver Update During Installation
11.2. Performing a Driver Update During Installation At the very beginning of the installation process, you can perform a driver update in the following ways: let the installation program automatically find and offer a driver update for installation, let the installation program prompt you to locate a driver update, manually specify a path to a driver update image or an RPM package. Important Always make sure to put your driver update discs on a standard disk partition. Advanced storage, such as RAID or LVM volumes, might not be accessible during the early stage of the installation when you perform driver updates. 11.2.1. Automatic Driver Update To have the installation program automatically recognize a driver update disc, connect a block device with the OEMDRV volume label to your computer before starting the installation process. Note Starting with Red Hat Enterprise Linux 7.2, you can also use the OEMDRV block device to automatically load a Kickstart file. This file must be named ks.cfg and placed in the root of the device to be loaded. See Chapter 27, Kickstart Installations for more information about Kickstart installations. When the installation begins, the installation program detects all available storage connected to the system. If it finds a storage device labeled OEMDRV , it will treat it as a driver update disc and attempt to load driver updates from this device. You will be prompted to select which drivers to load: Figure 11.1. Selecting a Driver Use number keys to toggle selection on individual drivers. When ready, press c to install the selected drivers and proceed to the Anaconda graphical user interface. 11.2.2. Assisted Driver Update It is always recommended to have a block device with the OEMDRV volume label available to install a driver during installation. However, if no such device is detected and the inst.dd option was specified at the boot command line, the installation program lets you find the driver disk in interactive mode. In the first step, select a local disk partition from the list for Anaconda to scan for ISO files. Then, select one of the detected ISO files. Finally, select one or more available drivers. The image below demonstrates the process in the text user interface with individual steps highlighted. Figure 11.2. Selecting a Driver Interactively Note If you extracted your ISO image file and burned it to a CD or DVD but the media does not have the OEMDRV volume label, either use the inst.dd option with no arguments and use the menu to select the device, or use the following boot option for the installation program to scan the media for drivers: Use number keys to toggle selection on individual drivers. When ready, press c to install the selected drivers and proceed to the Anaconda graphical user interface. 11.2.3. Manual Driver Update For manual driver installation, prepare an ISO image file containing your drivers to an accessible location, such as a USB flash drive or a web server, and connect it to your computer. At the welcome screen, hit Tab to display the boot command line and append the inst.dd= location to it, where location is a path to the driver update disc: Figure 11.3. Specifying a Path to a Driver Update Typically, the image file is located on a web server (for example, http://server.example.com/dd.iso ) or on a USB flash drive (for example, /dev/sdb1 ). It is also possible to specify an RPM package containing the driver update (for example, http://server.example.com/dd.rpm ). When ready, hit Enter to execute the boot command. 
Then, your selected drivers will be loaded and the installation process will proceed normally. 11.2.4. Blacklisting a Driver A malfunctioning driver can prevent a system from booting normally during installation. When this happens, you can disable (or blacklist) the driver by customizing the boot command line. At the boot menu, display the boot command line by hitting the Tab key. Then, append the modprobe.blacklist= driver_name option to it. Replace driver_name with the name of the driver or drivers that you want to disable, for example: Note that the drivers blacklisted during installation using the modprobe.blacklist= boot option will remain disabled on the installed system and appear in the /etc/modprobe.d/anaconda-blacklist.conf file. See Chapter 23, Boot Options for more information about blacklisting drivers and other boot options.
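For example, a driver update and a driver blacklist can be combined on the same boot command line; the server name below is illustrative only:
inst.dd=http://server.example.com/dd.iso modprobe.blacklist=ahci
Append both options to the boot command line after pressing Tab, exactly as described in the previous sections.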
[ "inst.dd=/dev/sr0", "modprobe.blacklist=ahci" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/installation_guide/sect-driver-updates-performing-ppc
10.4. Exporting and Importing an Encrypted Database
10.4. Exporting and Importing an Encrypted Database Exporting and importing encrypted databases is similar to exporting and importing regular databases. However, the encrypted information must be decrypted when you export the data and re-encrypted when you reimport it to the database. 10.4.1. Exporting an Encrypted Database To export data from an encrypted database, pass the -E parameter to the dsconf command. For example, to export the complete userRoot database with decrypted attributes: Alternatively, you can export only a specific subtree. For example, to export all data from the ou=People,dc=example,dc=com entry: For further details about using dsconf to export data, see Section 6.2.1.1.1, "Exporting a Databases Using the dsconf backend export Command" . 10.4.2. Importing an LDIF File into an Encrypted Database To import data to a database when attribute encryption is enabled: Stop the Directory Server instance: If you replaced the certificate database between the last export and this import, edit the /etc/dirsrv/slapd- instance_name /dse.ldif file, and remove the following entries including their attributes: cn=AES,cn=encrypted attribute keys,cn= database_name ,cn=ldbm database,cn=plugins,cn=config cn=3DES,cn=encrypted attribute keys,cn= database_name ,cn=ldbm database,cn=plugins,cn=config Important Remove the entries for all databases. If any entry that contains the nsSymmetricKey attribute is left in the /etc/dirsrv/slapd- instance_name /dse.ldif file, Directory Server will fail to start. Import the LDIF file. For example, to import the /tmp/example.ldif into the userRoot database: The --encrypted parameter enables the script to encrypt attributes configured for encryption during the import. Start the instance:
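Putting the import steps together, the complete sequence for the userRoot database on an instance named instance_name is shown below; the instance name, database name, and LDIF path are placeholders taken from the examples above:
dsctl instance_name stop
dsctl instance_name ldif2db --encrypted userRoot /tmp/example.ldif
dsctl instance_name start
Running the commands in this order ensures that the attributes configured for encryption are encrypted during the import and that the server starts only after the data is in place.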
[ "dsconf -D \"cn=Directory Manager\" ldap://server.example.com backend export -E userRoot", "dsconf -D \"cn=Directory Manager\" ldap://server.example.com backend export -E -s \"ou=People,dc=example,dc=com\" userRoot", "dsctl instance_name stop", "dsctl instance_name ldif2db --encrypted userRoot /tmp/example.ldif", "dsctl instance_name start" ]
https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/administration_guide/Database_Encryption-Exporting_and_Importing_an_Encrypted_Database
Chapter 78. Kubernetes Persistent Volume Claim
Chapter 78. Kubernetes Persistent Volume Claim Since Camel 2.17 Only producer is supported The Kubernetes Persistent Volume Claim component is one of the Kubernetes Components which provides a producer to execute Kubernetes Persistent Volume Claims operations. 78.1. Dependencies When using kubernetes-persistent-volumes-claims with Red Hat build of Apache Camel for Spring Boot,use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-kubernetes-starter</artifactId> </dependency> 78.2. Configuring Options Camel components are configured on two separate levels: component level endpoint level 78.2.1. Configuring Component Options The component level is the highest level which holds general and common configurations that are inherited by the endpoints. For example a component may have security settings, credentials for authentication, urls for network connection and so forth. Some components only have a few options, and others may have many. Because components typically have pre configured defaults that are commonly used, then you may often only need to configure a few options on a component; or none at all. Configuring components can be done with the Component DSL , in a configuration file (application.properties|yaml), or directly with Java code. 78.2.2. Configuring Endpoint Options Where you find yourself configuring the most is on endpoints, as endpoints often have many options, which allows you to configure what you need the endpoint to do. The options are also categorized into whether the endpoint is used as consumer (from) or as a producer (to), or used for both. Configuring endpoints is most often done directly in the endpoint URI as path and query parameters. You can also use the Endpoint DSL as a type safe way of configuring endpoints. A good practice when configuring options is to use Property Placeholders , which allows to not hardcode urls, port numbers, sensitive information, and other settings. In other words placeholders allows to externalize the configuration from your code, and gives more flexibility and reuse. The following two sections lists all the options, firstly for the component followed by the endpoint. 78.3. Component Options The Kubernetes Persistent Volume Claim component supports 3 options, which are listed below. Name Description Default Type kubernetesClient (producer) Autowired To use an existing kubernetes client. KubernetesClient lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean autowiredEnabled (advanced) Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true boolean 78.4. 
Endpoint Options The Kubernetes Persistent Volume Claim endpoint is configured using URI syntax: with the following path and query parameters: 78.4.1. Path Parameters (1 parameters) Name Description Default Type masterUrl (producer) Required Kubernetes Master url. String 78.4.2. Query Parameters (21 parameters) Name Description Default Type apiVersion (producer) The Kubernetes API Version to use. String dnsDomain (producer) The dns domain, used for ServiceCall EIP. String kubernetesClient (producer) Default KubernetesClient to use if provided. KubernetesClient namespace (producer) The namespace. String operation (producer) Producer operation to do on Kubernetes. String portName (producer) The port name, used for ServiceCall EIP. String portProtocol (producer) The port protocol, used for ServiceCall EIP. tcp String lazyStartProducer (producer (advanced)) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean connectionTimeout (advanced) Connection timeout in milliseconds to use when making requests to the Kubernetes API server. Integer caCertData (security) The CA Cert Data. String caCertFile (security) The CA Cert File. String clientCertData (security) The Client Cert Data. String clientCertFile (security) The Client Cert File. String clientKeyAlgo (security) The Key Algorithm used by the client. String clientKeyData (security) The Client Key data. String clientKeyFile (security) The Client Key file. String clientKeyPassphrase (security) The Client Key Passphrase. String oauthToken (security) The Auth Token. String password (security) Password to connect to Kubernetes. String trustCerts (security) Define if the certs we used are trusted anyway or not. Boolean username (security) Username to connect to Kubernetes. String 78.5. Message Headers The Kubernetes Persistent Volume Claim component supports 5 message header(s), which is/are listed below: Name Description Default Type CamelKubernetesOperation (producer) Constant: KUBERNETES_OPERATION The Producer operation. String CamelKubernetesNamespaceName (producer) Constant: KUBERNETES_NAMESPACE_NAME The namespace name. String CamelKubernetesPersistentVolumesClaimsLabels (producer) Constant: KUBERNETES_PERSISTENT_VOLUMES_CLAIMS_LABELS The persistent volume claim labels. Map CamelKubernetesPersistentVolumeClaimName (producer) Constant: KUBERNETES_PERSISTENT_VOLUME_CLAIM_NAME The persistent volume claim name. String CamelKubernetesPersistentVolumeClaimSpec (producer) Constant: KUBERNETES_PERSISTENT_VOLUME_CLAIM_SPEC The spec for a persistent volume claim. PersistentVolumeClaimSpec 78.6. Supported producer operation listPersistentVolumesClaims listPersistentVolumesClaimsByLabels getPersistentVolumeClaim createPersistentVolumeClaim updatePersistentVolumeClaim deletePersistentVolumeClaim 78.7. Kubernetes Persistent Volume Claims Producer Examples listPersistentVolumesClaims: this operation lists the pvc on a kubernetes cluster. from("direct:list"). 
toF("kubernetes-persistent-volumes-claims:///?kubernetesClient=#kubernetesClient&operation=listPersistentVolumesClaims"). to("mock:result"); This operation returns a List of pvc from your cluster. listPersistentVolumesClaimsByLabels: this operation lists the pvc by labels on a kubernetes cluster. from("direct:listByLabels").process(new Processor() { @Override public void process(Exchange exchange) throws Exception { Map<String, String> labels = new HashMap<>(); labels.put("key1", "value1"); labels.put("key2", "value2"); exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_PERSISTENT_VOLUMES_CLAIMS_LABELS, labels); } }); toF("kubernetes-persistent-volumes-claims:///?kubernetesClient=#kubernetesClient&operation=listPersistentVolumesClaimsByLabels"). to("mock:result"); This operation returns a List of pvc from your cluster, using a label selector (with key1 and key2, with value value1 and value2). 78.8. Spring Boot Auto-Configuration The component supports 102 options, which are listed below. Name Description Default Type camel.cluster.kubernetes.attributes Custom service attributes. Map camel.cluster.kubernetes.cluster-labels Set the labels used to identify the pods composing the cluster. Map camel.cluster.kubernetes.config-map-name Set the name of the ConfigMap used to do optimistic locking (defaults to 'leaders'). String camel.cluster.kubernetes.connection-timeout-millis Connection timeout in milliseconds to use when making requests to the Kubernetes API server. Integer camel.cluster.kubernetes.enabled Sets if the Kubernetes cluster service should be enabled or not, default is false. false Boolean camel.cluster.kubernetes.id Cluster Service ID. String camel.cluster.kubernetes.jitter-factor A jitter factor to apply in order to prevent all pods to call Kubernetes APIs in the same instant. Double camel.cluster.kubernetes.kubernetes-namespace Set the name of the Kubernetes namespace containing the pods and the configmap (autodetected by default). String camel.cluster.kubernetes.lease-duration-millis The default duration of the lease for the current leader. Long camel.cluster.kubernetes.master-url Set the URL of the Kubernetes master (read from Kubernetes client properties by default). String camel.cluster.kubernetes.order Service lookup order/priority. Integer camel.cluster.kubernetes.pod-name Set the name of the current pod (autodetected from container host name by default). String camel.cluster.kubernetes.renew-deadline-millis The deadline after which the leader must stop its services because it may have lost the leadership. Long camel.cluster.kubernetes.retry-period-millis The time between two subsequent attempts to check and acquire the leadership. It is randomized using the jitter factor. Long camel.component.kubernetes-config-maps.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-config-maps.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. 
By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-config-maps.enabled Whether to enable auto configuration of the kubernetes-config-maps component. This is enabled by default. Boolean camel.component.kubernetes-config-maps.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-config-maps.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-custom-resources.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-custom-resources.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-custom-resources.enabled Whether to enable auto configuration of the kubernetes-custom-resources component. This is enabled by default. Boolean camel.component.kubernetes-custom-resources.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-custom-resources.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-deployments.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. 
true Boolean camel.component.kubernetes-deployments.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-deployments.enabled Whether to enable auto configuration of the kubernetes-deployments component. This is enabled by default. Boolean camel.component.kubernetes-deployments.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-deployments.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-events.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-events.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-events.enabled Whether to enable auto configuration of the kubernetes-events component. This is enabled by default. Boolean camel.component.kubernetes-events.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-events.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-hpa.autowired-enabled Whether autowiring is enabled. 
This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-hpa.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-hpa.enabled Whether to enable auto configuration of the kubernetes-hpa component. This is enabled by default. Boolean camel.component.kubernetes-hpa.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-hpa.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-job.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-job.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-job.enabled Whether to enable auto configuration of the kubernetes-job component. This is enabled by default. Boolean camel.component.kubernetes-job.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-job.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-namespaces.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-namespaces.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-namespaces.enabled Whether to enable auto configuration of the kubernetes-namespaces component. This is enabled by default. Boolean camel.component.kubernetes-namespaces.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-namespaces.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-nodes.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-nodes.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-nodes.enabled Whether to enable auto configuration of the kubernetes-nodes component. This is enabled by default. Boolean camel.component.kubernetes-nodes.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-nodes.lazy-start-producer Whether the producer should be started lazy (on the first message). 
By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-persistent-volumes-claims.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-persistent-volumes-claims.enabled Whether to enable auto configuration of the kubernetes-persistent-volumes-claims component. This is enabled by default. Boolean camel.component.kubernetes-persistent-volumes-claims.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-persistent-volumes-claims.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-persistent-volumes.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-persistent-volumes.enabled Whether to enable auto configuration of the kubernetes-persistent-volumes component. This is enabled by default. Boolean camel.component.kubernetes-persistent-volumes.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-persistent-volumes.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-pods.autowired-enabled Whether autowiring is enabled. 
This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-pods.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-pods.enabled Whether to enable auto configuration of the kubernetes-pods component. This is enabled by default. Boolean camel.component.kubernetes-pods.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-pods.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-replication-controllers.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-replication-controllers.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-replication-controllers.enabled Whether to enable auto configuration of the kubernetes-replication-controllers component. This is enabled by default. Boolean camel.component.kubernetes-replication-controllers.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-replication-controllers.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. Default: false. Type: Boolean.
camel.component.kubernetes-resources-quota.autowired-enabled: Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. Default: true. Type: Boolean.
camel.component.kubernetes-resources-quota.enabled: Whether to enable auto configuration of the kubernetes-resources-quota component. This is enabled by default. Type: Boolean.
camel.component.kubernetes-resources-quota.kubernetes-client: To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. Type: KubernetesClient.
camel.component.kubernetes-resources-quota.lazy-start-producer: Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. Default: false. Type: Boolean.
camel.component.kubernetes-secrets.autowired-enabled: Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. Default: true. Type: Boolean.
camel.component.kubernetes-secrets.enabled: Whether to enable auto configuration of the kubernetes-secrets component. This is enabled by default. Type: Boolean.
camel.component.kubernetes-secrets.kubernetes-client: To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. Type: KubernetesClient.
camel.component.kubernetes-secrets.lazy-start-producer: Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. Default: false. Type: Boolean.
camel.component.kubernetes-service-accounts.autowired-enabled: Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. Default: true. Type: Boolean.
camel.component.kubernetes-service-accounts.enabled: Whether to enable auto configuration of the kubernetes-service-accounts component. This is enabled by default. Type: Boolean.
camel.component.kubernetes-service-accounts.kubernetes-client: To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. Type: KubernetesClient.
camel.component.kubernetes-service-accounts.lazy-start-producer: Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. Default: false. Type: Boolean.
camel.component.kubernetes-services.autowired-enabled: Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. Default: true. Type: Boolean.
camel.component.kubernetes-services.bridge-error-handler: Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. Default: false. Type: Boolean.
camel.component.kubernetes-services.enabled: Whether to enable auto configuration of the kubernetes-services component. This is enabled by default. Type: Boolean.
camel.component.kubernetes-services.kubernetes-client: To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. Type: KubernetesClient.
camel.component.kubernetes-services.lazy-start-producer: Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. Default: false. Type: Boolean.
camel.component.openshift-build-configs.autowired-enabled: Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. Default: true. Type: Boolean.
camel.component.openshift-build-configs.enabled: Whether to enable auto configuration of the openshift-build-configs component. This is enabled by default. Type: Boolean.
camel.component.openshift-build-configs.kubernetes-client: To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. Type: KubernetesClient.
camel.component.openshift-build-configs.lazy-start-producer: Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. Default: false. Type: Boolean.
camel.component.openshift-builds.autowired-enabled: Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. Default: true. Type: Boolean.
camel.component.openshift-builds.enabled: Whether to enable auto configuration of the openshift-builds component. This is enabled by default. Type: Boolean.
camel.component.openshift-builds.kubernetes-client: To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. Type: KubernetesClient.
camel.component.openshift-builds.lazy-start-producer: Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. Default: false. Type: Boolean.
camel.component.openshift-deploymentconfigs.autowired-enabled: Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. Default: true. Type: Boolean.
camel.component.openshift-deploymentconfigs.bridge-error-handler: Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. Default: false. Type: Boolean.
camel.component.openshift-deploymentconfigs.enabled: Whether to enable auto configuration of the openshift-deploymentconfigs component. This is enabled by default. Type: Boolean.
camel.component.openshift-deploymentconfigs.kubernetes-client: To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. Type: KubernetesClient.
camel.component.openshift-deploymentconfigs.lazy-start-producer: Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. Default: false. Type: Boolean.
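Because these auto-configuration options are ordinary Spring Boot properties, they can be set like any other property. The following is a minimal sketch, not an excerpt from the component documentation, showing how two of the options listed above could be tuned in a standard Spring Boot application.properties file; the values shown are simply the documented defaults:
camel.component.kubernetes-services.lazy-start-producer=false
camel.component.kubernetes-secrets.autowired-enabled=true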
[ "<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-kubernetes-starter</artifactId> </dependency>", "kubernetes-persistent-volumes-claims:masterUrl", "from(\"direct:list\"). toF(\"kubernetes-persistent-volumes-claims:///?kubernetesClient=#kubernetesClient&operation=listPersistentVolumesClaims\"). to(\"mock:result\");", "from(\"direct:listByLabels\").process(new Processor() { @Override public void process(Exchange exchange) throws Exception { Map<String, String> labels = new HashMap<>(); labels.put(\"key1\", \"value1\"); labels.put(\"key2\", \"value2\"); exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_PERSISTENT_VOLUMES_CLAIMS_LABELS, labels); } }); toF(\"kubernetes-persistent-volumes-claims:///?kubernetesClient=#kubernetesClient&operation=listPersistentVolumesClaimsByLabels\"). to(\"mock:result\");" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.8/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-kubernetes-persistent-volume-claim-component-starter
Chapter 6. Installing a cluster on OpenStack with Kuryr on your own infrastructure
Chapter 6. Installing a cluster on OpenStack with Kuryr on your own infrastructure Important Kuryr is a deprecated feature. Deprecated functionality is still included in OpenShift Container Platform and continues to be supported; however, it will be removed in a future release of this product and is not recommended for new deployments. For the most recent list of major functionality that has been deprecated or removed within OpenShift Container Platform, refer to the Deprecated and removed features section of the OpenShift Container Platform release notes. In OpenShift Container Platform version 4.13, you can install a cluster on Red Hat OpenStack Platform (RHOSP) that runs on user-provisioned infrastructure. Using your own infrastructure allows you to integrate your cluster with existing infrastructure and modifications. The process requires more labor on your part than installer-provisioned installations, because you must create all RHOSP resources, like Nova servers, Neutron ports, and security groups. However, Red Hat provides Ansible playbooks to help you in the deployment process. 6.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You verified that OpenShift Container Platform 4.13 is compatible with your RHOSP version by using the Supported platforms for OpenShift clusters section. You can also compare platform support across different versions by viewing the OpenShift Container Platform on RHOSP support matrix . You have an RHOSP account where you want to install OpenShift Container Platform. You understand performance and scalability practices for cluster scaling, control plane sizing, and etcd. For more information, see Recommended practices for scaling the cluster . On the machine from which you run the installation program, you have: A single directory in which you can keep the files you create during the installation process Python 3 6.2. About Kuryr SDN Important Kuryr is a deprecated feature. Deprecated functionality is still included in OpenShift Container Platform and continues to be supported; however, it will be removed in a future release of this product and is not recommended for new deployments. For the most recent list of major functionality that has been deprecated or removed within OpenShift Container Platform, refer to the Deprecated and removed features section of the OpenShift Container Platform release notes. Kuryr is a container network interface (CNI) plugin solution that uses the Neutron and Octavia Red Hat OpenStack Platform (RHOSP) services to provide networking for pods and Services. Kuryr and OpenShift Container Platform integration is primarily designed for OpenShift Container Platform clusters running on RHOSP VMs. Kuryr improves the network performance by plugging OpenShift Container Platform pods into RHOSP SDN. In addition, it provides interconnectivity between pods and RHOSP virtual instances. Kuryr components are installed as pods in OpenShift Container Platform using the openshift-kuryr namespace: kuryr-controller - a single service instance installed on a master node. This is modeled in OpenShift Container Platform as a Deployment object. kuryr-cni - a container installing and configuring Kuryr as a CNI driver on each OpenShift Container Platform node. This is modeled in OpenShift Container Platform as a DaemonSet object. 
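After a cluster with Kuryr is running, a quick way to confirm that these components were deployed is to list the pods in the openshift-kuryr namespace. This is a minimal check and assumes the oc CLI is logged in to the cluster: USD oc get pods -n openshift-kuryr You should typically see a single kuryr-controller pod and one kuryr-cni pod per node, matching the Deployment and DaemonSet objects described above.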
The Kuryr controller watches the OpenShift Container Platform API server for pod, service, and namespace create, update, and delete events. It maps the OpenShift Container Platform API calls to corresponding objects in Neutron and Octavia. This means that every network solution that implements the Neutron trunk port functionality can be used to back OpenShift Container Platform via Kuryr. This includes open source solutions such as Open vSwitch (OVS) and Open Virtual Network (OVN) as well as Neutron-compatible commercial SDNs. Kuryr is recommended for OpenShift Container Platform deployments on encapsulated RHOSP tenant networks to avoid double encapsulation, such as running an encapsulated OpenShift Container Platform SDN over an RHOSP network. If you use provider networks or tenant VLANs, you do not need to use Kuryr to avoid double encapsulation. The performance benefit is negligible. Depending on your configuration, though, using Kuryr to avoid having two overlays might still be beneficial. Kuryr is not recommended in deployments where all of the following criteria are true:
The RHOSP version is less than 16.
The deployment uses UDP services, or a large number of TCP services on few hypervisors.
or
The ovn-octavia Octavia driver is disabled.
The deployment uses a large number of TCP services on few hypervisors.
6.3. Resource guidelines for installing OpenShift Container Platform on RHOSP with Kuryr When using Kuryr SDN, the pods, services, namespaces, and network policies are using resources from the RHOSP quota; this increases the minimum requirements. Kuryr also has some additional requirements on top of what a default install requires. Use the following quota to satisfy a default cluster's minimum requirements: Table 6.1. Recommended resources for a default OpenShift Container Platform cluster on RHOSP with Kuryr
Floating IP addresses: 3, plus the expected number of Services of LoadBalancer type
Ports: 1500 (1 needed per Pod)
Routers: 1
Subnets: 250 (1 needed per Namespace/Project)
Networks: 250 (1 needed per Namespace/Project)
RAM: 112 GB
vCPUs: 28
Volume storage: 275 GB
Instances: 7
Security groups: 250 (1 needed per Service and per NetworkPolicy)
Security group rules: 1000
Server groups: 2, plus 1 for each additional availability zone in each machine pool
Load balancers: 100 (1 needed per Service)
Load balancer listeners: 500 (1 needed per Service-exposed port)
Load balancer pools: 500 (1 needed per Service-exposed port)
A cluster might function with fewer than recommended resources, but its performance is not guaranteed. Important If RHOSP object storage (Swift) is available and operated by a user account with the swiftoperator role, it is used as the default backend for the OpenShift Container Platform image registry. In this case, the volume storage requirement is 175 GB. Swift space requirements vary depending on the size of the image registry. Important If you are using Red Hat OpenStack Platform (RHOSP) version 16 with the Amphora driver rather than the OVN Octavia driver, security groups are associated with service accounts instead of user projects. Take the following notes into consideration when setting resources: The number of ports that are required is larger than the number of pods. Kuryr uses ports pools to have pre-created ports ready to be used by pods and speed up the pods' booting time. Each network policy is mapped into an RHOSP security group, and depending on the NetworkPolicy spec, one or more rules are added to the security group.
Each service is mapped to an RHOSP load balancer. Consider this requirement when estimating the number of security groups required for the quota. If you are using RHOSP version 15 or earlier, or the ovn-octavia driver , each load balancer has a security group with the user project. The quota does not account for load balancer resources (such as VM resources), but you must consider these resources when you decide the RHOSP deployment's size. The default installation will have more than 50 load balancers; the clusters must be able to accommodate them. If you are using RHOSP version 16 with the OVN Octavia driver enabled, only one load balancer VM is generated; services are load balanced through OVN flows. An OpenShift Container Platform deployment comprises control plane machines, compute machines, and a bootstrap machine. To enable Kuryr SDN, your environment must meet the following requirements: Run RHOSP 13+. Have Overcloud with Octavia. Use Neutron Trunk ports extension. Use openvswitch firewall driver if ML2/OVS Neutron driver is used instead of ovs-hybrid . 6.3.1. Increasing quota When using Kuryr SDN, you must increase quotas to satisfy the Red Hat OpenStack Platform (RHOSP) resources used by pods, services, namespaces, and network policies. Procedure Increase the quotas for a project by running the following command: USD sudo openstack quota set --secgroups 250 --secgroup-rules 1000 --ports 1500 --subnets 250 --networks 250 <project> 6.3.2. Configuring Neutron Kuryr CNI leverages the Neutron Trunks extension to plug containers into the Red Hat OpenStack Platform (RHOSP) SDN, so you must use the trunks extension for Kuryr to properly work. In addition, if you leverage the default ML2/OVS Neutron driver, the firewall must be set to openvswitch instead of ovs_hybrid so that security groups are enforced on trunk subports and Kuryr can properly handle network policies. 6.3.3. Configuring Octavia Kuryr SDN uses Red Hat OpenStack Platform (RHOSP)'s Octavia LBaaS to implement OpenShift Container Platform services. Thus, you must install and configure Octavia components in RHOSP to use Kuryr SDN. To enable Octavia, you must include the Octavia service during the installation of the RHOSP Overcloud, or upgrade the Octavia service if the Overcloud already exists. The following steps for enabling Octavia apply to both a clean install of the Overcloud or an Overcloud update. Note The following steps only capture the key pieces required during the deployment of RHOSP when dealing with Octavia. It is also important to note that registry methods vary. This example uses the local registry method. Procedure If you are using the local registry, create a template to upload the images to the registry. For example: (undercloud) USD openstack overcloud container image prepare \ -e /usr/share/openstack-tripleo-heat-templates/environments/services-docker/octavia.yaml \ --namespace=registry.access.redhat.com/rhosp13 \ --push-destination=<local-ip-from-undercloud.conf>:8787 \ --prefix=openstack- \ --tag-from-label {version}-{product-version} \ --output-env-file=/home/stack/templates/overcloud_images.yaml \ --output-images-file /home/stack/local_registry_images.yaml Verify that the local_registry_images.yaml file contains the Octavia images. For example: ... 
- imagename: registry.access.redhat.com/rhosp13/openstack-octavia-api:13.0-43 push_destination: <local-ip-from-undercloud.conf>:8787 - imagename: registry.access.redhat.com/rhosp13/openstack-octavia-health-manager:13.0-45 push_destination: <local-ip-from-undercloud.conf>:8787 - imagename: registry.access.redhat.com/rhosp13/openstack-octavia-housekeeping:13.0-45 push_destination: <local-ip-from-undercloud.conf>:8787 - imagename: registry.access.redhat.com/rhosp13/openstack-octavia-worker:13.0-44 push_destination: <local-ip-from-undercloud.conf>:8787 Note The Octavia container versions vary depending upon the specific RHOSP release installed. Pull the container images from registry.redhat.io to the Undercloud node: (undercloud) USD sudo openstack overcloud container image upload \ --config-file /home/stack/local_registry_images.yaml \ --verbose This may take some time depending on the speed of your network and Undercloud disk. Install or update your Overcloud environment with Octavia: USD openstack overcloud deploy --templates \ -e /usr/share/openstack-tripleo-heat-templates/environments/services-docker/octavia.yaml \ -e octavia_timeouts.yaml Note This command only includes the files associated with Octavia; it varies based on your specific installation of RHOSP. See the RHOSP documentation for further information. For more information on customizing your Octavia installation, see installation of Octavia using Director . Note When leveraging Kuryr SDN, the Overcloud installation requires the Neutron trunk extension. This is available by default on director deployments. Use the openvswitch firewall instead of the default ovs-hybrid when the Neutron backend is ML2/OVS. There is no need for modifications if the backend is ML2/OVN. 6.3.3.1. The Octavia OVN Driver Octavia supports multiple provider drivers through the Octavia API. To see all available Octavia provider drivers, on a command line, enter: USD openstack loadbalancer provider list Example output +---------+-------------------------------------------------+ | name | description | +---------+-------------------------------------------------+ | amphora | The Octavia Amphora driver. | | octavia | Deprecated alias of the Octavia Amphora driver. | | ovn | Octavia OVN driver. | +---------+-------------------------------------------------+ Beginning with RHOSP version 16, the Octavia OVN provider driver ( ovn ) is supported on OpenShift Container Platform on RHOSP deployments. ovn is an integration driver for the load balancing that Octavia and OVN provide. It supports basic load balancing capabilities, and is based on OpenFlow rules. The driver is automatically enabled in Octavia by Director on deployments that use OVN Neutron ML2. The Amphora provider driver is the default driver. If ovn is enabled, however, Kuryr uses it. If Kuryr uses ovn instead of Amphora, it offers the following benefits: Decreased resource requirements. Kuryr does not require a load balancer VM for each service. Reduced network latency. Increased service creation speed by using OpenFlow rules instead of a VM for each service. Distributed load balancing actions across all nodes instead of centralized on Amphora VMs. 6.3.4. Known limitations of installing with Kuryr Using OpenShift Container Platform with Kuryr SDN has several known limitations. RHOSP general limitations Using OpenShift Container Platform with Kuryr SDN has several limitations that apply to all versions and environments: Service objects with the NodePort type are not supported. 
Clusters that use the OVN Octavia provider driver support Service objects for which the .spec.selector property is unspecified only if the .subsets.addresses property of the Endpoints object includes the subnet of the nodes or pods. If the subnet on which machines are created is not connected to a router, or if the subnet is connected, but the router has no external gateway set, Kuryr cannot create floating IPs for Service objects with type LoadBalancer . Configuring the sessionAffinity=ClientIP property on Service objects does not have an effect. Kuryr does not support this setting. RHOSP version limitations Using OpenShift Container Platform with Kuryr SDN has several limitations that depend on the RHOSP version. RHOSP versions before 16 use the default Octavia load balancer driver (Amphora). This driver requires that one Amphora load balancer VM is deployed per OpenShift Container Platform service. Creating too many services can cause you to run out of resources. Deployments of later versions of RHOSP that have the OVN Octavia driver disabled also use the Amphora driver. They are subject to the same resource concerns as earlier versions of RHOSP. Kuryr SDN does not support automatic unidling by a service. RHOSP upgrade limitations As a result of the RHOSP upgrade process, the Octavia API might be changed, and upgrades to the Amphora images that are used for load balancers might be required. You can address API changes on an individual basis. If the Amphora image is upgraded, the RHOSP operator can handle existing load balancer VMs in two ways: Upgrade each VM by triggering a load balancer failover . Leave responsibility for upgrading the VMs to users. If the operator takes the first option, there might be short downtimes during failovers. If the operator takes the second option, the existing load balancers will not support upgraded Octavia API features, like UDP listeners. In this case, users must recreate their Services to use these features. 6.3.5. Control plane machines By default, the OpenShift Container Platform installation process creates three control plane machines. Each machine requires: An instance from the RHOSP quota A port from the RHOSP quota A flavor with at least 16 GB memory and 4 vCPUs At least 100 GB storage space from the RHOSP quota 6.3.6. Compute machines By default, the OpenShift Container Platform installation process creates three compute machines. Each machine requires: An instance from the RHOSP quota A port from the RHOSP quota A flavor with at least 8 GB memory and 2 vCPUs At least 100 GB storage space from the RHOSP quota Tip Compute machines host the applications that you run on OpenShift Container Platform; aim to run as many as you can. 6.3.7. Bootstrap machine During installation, a bootstrap machine is temporarily provisioned to stand up the control plane. After the production control plane is ready, the bootstrap machine is deprovisioned. The bootstrap machine requires: An instance from the RHOSP quota A port from the RHOSP quota A flavor with at least 16 GB memory and 4 vCPUs At least 100 GB storage space from the RHOSP quota 6.4. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.13, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager Hybrid Cloud Console to download the installation program and perform subscription management. 
If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 6.5. Downloading playbook dependencies The Ansible playbooks that simplify the installation process on user-provisioned infrastructure require several Python modules. On the machine where you will run the installer, add the modules' repositories and then download them. Note These instructions assume that you are using Red Hat Enterprise Linux (RHEL) 8. Prerequisites Python 3 is installed on your machine. Procedure On a command line, add the repositories: Register with Red Hat Subscription Manager: USD sudo subscription-manager register # If not done already Pull the latest subscription data: USD sudo subscription-manager attach --pool=USDYOUR_POOLID # If not done already Disable the current repositories: USD sudo subscription-manager repos --disable=* # If not done already Add the required repositories: USD sudo subscription-manager repos \ --enable=rhel-8-for-x86_64-baseos-rpms \ --enable=openstack-16-tools-for-rhel-8-x86_64-rpms \ --enable=ansible-2.9-for-rhel-8-x86_64-rpms \ --enable=rhel-8-for-x86_64-appstream-rpms Install the modules: USD sudo yum install python3-openstackclient ansible python3-openstacksdk python3-netaddr ansible-collections-openstack Ensure that the python command points to python3 : USD sudo alternatives --set python /usr/bin/python3 6.6. Downloading the installation playbooks Download Ansible playbooks that you can use to install OpenShift Container Platform on your own Red Hat OpenStack Platform (RHOSP) infrastructure. Prerequisites The curl command-line tool is available on your machine. 
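Before you download the playbooks, you can optionally confirm that the tools installed in the previous section are available on your machine. This is only a simple sanity check; any equivalent verification works: USD curl --version USD ansible --version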
Procedure To download the playbooks to your working directory, run the following script from a command line: USD xargs -n 1 curl -O <<< ' https://raw.githubusercontent.com/openshift/installer/release-4.13/upi/openstack/bootstrap.yaml https://raw.githubusercontent.com/openshift/installer/release-4.13/upi/openstack/common.yaml https://raw.githubusercontent.com/openshift/installer/release-4.13/upi/openstack/compute-nodes.yaml https://raw.githubusercontent.com/openshift/installer/release-4.13/upi/openstack/control-plane.yaml https://raw.githubusercontent.com/openshift/installer/release-4.13/upi/openstack/inventory.yaml https://raw.githubusercontent.com/openshift/installer/release-4.13/upi/openstack/network.yaml https://raw.githubusercontent.com/openshift/installer/release-4.13/upi/openstack/security-groups.yaml https://raw.githubusercontent.com/openshift/installer/release-4.13/upi/openstack/down-bootstrap.yaml https://raw.githubusercontent.com/openshift/installer/release-4.13/upi/openstack/down-compute-nodes.yaml https://raw.githubusercontent.com/openshift/installer/release-4.13/upi/openstack/down-control-plane.yaml https://raw.githubusercontent.com/openshift/installer/release-4.13/upi/openstack/down-load-balancers.yaml https://raw.githubusercontent.com/openshift/installer/release-4.13/upi/openstack/down-network.yaml https://raw.githubusercontent.com/openshift/installer/release-4.13/upi/openstack/down-security-groups.yaml https://raw.githubusercontent.com/openshift/installer/release-4.13/upi/openstack/down-containers.yaml' The playbooks are downloaded to your machine. Important During the installation process, you can modify the playbooks to configure your deployment. Retain all playbooks for the life of your cluster. You must have the playbooks to remove your OpenShift Container Platform cluster from RHOSP. Important You must match any edits you make in the bootstrap.yaml , compute-nodes.yaml , control-plane.yaml , network.yaml , and security-groups.yaml files to the corresponding playbooks that are prefixed with down- . For example, edits to the bootstrap.yaml file must be reflected in the down-bootstrap.yaml file, too. If you do not edit both files, the supported cluster removal process will fail. 6.7. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a computer that runs Linux or macOS, with 500 MB of local disk space. Procedure Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select your infrastructure provider. Navigate to the page for your installation type, download the installation program that corresponds with your host operating system and architecture, and place the file in the directory where you will store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster. Important Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. 
For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from the Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 6.8. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) Next steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 6.9. Creating the Red Hat Enterprise Linux CoreOS (RHCOS) image The OpenShift Container Platform installation program requires that a Red Hat Enterprise Linux CoreOS (RHCOS) image be present in the Red Hat OpenStack Platform (RHOSP) cluster. Retrieve the latest RHCOS image, then upload it using the RHOSP CLI. Prerequisites The RHOSP CLI is installed. Procedure Log in to the Red Hat Customer Portal's Product Downloads page .
Under Version , select the most recent release of OpenShift Container Platform 4.13 for Red Hat Enterprise Linux (RHEL) 8. Important The RHCOS images might not change with every release of OpenShift Container Platform. You must download images with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Use the image versions that match your OpenShift Container Platform version if they are available. Download the Red Hat Enterprise Linux CoreOS (RHCOS) - OpenStack Image (QCOW) . Decompress the image. Note You must decompress the RHOSP image before the cluster can use it. The name of the downloaded file might not contain a compression extension, like .gz or .tgz . To find out if or how the file is compressed, in a command line, enter: USD file <name_of_downloaded_file> From the image that you downloaded, create an image that is named rhcos in your cluster by using the RHOSP CLI: USD openstack image create --container-format=bare --disk-format=qcow2 --file rhcos-USD{RHCOS_VERSION}-openstack.qcow2 rhcos Important Depending on your RHOSP environment, you might be able to upload the image in either .raw or .qcow2 formats . If you use Ceph, you must use the .raw format. Warning If the installation program finds multiple images with the same name, it chooses one of them at random. To avoid this behavior, create unique names for resources in RHOSP. After you upload the image to RHOSP, it is usable in the installation process. 6.10. Verifying external network access The OpenShift Container Platform installation process requires external network access. You must provide an external network value to it, or deployment fails. Before you begin the process, verify that a network with the external router type exists in Red Hat OpenStack Platform (RHOSP). Prerequisites Configure OpenStack's networking service to have DHCP agents forward instances' DNS queries Procedure Using the RHOSP CLI, verify the name and ID of the 'External' network: USD openstack network list --long -c ID -c Name -c "Router Type" Example output +--------------------------------------+----------------+-------------+ | ID | Name | Router Type | +--------------------------------------+----------------+-------------+ | 148a8023-62a7-4672-b018-003462f8d7dc | public_network | External | +--------------------------------------+----------------+-------------+ A network with an external router type appears in the network list. If at least one does not, see Creating a default floating IP network and Creating a default provider network . Note If the Neutron trunk service plugin is enabled, a trunk port is created by default. For more information, see Neutron trunk port . 6.11. Enabling access to the environment At deployment, all OpenShift Container Platform machines are created in a Red Hat OpenStack Platform (RHOSP)-tenant network. Therefore, they are not accessible directly in most RHOSP deployments. You can configure OpenShift Container Platform API and application access by using floating IP addresses (FIPs) during installation. You can also complete an installation without configuring FIPs, but the installer will not configure a way to reach the API or applications externally. 6.11.1. Enabling access with floating IP addresses Create floating IP (FIP) addresses for external access to the OpenShift Container Platform API, cluster applications, and the bootstrap process. 
Procedure Using the Red Hat OpenStack Platform (RHOSP) CLI, create the API FIP: USD openstack floating ip create --description "API <cluster_name>.<base_domain>" <external_network> Using the Red Hat OpenStack Platform (RHOSP) CLI, create the apps, or Ingress, FIP: USD openstack floating ip create --description "Ingress <cluster_name>.<base_domain>" <external_network> By using the Red Hat OpenStack Platform (RHOSP) CLI, create the bootstrap FIP: USD openstack floating ip create --description "bootstrap machine" <external_network> Add records that follow these patterns to your DNS server for the API and Ingress FIPs:
api.<cluster_name>.<base_domain>. IN A <API_FIP>
*.apps.<cluster_name>.<base_domain>. IN A <apps_FIP>
Note If you do not control the DNS server, you can access the cluster by adding the cluster domain names such as the following to your /etc/hosts file:
<api_floating_ip> api.<cluster_name>.<base_domain>
<application_floating_ip> grafana-openshift-monitoring.apps.<cluster_name>.<base_domain>
<application_floating_ip> prometheus-k8s-openshift-monitoring.apps.<cluster_name>.<base_domain>
<application_floating_ip> oauth-openshift.apps.<cluster_name>.<base_domain>
<application_floating_ip> console-openshift-console.apps.<cluster_name>.<base_domain>
<application_floating_ip> integrated-oauth-server-openshift-authentication.apps.<cluster_name>.<base_domain>
The cluster domain names in the /etc/hosts file grant access to the web console and the monitoring interface of your cluster locally. You can also use the kubectl or oc command-line tools. You can access the user applications by using the additional entries pointing to the <application_floating_ip>. This action makes the API and applications accessible to only you, which is not suitable for production deployment, but does allow installation for development and testing. Add the FIPs to the inventory.yaml file as the values of the following variables: os_api_fip os_bootstrap_fip os_ingress_fip If you use these values, you must also enter an external network as the value of the os_external_network variable in the inventory.yaml file. Tip You can make OpenShift Container Platform resources available outside of the cluster by assigning a floating IP address and updating your firewall configuration. 6.11.2. Completing installation without floating IP addresses You can install OpenShift Container Platform on Red Hat OpenStack Platform (RHOSP) without providing floating IP addresses. In the inventory.yaml file, do not define the following variables: os_api_fip os_bootstrap_fip os_ingress_fip If you cannot provide an external network, you can also leave os_external_network blank. If you do not provide a value for os_external_network , a router is not created for you, and, without additional action, the installer will fail to retrieve an image from Glance. Later in the installation process, when you create network resources, you must configure external connectivity on your own. If you run the installer with the wait-for command from a system that cannot reach the cluster API due to a lack of floating IP addresses or name resolution, installation fails. To prevent installation failure in these cases, you can use a proxy network or run the installer from a system that is on the same network as your machines. Note You can enable name resolution by creating DNS records for the API and Ingress ports. For example:
api.<cluster_name>.<base_domain>. IN A <api_port_IP>
*.apps.<cluster_name>.<base_domain>.
IN A <ingress_port_IP> If you do not control the DNS server, you can add the record to your /etc/hosts file. This action makes the API accessible to only you, which is not suitable for production deployment but does allow installation for development and testing. 6.12. Defining parameters for the installation program The OpenShift Container Platform installation program relies on a file that is called clouds.yaml . The file describes Red Hat OpenStack Platform (RHOSP) configuration parameters, including the project name, log in information, and authorization service URLs. Procedure Create the clouds.yaml file: If your RHOSP distribution includes the Horizon web UI, generate a clouds.yaml file in it. Important Remember to add a password to the auth field. You can also keep secrets in a separate file from clouds.yaml . If your RHOSP distribution does not include the Horizon web UI, or you do not want to use Horizon, create the file yourself. For detailed information about clouds.yaml , see Config files in the RHOSP documentation. clouds: shiftstack: auth: auth_url: http://10.10.14.42:5000/v3 project_name: shiftstack username: <username> password: <password> user_domain_name: Default project_domain_name: Default dev-env: region_name: RegionOne auth: username: <username> password: <password> project_name: 'devonly' auth_url: 'https://10.10.14.22:5001/v2.0' If your RHOSP installation uses self-signed certificate authority (CA) certificates for endpoint authentication: Copy the certificate authority file to your machine. Add the cacerts key to the clouds.yaml file. The value must be an absolute, non-root-accessible path to the CA certificate: clouds: shiftstack: ... cacert: "/etc/pki/ca-trust/source/anchors/ca.crt.pem" Tip After you run the installer with a custom CA certificate, you can update the certificate by editing the value of the ca-cert.pem key in the cloud-provider-config keymap. On a command line, run: USD oc edit configmap -n openshift-config cloud-provider-config Place the clouds.yaml file in one of the following locations: The value of the OS_CLIENT_CONFIG_FILE environment variable The current directory A Unix-specific user configuration directory, for example ~/.config/openstack/clouds.yaml A Unix-specific site configuration directory, for example /etc/openstack/clouds.yaml The installation program searches for clouds.yaml in that order. 6.13. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on Red Hat OpenStack Platform (RHOSP). Prerequisites Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Obtain service principal permissions at the subscription level. Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. When specifying the directory: Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory. Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. 
However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Note Always delete the ~/.powervs directory to avoid reusing a stale configuration. Run the following command: USD rm -rf ~/.powervs At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select openstack as the platform to target. Specify the Red Hat OpenStack Platform (RHOSP) external network name to use for installing the cluster. Specify the floating IP address to use for external access to the OpenShift API. Specify a RHOSP flavor with at least 16 GB RAM to use for control plane nodes and 8 GB RAM for compute nodes. Select the base domain to deploy the cluster to. All DNS records will be sub-domains of this base and will also include the cluster name. Enter a name for your cluster. The name must be 14 or fewer characters long. Paste the pull secret from the Red Hat OpenShift Cluster Manager . Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section. Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. You now have the file install-config.yaml in the directory that you specified. 6.14. Installation configuration parameters Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe your account on the cloud platform that hosts your cluster and optionally customize your cluster's platform. When you create the install-config.yaml installation configuration file, you provide values for the required parameters through the command line. If you customize your cluster, you can modify the install-config.yaml file to provide more details about the platform. Note After installation, you cannot modify these parameters in the install-config.yaml file. 6.14.1. Required configuration parameters Required installation configuration parameters are described in the following table: Table 6.2. Required parameters Parameter Description Values apiVersion The API version for the install-config.yaml content. The current version is v1 . The installation program may also support older API versions. String baseDomain The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format. A fully-qualified domain or subdomain name, such as example.com . metadata Kubernetes resource ObjectMeta , from which only the name parameter is consumed. Object metadata.name The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}} . String of lowercase letters, hyphens ( - ), and periods ( . ), such as dev . The string must be 14 characters or fewer long. 
platform The configuration for the specific platform upon which to perform the installation: alibabacloud , aws , baremetal , azure , gcp , ibmcloud , nutanix , openstack , ovirt , powervs , vsphere , or {} . For additional information about platform.<platform> parameters, consult the table for your specific platform that follows. Object pullSecret Get a pull secret from the Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. { "auths":{ "cloud.openshift.com":{ "auth":"b3Blb=", "email":"[email protected]" }, "quay.io":{ "auth":"b3Blb=", "email":"[email protected]" } } } 6.14.2. Network configuration parameters You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults. Only IPv4 addresses are supported. Note Globalnet is not supported with Red Hat OpenShift Data Foundation disaster recovery solutions. For regional disaster recovery scenarios, ensure that you use a nonoverlapping range of private IP addresses for the cluster and service networks in each cluster. Table 6.3. Network parameters Parameter Description Values networking The configuration for the cluster network. Object Note You cannot modify parameters specified by the networking object after installation. networking.networkType The Red Hat OpenShift Networking network plugin to install. Either OpenShiftSDN or OVNKubernetes . OpenShiftSDN is a CNI plugin for all-Linux networks. OVNKubernetes is a CNI plugin for Linux networks and hybrid networks that contain both Linux and Windows servers. The default value is OVNKubernetes . networking.clusterNetwork The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23 . If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 networking.clusterNetwork.cidr Required if you use networking.clusterNetwork . An IP address block. An IPv4 network. An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32 . networking.clusterNetwork.hostPrefix The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr . A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses. A subnet prefix. The default value is 23 . networking.serviceNetwork The IP address block for services. The default value is 172.30.0.0/16 . The OpenShift SDN and OVN-Kubernetes network plugins support only a single IP address block for the service network. An array with an IP address block in CIDR format. For example: networking: serviceNetwork: - 172.30.0.0/16 networking.machineNetwork The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: machineNetwork: - cidr: 10.0.0.0/16 networking.machineNetwork.cidr Required if you use networking.machineNetwork . An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt and IBM Power Virtual Server. For libvirt, the default value is 192.168.126.0/24 . For IBM Power Virtual Server, the default value is 192.168.0.0/24 . An IP network block in CIDR notation. 
For example, 10.0.0.0/16 . Note Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. 6.14.3. Optional configuration parameters Optional installation configuration parameters are described in the following table: Table 6.4. Optional parameters Parameter Description Values additionalTrustBundle A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. String capabilities Controls the installation of optional core cluster components. You can reduce the footprint of your OpenShift Container Platform cluster by disabling optional components. For more information, see the "Cluster capabilities" page in Installing . String array capabilities.baselineCapabilitySet Selects an initial set of optional capabilities to enable. Valid values are None , v4.11 , v4.12 and vCurrent . The default value is vCurrent . String capabilities.additionalEnabledCapabilities Extends the set of optional capabilities beyond what you specify in baselineCapabilitySet . You may specify multiple capabilities in this parameter. String array cpuPartitioningMode Enables workload partitioning, which isolates OpenShift Container Platform services, cluster management workloads, and infrastructure pods to run on a reserved set of CPUs. Workload partitioning can only be enabled during installation and cannot be disabled after installation. While this field enables workload partitioning, it does not configure workloads to use specific CPUs. For more information, see the Workload partitioning page in the Scalability and Performance section. None or AllNodes . None is the default value. compute The configuration for the machines that comprise the compute nodes. Array of MachinePool objects. compute.architecture Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 (the default). String compute: hyperthreading: Whether to enable or disable simultaneous multithreading, or hyperthreading , on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled compute.name Required if you use compute . The name of the machine pool. worker compute.platform Required if you use compute . Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , nutanix , openstack , ovirt , powervs , vsphere , or {} compute.replicas The number of compute machines, which are also known as worker machines, to provision. A positive integer greater than or equal to 2 . The default value is 3 . featureSet Enables the cluster for a feature set. A feature set is a collection of OpenShift Container Platform features that are not enabled by default. For more information about enabling a feature set during installation, see "Enabling features using feature gates". String. The name of the feature set to enable, such as TechPreviewNoUpgrade . controlPlane The configuration for the machines that comprise the control plane. Array of MachinePool objects. 
controlPlane.architecture Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 (the default). String controlPlane: hyperthreading: Whether to enable or disable simultaneous multithreading, or hyperthreading , on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled controlPlane.name Required if you use controlPlane . The name of the machine pool. master controlPlane.platform Required if you use controlPlane . Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , nutanix , openstack , ovirt , powervs , vsphere , or {} controlPlane.replicas The number of control plane machines to provision. The only supported value is 3 , which is the default value. credentialsMode The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. Note Not all CCO modes are supported for all cloud providers. For more information about CCO modes, see the Cloud Credential Operator entry in the Cluster Operators reference content. Note If your AWS account has service control policies (SCP) enabled, you must configure the credentialsMode parameter to Mint , Passthrough or Manual . Mint , Passthrough , Manual or an empty string ( "" ). imageContentSources Sources and repositories for the release-image content. Array of objects. Includes a source and, optionally, mirrors , as described in the following rows of this table. imageContentSources.source Required if you use imageContentSources . Specify the repository that users refer to, for example, in image pull specifications. String imageContentSources.mirrors Specify one or more repositories that may also contain the same images. Array of strings publish How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. Internal or External . The default value is External . Setting this field to Internal is not supported on non-cloud platforms. Important If the value of the field is set to Internal , the cluster will become non-functional. For more information, refer to BZ#1953035 . sshKey The SSH key to authenticate access to your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. For example, sshKey: ssh-ed25519 AAAA.. . Not all CCO modes are supported for all cloud providers. For more information about CCO modes, see the "Managing cloud provider credentials" entry in the Authentication and authorization content. 6.14.4. Additional Red Hat OpenStack Platform (RHOSP) configuration parameters Additional RHOSP configuration parameters are described in the following table: Table 6.5. Additional RHOSP parameters Parameter Description Values compute.platform.openstack.rootVolume.size For compute machines, the size in gigabytes of the root volume. 
If you do not set this value, machines use ephemeral storage. Integer, for example 30 . compute.platform.openstack.rootVolume.type For compute machines, the root volume's type. String, for example performance . controlPlane.platform.openstack.rootVolume.size For control plane machines, the size in gigabytes of the root volume. If you do not set this value, machines use ephemeral storage. Integer, for example 30 . controlPlane.platform.openstack.rootVolume.type For control plane machines, the root volume's type. String, for example performance . platform.openstack.cloud The name of the RHOSP cloud to use from the list of clouds in the clouds.yaml file. String, for example MyCloud . platform.openstack.externalNetwork The RHOSP external network name to be used for installation. String, for example external . platform.openstack.computeFlavor The RHOSP flavor to use for control plane and compute machines. This property is deprecated. To use a flavor as the default for all machine pools, add it as the value of the type key in the platform.openstack.defaultMachinePlatform property. You can also set a flavor value for each machine pool individually. String, for example m1.xlarge . 6.14.5. Optional RHOSP configuration parameters Optional RHOSP configuration parameters are described in the following table: Table 6.6. Optional RHOSP parameters Parameter Description Values compute.platform.openstack.additionalNetworkIDs Additional networks that are associated with compute machines. Allowed address pairs are not created for additional networks. A list of one or more UUIDs as strings. For example, fa806b2f-ac49-4bce-b9db-124bc64209bf . compute.platform.openstack.additionalSecurityGroupIDs Additional security groups that are associated with compute machines. A list of one or more UUIDs as strings. For example, 7ee219f3-d2e9-48a1-96c2-e7429f1b0da7 . compute.platform.openstack.zones RHOSP Compute (Nova) availability zones (AZs) to install machines on. If this parameter is not set, the installation program relies on the default settings for Nova that the RHOSP administrator configured. On clusters that use Kuryr, RHOSP Octavia does not support availability zones. Load balancers and, if you are using the Amphora provider driver, OpenShift Container Platform services that rely on Amphora VMs, are not created according to the value of this property. A list of strings. For example, ["zone-1", "zone-2"] . compute.platform.openstack.rootVolume.zones For compute machines, the availability zone to install root volumes on. If you do not set a value for this parameter, the installation program selects the default availability zone. A list of strings, for example ["zone-1", "zone-2"] . compute.platform.openstack.serverGroupPolicy Server group policy to apply to the group that will contain the compute machines in the pool. You cannot change server group policies or affiliations after creation. Supported options include anti-affinity , soft-affinity , and soft-anti-affinity . The default value is soft-anti-affinity . An affinity policy prevents migrations and therefore affects RHOSP upgrades. The affinity policy is not supported. If you use a strict anti-affinity policy, an additional RHOSP host is required during instance migration. A server group policy to apply to the machine pool. For example, soft-affinity . controlPlane.platform.openstack.additionalNetworkIDs Additional networks that are associated with control plane machines. Allowed address pairs are not created for additional networks. 
Additional networks that are attached to a control plane machine are also attached to the bootstrap node. A list of one or more UUIDs as strings. For example, fa806b2f-ac49-4bce-b9db-124bc64209bf . controlPlane.platform.openstack.additionalSecurityGroupIDs Additional security groups that are associated with control plane machines. A list of one or more UUIDs as strings. For example, 7ee219f3-d2e9-48a1-96c2-e7429f1b0da7 . controlPlane.platform.openstack.zones RHOSP Compute (Nova) availability zones (AZs) to install machines on. If this parameter is not set, the installation program relies on the default settings for Nova that the RHOSP administrator configured. On clusters that use Kuryr, RHOSP Octavia does not support availability zones. Load balancers and, if you are using the Amphora provider driver, OpenShift Container Platform services that rely on Amphora VMs, are not created according to the value of this property. A list of strings. For example, ["zone-1", "zone-2"] . controlPlane.platform.openstack.rootVolume.zones For control plane machines, the availability zone to install root volumes on. If you do not set this value, the installation program selects the default availability zone. A list of strings, for example ["zone-1", "zone-2"] . controlPlane.platform.openstack.serverGroupPolicy Server group policy to apply to the group that will contain the control plane machines in the pool. You cannot change server group policies or affiliations after creation. Supported options include anti-affinity , soft-affinity , and soft-anti-affinity . The default value is soft-anti-affinity . An affinity policy prevents migrations, and therefore affects RHOSP upgrades. The affinity policy is not supported. If you use a strict anti-affinity policy, an additional RHOSP host is required during instance migration. A server group policy to apply to the machine pool. For example, soft-affinity . platform.openstack.clusterOSImage The location from which the installation program downloads the RHCOS image. You must set this parameter to perform an installation in a restricted network. An HTTP or HTTPS URL, optionally with an SHA-256 checksum. For example, http://mirror.example.com/images/rhcos-43.81.201912131630.0-openstack.x86_64.qcow2.gz?sha256=ffebbd68e8a1f2a245ca19522c16c86f67f9ac8e4e0c1f0a812b068b16f7265d . The value can also be the name of an existing Glance image, for example my-rhcos . platform.openstack.clusterOSImageProperties Properties to add to the installer-uploaded ClusterOSImage in Glance. This property is ignored if platform.openstack.clusterOSImage is set to an existing Glance image. You can use this property to exceed the default persistent volume (PV) limit for RHOSP of 26 PVs per node. To exceed the limit, set the hw_scsi_model property value to virtio-scsi and the hw_disk_bus value to scsi . You can also use this property to enable the QEMU guest agent by including the hw_qemu_guest_agent property with a value of yes . A list of key-value string pairs. For example, ["hw_scsi_model": "virtio-scsi", "hw_disk_bus": "scsi"] . platform.openstack.defaultMachinePlatform The default machine pool platform configuration. { "type": "ml.large", "rootVolume": { "size": 30, "type": "performance" } } platform.openstack.ingressFloatingIP An existing floating IP address to associate with the Ingress port. To use this property, you must also define the platform.openstack.externalNetwork property. An IP address, for example 128.0.0.1 . 
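Several of the optional parameters in this table map directly onto YAML paths in the install-config.yaml file. As a point of reference, the following is a minimal, hedged sketch of how a few of the options described so far might be combined; all IDs, zone names, flavor names, and addresses are illustrative placeholders taken from the examples in these tables, none of these fields are required, and you should set only the options that apply to your environment:
# Illustrative values only; not a complete install-config.yaml file.
compute:
- name: worker
  replicas: 3
  platform:
    openstack:
      zones: ["zone-1", "zone-2"]
      additionalNetworkIDs:
      - fa806b2f-ac49-4bce-b9db-124bc64209bf
      rootVolume:
        size: 30
        type: performance
        zones: ["zone-1", "zone-2"]
      serverGroupPolicy: soft-anti-affinity
platform:
  openstack:
    clusterOSImage: my-rhcos
    defaultMachinePlatform:
      type: m1.xlarge
    ingressFloatingIP: 128.0.0.1
The remaining optional parameters continue in the rows that follow.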
platform.openstack.apiFloatingIP An existing floating IP address to associate with the API load balancer. To use this property, you must also define the platform.openstack.externalNetwork property. An IP address, for example 128.0.0.1 . platform.openstack.externalDNS IP addresses for external DNS servers that cluster instances use for DNS resolution. A list of IP addresses as strings. For example, ["8.8.8.8", "192.168.1.12"] . platform.openstack.loadbalancer Whether or not to use the default, internal load balancer. If the value is set to UserManaged , this default load balancer is disabled so that you can deploy a cluster that uses an external, user-managed load balancer. If the parameter is not set, or if the value is OpenShiftManagedDefault , the cluster uses the default load balancer. UserManaged or OpenShiftManagedDefault . platform.openstack.machinesSubnet The UUID of a RHOSP subnet that the cluster's nodes use. Nodes and virtual IP (VIP) ports are created on this subnet. The first item in networking.machineNetwork must match the value of machinesSubnet . If you deploy to a custom subnet, you cannot specify an external DNS server to the OpenShift Container Platform installer. Instead, add DNS to the subnet in RHOSP . A UUID as a string. For example, fa806b2f-ac49-4bce-b9db-124bc64209bf . 6.14.6. RHOSP parameters for failure domains Important RHOSP failure domains is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Red Hat OpenStack Platform (RHOSP) deployments do not have a single implementation of failure domains. Instead, availability zones are defined individually for each service, such as the compute service, Nova; the networking service, Neutron; and the storage service, Cinder. Beginning with OpenShift Container Platform 4.13, there is a unified definition of failure domains for RHOSP deployments that covers all supported availability zone types. You can use failure domains to control related aspects of Nova, Neutron, and Cinder configurations from a single place. In RHOSP, a port describes a network connection and maps to an interface inside a compute machine. A port also: Is defined by a network or by one or more subnets Connects a machine to one or more subnets Failure domains group the services of your deployment by using ports. If you use failure domains, each machine connects to: The portTarget object with the ID control-plane while that object exists. All non-control-plane portTarget objects within its own failure domain. All networks in the machine pool's additionalNetworkIDs list. To configure failure domains for a machine pool, edit availability zone and port target parameters under controlPlane.platform.openstack.failureDomains . Table 6.7. RHOSP parameters for failure domains Parameter Description Values platform.openstack.failuredomains.computeAvailabilityZone An availability zone for the server. If not specified, the cluster default is used. The name of the availability zone. For example, nova-1 . platform.openstack.failuredomains.storageAvailabilityZone An availability zone for the root volume.
If not specified, the cluster default is used. The name of the availability zone. For example, cinder-1 . platform.openstack.failuredomains.portTargets A list of portTarget objects, each of which defines a network connection to attach to machines within a failure domain. A list of portTarget objects. platform.openstack.failuredomains.portTargets.portTarget.id The ID of an individual port target. To select that port target as the first network for machines, set the value of this parameter to control-plane . If this parameter has a different value, it is ignored. control-plane or an arbitrary string. platform.openstack.failuredomains.portTargets.portTarget.network Required. The name or ID of the network to attach to machines in the failure domain. A network object that contains either a name or UUID. For example: network: id: 8db6a48e-375b-4caa-b20b-5b9a7218bfe6 or: network: name: my-network-1 platform.openstack.failuredomains.portTargets.portTarget.fixedIPs Subnets to allocate fixed IP addresses to. These subnets must exist within the same network as the port. A list of subnet objects. Note You cannot combine zone fields and failure domains. If you want to use failure domains, the controlPlane.zone and controlPlane.rootVolume.zone fields must be left unset. 6.14.7. Custom subnets in RHOSP deployments Optionally, you can deploy a cluster on a Red Hat OpenStack Platform (RHOSP) subnet of your choice. The subnet's GUID is passed as the value of platform.openstack.machinesSubnet in the install-config.yaml file. This subnet is used as the cluster's primary subnet. By default, nodes and ports are created on it. You can create nodes and ports on a different RHOSP subnet by setting the value of the platform.openstack.machinesSubnet property to the subnet's UUID. Before you run the OpenShift Container Platform installer with a custom subnet, verify that your configuration meets the following requirements: The subnet that is used by platform.openstack.machinesSubnet has DHCP enabled. The CIDR of platform.openstack.machinesSubnet matches the CIDR of networking.machineNetwork . The installation program user has permission to create ports on this network, including ports with fixed IP addresses. Clusters that use custom subnets have the following limitations: If you plan to install a cluster that uses floating IP addresses, the platform.openstack.machinesSubnet subnet must be attached to a router that is connected to the externalNetwork network. If the platform.openstack.machinesSubnet value is set in the install-config.yaml file, the installation program does not create a private network or subnet for your RHOSP machines. You cannot use the platform.openstack.externalDNS property at the same time as a custom subnet. To add DNS to a cluster that uses a custom subnet, configure DNS on the RHOSP network. Note By default, the API VIP takes x.x.x.5 and the Ingress VIP takes x.x.x.7 from your network's CIDR block. To override these default values, set values for platform.openstack.apiVIPs and platform.openstack.ingressVIPs that are outside of the DHCP allocation pool. Important The CIDR ranges for networks are not adjustable after cluster installation. Red Hat does not provide direct guidance on determining the range during cluster installation because it requires careful consideration of the number of created pods per namespace. 6.14.8. 
Sample customized install-config.yaml file for RHOSP with Kuryr To deploy with Kuryr SDN instead of the default OVN-Kubernetes network plugin, you must modify the install-config.yaml file to include Kuryr as the desired networking.networkType . This sample install-config.yaml demonstrates all of the possible Red Hat OpenStack Platform (RHOSP) customization options. Important This sample file is provided for reference only. You must obtain your install-config.yaml file by using the installation program. apiVersion: v1 baseDomain: example.com controlPlane: name: master platform: {} replicas: 3 compute: - name: worker platform: openstack: type: ml.large replicas: 3 metadata: name: example networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 serviceNetwork: - 172.30.0.0/16 1 networkType: Kuryr 2 platform: openstack: cloud: mycloud externalNetwork: external computeFlavor: m1.xlarge apiFloatingIP: 128.0.0.1 trunkSupport: true 3 octaviaSupport: true 4 pullSecret: '{"auths": ...}' sshKey: ssh-ed25519 AAAA... 1 The Amphora Octavia driver creates two ports per load balancer. As a result, the service subnet that the installer creates is twice the size of the CIDR that is specified as the value of the serviceNetwork property. The larger range is required to prevent IP address conflicts. 2 The cluster network plugin to install. The supported values are Kuryr , OVNKubernetes , and OpenShiftSDN . The default value is OVNKubernetes . 3 4 Both trunkSupport and octaviaSupport are automatically discovered by the installer, so there is no need to set them. But if your environment does not meet both requirements, Kuryr SDN will not properly work. Trunks are needed to connect the pods to the RHOSP network and Octavia is required to create the OpenShift Container Platform services. 6.14.9. Example installation configuration section that uses failure domains Important RHOSP failure domains is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . The following section of an install-config.yaml file demonstrates the use of failure domains in a cluster to deploy on Red Hat OpenStack Platform (RHOSP): # ... controlPlane: name: master platform: openstack: type: m1.large failureDomains: - computeAvailabilityZone: 'nova-1' storageAvailabilityZone: 'cinder-1' portTargets: - id: control-plane network: id: 8db6a48e-375b-4caa-b20b-5b9a7218bfe6 - computeAvailabilityZone: 'nova-2' storageAvailabilityZone: 'cinder-2' portTargets: - id: control-plane network: id: 39a7b82a-a8a4-45a4-ba5a-288569a6edd1 - computeAvailabilityZone: 'nova-3' storageAvailabilityZone: 'cinder-3' portTargets: - id: control-plane network: id: 8e4b4e0d-3865-4a9b-a769-559270271242 featureSet: TechPreviewNoUpgrade # ... 6.14.10. Cluster deployment on RHOSP provider networks You can deploy your OpenShift Container Platform clusters on Red Hat OpenStack Platform (RHOSP) with a primary network interface on a provider network. Provider networks are commonly used to give projects direct access to a public network that can be used to reach the internet. 
You can also share provider networks among projects as part of the network creation process. RHOSP provider networks map directly to an existing physical network in the data center. A RHOSP administrator must create them. In a typical deployment of this kind, OpenShift Container Platform workloads are connected to a data center by using a provider network. OpenShift Container Platform clusters that are installed on provider networks do not require tenant networks or floating IP addresses. The installer does not create these resources during installation. Example provider network types include flat (untagged) and VLAN (802.1Q tagged). Note A cluster can support as many provider network connections as the network type allows. For example, VLAN networks typically support up to 4096 connections. You can learn more about provider and tenant networks in the RHOSP documentation . 6.14.10.1. RHOSP provider network requirements for cluster installation Before you install an OpenShift Container Platform cluster, your Red Hat OpenStack Platform (RHOSP) deployment and provider network must meet a number of conditions: The RHOSP networking service (Neutron) is enabled and accessible through the RHOSP networking API. The RHOSP networking service has the port security and allowed address pairs extensions enabled . The provider network can be shared with other tenants. Tip Use the openstack network create command with the --share flag to create a network that can be shared. The RHOSP project that you use to install the cluster must own the provider network, as well as an appropriate subnet. Tip To create a network for a project that is named "openshift," enter the following command: USD openstack network create --project openshift To create a subnet for a project that is named "openshift," enter the following command: USD openstack subnet create --project openshift To learn more about creating networks on RHOSP, read the provider networks documentation . If the cluster is owned by the admin user, you must run the installer as that user to create ports on the network. Important Provider networks must be owned by the RHOSP project that is used to create the cluster. If they are not, the RHOSP Compute service (Nova) cannot request a port from that network. Verify that the provider network can reach the RHOSP metadata service IP address, which is 169.254.169.254 by default. Depending on your RHOSP SDN and networking service configuration, you might need to provide the route when you create the subnet. For example: USD openstack subnet create --dhcp --host-route destination=169.254.169.254/32,gateway=192.0.2.2 ... Optional: To secure the network, create role-based access control (RBAC) rules that limit network access to a single project. 6.14.10.2. Deploying a cluster that has a primary interface on a provider network You can deploy an OpenShift Container Platform cluster that has its primary network interface on a Red Hat OpenStack Platform (RHOSP) provider network. Prerequisites Your Red Hat OpenStack Platform (RHOSP) deployment is configured as described in "RHOSP provider network requirements for cluster installation". Procedure In a text editor, open the install-config.yaml file. Set the value of the platform.openstack.apiVIPs property to the IP address for the API VIP. Set the value of the platform.openstack.ingressVIPs property to the IP address for the Ingress VIP. Set the value of the platform.openstack.machinesSubnet property to the UUID of the provider network subnet.
Set the value of the networking.machineNetwork.cidr property to the CIDR block of the provider network subnet. Important The platform.openstack.apiVIPs and platform.openstack.ingressVIPs properties must both be unassigned IP addresses from the networking.machineNetwork.cidr block. Section of an installation configuration file for a cluster that relies on a RHOSP provider network ... platform: openstack: apiVIPs: 1 - 192.0.2.13 ingressVIPs: 2 - 192.0.2.23 machinesSubnet: fa806b2f-ac49-4bce-b9db-124bc64209bf # ... networking: machineNetwork: - cidr: 192.0.2.0/24 1 2 In OpenShift Container Platform 4.12 and later, the apiVIP and ingressVIP configuration settings are deprecated. Instead, use a list format to enter values in the apiVIPs and ingressVIPs configuration settings. Warning You cannot set the platform.openstack.externalNetwork or platform.openstack.externalDNS parameters while using a provider network for the primary network interface. When you deploy the cluster, the installer uses the install-config.yaml file to deploy the cluster on the provider network. Tip You can add additional networks, including provider networks, to the platform.openstack.additionalNetworkIDs list. After you deploy your cluster, you can attach pods to additional networks. For more information, see Understanding multiple networks . 6.14.11. Kuryr ports pools A Kuryr ports pool maintains a number of ports on standby for pod creation. Keeping ports on standby minimizes pod creation time. Without ports pools, Kuryr must explicitly request port creation or deletion whenever a pod is created or deleted. The Neutron ports that Kuryr uses are created in subnets that are tied to namespaces. These pod ports are also added as subports to the primary port of OpenShift Container Platform cluster nodes. Because Kuryr keeps each namespace in a separate subnet, a separate ports pool is maintained for each namespace-worker pair. Prior to installing a cluster, you can set the following parameters in the cluster-network-03-config.yml manifest file to configure ports pool behavior: The enablePortPoolsPrepopulation parameter controls pool prepopulation, which forces Kuryr to add Neutron ports to the pools when the first pod that is configured to use the dedicated network for pods is created in a namespace. The default value is false . The poolMinPorts parameter is the minimum number of free ports that are kept in the pool. The default value is 1 . The poolMaxPorts parameter is the maximum number of free ports that are kept in the pool. A value of 0 disables that upper bound. This is the default setting. If your OpenStack port quota is low, or you have a limited number of IP addresses on the pod network, consider setting this option to ensure that unneeded ports are deleted. The poolBatchPorts parameter defines the maximum number of Neutron ports that can be created at once. The default value is 3 . 6.14.12. Adjusting Kuryr ports pools during installation During installation, you can configure how Kuryr manages Red Hat OpenStack Platform (RHOSP) Neutron ports to control the speed and efficiency of pod creation. Prerequisites Create and modify the install-config.yaml file. Procedure From a command line, create the manifest files: USD ./openshift-install create manifests --dir <installation_directory> 1 1 For <installation_directory> , specify the name of the directory that contains the install-config.yaml file for your cluster. 
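Optionally, before adding the Kuryr customization, you can confirm which network type was carried into the generated manifests. This quick check is not part of the documented procedure; it simply searches the manifest files for the networkType field:
$ grep -r networkType <installation_directory>/manifests/
If you modified the network type in install-config.yaml as described earlier, the value that is reported should be Kuryr .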
Create a file that is named cluster-network-03-config.yml in the <installation_directory>/manifests/ directory: USD touch <installation_directory>/manifests/cluster-network-03-config.yml 1 1 For <installation_directory> , specify the directory name that contains the manifests/ directory for your cluster. After creating the file, several network configuration files are in the manifests/ directory, as shown: USD ls <installation_directory>/manifests/cluster-network-* Example output cluster-network-01-crd.yml cluster-network-02-config.yml cluster-network-03-config.yml Open the cluster-network-03-config.yml file in an editor, and enter a custom resource (CR) that describes the Cluster Network Operator configuration that you want: USD oc edit networks.operator.openshift.io cluster Edit the settings to meet your requirements. The following file is provided as an example: apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 serviceNetwork: - 172.30.0.0/16 defaultNetwork: type: Kuryr kuryrConfig: enablePortPoolsPrepopulation: false 1 poolMinPorts: 1 2 poolBatchPorts: 3 3 poolMaxPorts: 5 4 openstackServiceNetwork: 172.30.0.0/15 5 1 Set enablePortPoolsPrepopulation to true to make Kuryr create new Neutron ports when the first pod on the network for pods is created in a namespace. This setting raises the Neutron ports quota but can reduce the time that is required to spawn pods. The default value is false . 2 Kuryr creates new ports for a pool if the number of free ports in that pool is lower than the value of poolMinPorts . The default value is 1 . 3 poolBatchPorts controls the number of new ports that are created if the number of free ports is lower than the value of poolMinPorts . The default value is 3 . 4 If the number of free ports in a pool is higher than the value of poolMaxPorts , Kuryr deletes them until the number matches that value. Setting this value to 0 disables this upper bound, preventing pools from shrinking. The default value is 0 . 5 The openStackServiceNetwork parameter defines the CIDR range of the network from which IP addresses are allocated to RHOSP Octavia's LoadBalancers. If this parameter is used with the Amphora driver, Octavia takes two IP addresses from this network for each load balancer: one for OpenShift and the other for VRRP connections. Because these IP addresses are managed by OpenShift Container Platform and Neutron respectively, they must come from different pools. Therefore, the value of openStackServiceNetwork must be at least twice the size of the value of serviceNetwork , and the value of serviceNetwork must overlap entirely with the range that is defined by openStackServiceNetwork . The CNO verifies that VRRP IP addresses that are taken from the range that is defined by this parameter do not overlap with the range that is defined by the serviceNetwork parameter. If this parameter is not set, the CNO uses an expanded value of serviceNetwork that is determined by decrementing the prefix size by 1. Save the cluster-network-03-config.yml file, and exit the text editor. Optional: Back up the manifests/cluster-network-03-config.yml file. The installation program deletes the manifests/ directory while creating the cluster. 6.14.13. Setting a custom subnet for machines The IP range that the installation program uses by default might not match the Neutron subnet that you create when you install OpenShift Container Platform. 
If necessary, update the CIDR value for new machines by editing the installation configuration file. Prerequisites You have the install-config.yaml file that was generated by the OpenShift Container Platform installation program. Procedure On a command line, browse to the directory that contains install-config.yaml . From that directory, either run a script to edit the install-config.yaml file or update the file manually: To set the value by using a script, run: USD python -c ' import yaml; path = "install-config.yaml"; data = yaml.safe_load(open(path)); data["networking"]["machineNetwork"] = [{"cidr": "192.168.0.0/18"}]; 1 open(path, "w").write(yaml.dump(data, default_flow_style=False))' 1 Insert a value that matches your intended Neutron subnet, for example 192.0.2.0/24 . To set the value manually, open the file and set the cidr value under networking.machineNetwork to match your intended Neutron subnet. 6.14.14. Emptying compute machine pools To proceed with an installation that uses your own infrastructure, set the number of compute machines in the installation configuration file to zero. Later, you create these machines manually. Prerequisites You have the install-config.yaml file that was generated by the OpenShift Container Platform installation program. Procedure On a command line, browse to the directory that contains install-config.yaml . From that directory, either run a script to edit the install-config.yaml file or update the file manually: To set the value by using a script, run: USD python -c ' import yaml; path = "install-config.yaml"; data = yaml.safe_load(open(path)); data["compute"][0]["replicas"] = 0; open(path, "w").write(yaml.dump(data, default_flow_style=False))' To set the value manually, open the file and set the value of compute.<first entry>.replicas to 0 . 6.14.15. Modifying the network type By default, the installation program selects the OVN-Kubernetes network type. To use Kuryr instead, change the value in the installation configuration file that the program generated. Prerequisites You have the file install-config.yaml that was generated by the OpenShift Container Platform installation program. Procedure On a command line, browse to the directory that contains install-config.yaml . From that directory, either run a script to edit the install-config.yaml file or update the file manually: To set the value by using a script, run: USD python -c ' import yaml; path = "install-config.yaml"; data = yaml.safe_load(open(path)); data["networking"]["networkType"] = "Kuryr"; open(path, "w").write(yaml.dump(data, default_flow_style=False))' To set the value manually, open the file and set networking.networkType to "Kuryr" . 6.15. Creating the Kubernetes manifest and Ignition config files Because you must modify some cluster definition files and manually start the cluster machines, you must generate the Kubernetes manifest and Ignition config files that the cluster needs to configure the machines. The installation configuration file transforms into the Kubernetes manifests. The manifests wrap into the Ignition configuration files, which are later used to configure the cluster machines. Important The Ignition config files that the OpenShift Container Platform installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates.
The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Prerequisites You obtained the OpenShift Container Platform installation program. You created the install-config.yaml installation configuration file. Procedure Change to the directory that contains the OpenShift Container Platform installation program and generate the Kubernetes manifests for the cluster: USD ./openshift-install create manifests --dir <installation_directory> 1 1 For <installation_directory> , specify the installation directory that contains the install-config.yaml file you created. Remove the Kubernetes manifest files that define the control plane machines and compute machine sets: USD rm -f openshift/99_openshift-cluster-api_master-machines-*.yaml openshift/99_openshift-cluster-api_worker-machineset-*.yaml Because you create and manage these resources yourself, you do not have to initialize them. You can preserve the compute machine set files to create compute machines by using the machine API, but you must update references to them to match your environment. Check that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml Kubernetes manifest file is set to false . This setting prevents pods from being scheduled on the control plane machines: Open the <installation_directory>/manifests/cluster-scheduler-02-config.yml file. Locate the mastersSchedulable parameter and ensure that it is set to false . Save and exit the file. To create the Ignition configuration files, run the following command from the directory that contains the installation program: USD ./openshift-install create ignition-configs --dir <installation_directory> 1 1 For <installation_directory> , specify the same installation directory. Ignition config files are created for the bootstrap, control plane, and compute nodes in the installation directory. The kubeadmin-password and kubeconfig files are created in the ./<installation_directory>/auth directory: Export the metadata file's infraID key as an environment variable: USD export INFRA_ID=USD(jq -r .infraID metadata.json) Tip Extract the infraID key from metadata.json and use it as a prefix for all of the RHOSP resources that you create. By doing so, you avoid name conflicts when making multiple deployments in the same project. 6.16. Preparing the bootstrap Ignition files The OpenShift Container Platform installation process relies on bootstrap machines that are created from a bootstrap Ignition configuration file. Edit the file and upload it. Then, create a secondary bootstrap Ignition configuration file that Red Hat OpenStack Platform (RHOSP) uses to download the primary file. Prerequisites You have the bootstrap Ignition file that the installer program generates, bootstrap.ign . The infrastructure ID from the installer's metadata file is set as an environment variable ( USDINFRA_ID ). If the variable is not set, see Creating the Kubernetes manifest and Ignition config files . 
You have an HTTP(S)-accessible way to store the bootstrap Ignition file. The documented procedure uses the RHOSP image service (Glance), but you can also use the RHOSP storage service (Swift), Amazon S3, an internal HTTP server, or an ad hoc Nova server. Procedure Run the following Python script. The script modifies the bootstrap Ignition file to set the hostname and, if available, CA certificate file when it runs: import base64 import json import os with open('bootstrap.ign', 'r') as f: ignition = json.load(f) files = ignition['storage'].get('files', []) infra_id = os.environ.get('INFRA_ID', 'openshift').encode() hostname_b64 = base64.standard_b64encode(infra_id + b'-bootstrap\n').decode().strip() files.append( { 'path': '/etc/hostname', 'mode': 420, 'contents': { 'source': 'data:text/plain;charset=utf-8;base64,' + hostname_b64 } }) ca_cert_path = os.environ.get('OS_CACERT', '') if ca_cert_path: with open(ca_cert_path, 'r') as f: ca_cert = f.read().encode() ca_cert_b64 = base64.standard_b64encode(ca_cert).decode().strip() files.append( { 'path': '/opt/openshift/tls/cloud-ca-cert.pem', 'mode': 420, 'contents': { 'source': 'data:text/plain;charset=utf-8;base64,' + ca_cert_b64 } }) ignition['storage']['files'] = files; with open('bootstrap.ign', 'w') as f: json.dump(ignition, f) Using the RHOSP CLI, create an image that uses the bootstrap Ignition file: USD openstack image create --disk-format=raw --container-format=bare --file bootstrap.ign <image_name> Get the image's details: USD openstack image show <image_name> Make a note of the file value; it follows the pattern v2/images/<image_ID>/file . Note Verify that the image you created is active. Retrieve the image service's public address: USD openstack catalog show image Combine the public address with the image file value and save the result as the storage location. The location follows the pattern <image_service_public_URL>/v2/images/<image_ID>/file . Generate an auth token and save the token ID: USD openstack token issue -c id -f value Insert the following content into a file called USDINFRA_ID-bootstrap-ignition.json and edit the placeholders to match your own values: { "ignition": { "config": { "merge": [{ "source": "<storage_url>", 1 "httpHeaders": [{ "name": "X-Auth-Token", 2 "value": "<token_ID>" 3 }] }] }, "security": { "tls": { "certificateAuthorities": [{ "source": "data:text/plain;charset=utf-8;base64,<base64_encoded_certificate>" 4 }] } }, "version": "3.2.0" } } 1 Replace the value of ignition.config.merge.source with the bootstrap Ignition file storage URL. 2 Set name in httpHeaders to "X-Auth-Token" . 3 Set value in httpHeaders to your token's ID. 4 If the bootstrap Ignition file server uses a self-signed certificate, include the base64-encoded certificate. Save the secondary Ignition config file. The bootstrap Ignition data will be passed to RHOSP during installation. Warning The bootstrap Ignition file contains sensitive information, like clouds.yaml credentials. Ensure that you store it in a secure place, and delete it after you complete the installation process. 6.17. Creating control plane Ignition config files on RHOSP Installing OpenShift Container Platform on Red Hat OpenStack Platform (RHOSP) on your own infrastructure requires control plane Ignition config files. You must create multiple config files. Note As with the bootstrap Ignition configuration, you must explicitly define a hostname for each control plane machine. 
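For orientation, the script in the following procedure satisfies that Note by appending an /etc/hostname entry to each control plane Ignition file. The fragment that it adds looks roughly like the following; the base64 payload shown here is only a placeholder for the encoded <INFRA_ID>-master-<index> hostname, so treat this as an illustration of the result rather than content to copy:
{
  "path": "/etc/hostname",
  "mode": 420,
  "contents": {
    "source": "data:text/plain;charset=utf-8;base64,<base64_encoded_hostname>",
    "verification": {}
  },
  "filesystem": "root"
}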
Prerequisites The infrastructure ID from the installation program's metadata file is set as an environment variable ( USDINFRA_ID ). If the variable is not set, see "Creating the Kubernetes manifest and Ignition config files". Procedure On a command line, run the following Python script: USD for index in USD(seq 0 2); do MASTER_HOSTNAME="USDINFRA_ID-master-USDindex\n" python -c "import base64, json, sys; ignition = json.load(sys.stdin); storage = ignition.get('storage', {}); files = storage.get('files', []); files.append({'path': '/etc/hostname', 'mode': 420, 'contents': {'source': 'data:text/plain;charset=utf-8;base64,' + base64.standard_b64encode(b'USDMASTER_HOSTNAME').decode().strip(), 'verification': {}}, 'filesystem': 'root'}); storage['files'] = files; ignition['storage'] = storage json.dump(ignition, sys.stdout)" <master.ign >"USDINFRA_ID-master-USDindex-ignition.json" done You now have three control plane Ignition files: <INFRA_ID>-master-0-ignition.json , <INFRA_ID>-master-1-ignition.json , and <INFRA_ID>-master-2-ignition.json . 6.18. Creating network resources on RHOSP Create the network resources that an OpenShift Container Platform on Red Hat OpenStack Platform (RHOSP) installation on your own infrastructure requires. To save time, run supplied Ansible playbooks that generate security groups, networks, subnets, routers, and ports. Prerequisites Python 3 is installed on your machine. You downloaded the modules in "Downloading playbook dependencies". You downloaded the playbooks in "Downloading the installation playbooks". Procedure Optional: Add an external network value to the inventory.yaml playbook: Example external network value in the inventory.yaml Ansible playbook ... # The public network providing connectivity to the cluster. If not # provided, the cluster external connectivity must be provided in another # way. # Required for os_api_fip, os_ingress_fip, os_bootstrap_fip. os_external_network: 'external' ... Important If you did not provide a value for os_external_network in the inventory.yaml file, you must ensure that VMs can access Glance and an external connection yourself. Optional: Add external network and floating IP (FIP) address values to the inventory.yaml playbook: Example FIP values in the inventory.yaml Ansible playbook ... # OpenShift API floating IP address. If this value is non-empty, the # corresponding floating IP will be attached to the Control Plane to # serve the OpenShift API. os_api_fip: '203.0.113.23' # OpenShift Ingress floating IP address. If this value is non-empty, the # corresponding floating IP will be attached to the worker nodes to serve # the applications. os_ingress_fip: '203.0.113.19' # If this value is non-empty, the corresponding floating IP will be # attached to the bootstrap machine. This is needed for collecting logs # in case of install failure. os_bootstrap_fip: '203.0.113.20' Important If you do not define values for os_api_fip and os_ingress_fip , you must perform postinstallation network configuration. If you do not define a value for os_bootstrap_fip , the installer cannot download debugging information from failed installations. See "Enabling access to the environment" for more information. 
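Optional: before running any of the playbooks, you might want to confirm that Ansible parses your edited inventory.yaml file as expected. This check is a convenience and is not part of the documented procedure:
$ ansible-inventory -i inventory.yaml --list
The command prints the inventory variables, including any os_external_network and os_*_fip values that you set, so you can catch typos before the playbooks consume them.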
On a command line, create security groups by running the security-groups.yaml playbook: USD ansible-playbook -i inventory.yaml security-groups.yaml On a command line, create a network, subnet, and router by running the network.yaml playbook: USD ansible-playbook -i inventory.yaml network.yaml Optional: If you want to control the default resolvers that Nova servers use, run the RHOSP CLI command: USD openstack subnet set --dns-nameserver <server_1> --dns-nameserver <server_2> "USDINFRA_ID-nodes" 6.19. Creating the bootstrap machine on RHOSP Create a bootstrap machine and give it the network access it needs to run on Red Hat OpenStack Platform (RHOSP). Red Hat provides an Ansible playbook that you run to simplify this process. Prerequisites You downloaded the modules in "Downloading playbook dependencies". You downloaded the playbooks in "Downloading the installation playbooks". The inventory.yaml , common.yaml , and bootstrap.yaml Ansible playbooks are in a common directory. The metadata.json file that the installation program created is in the same directory as the Ansible playbooks. Procedure On a command line, change the working directory to the location of the playbooks. On a command line, run the bootstrap.yaml playbook: USD ansible-playbook -i inventory.yaml bootstrap.yaml After the bootstrap server is active, view the logs to verify that the Ignition files were received: USD openstack console log show "USDINFRA_ID-bootstrap" 6.20. Creating the control plane machines on RHOSP Create three control plane machines by using the Ignition config files that you generated. Red Hat provides an Ansible playbook that you run to simplify this process. Prerequisites You downloaded the modules in "Downloading playbook dependencies". You downloaded the playbooks in "Downloading the installation playbooks". The infrastructure ID from the installation program's metadata file is set as an environment variable ( USDINFRA_ID ). The inventory.yaml , common.yaml , and control-plane.yaml Ansible playbooks are in a common directory. You have the three Ignition files that were created in "Creating control plane Ignition config files". Procedure On a command line, change the working directory to the location of the playbooks. If the control plane Ignition config files aren't already in your working directory, copy them into it. On a command line, run the control-plane.yaml playbook: USD ansible-playbook -i inventory.yaml control-plane.yaml Run the following command to monitor the bootstrapping process: USD openshift-install wait-for bootstrap-complete You will see messages that confirm that the control plane machines are running and have joined the cluster: INFO API v1.26.0 up INFO Waiting up to 30m0s for bootstrapping to complete... ... INFO It is now safe to remove the bootstrap resources 6.21. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 
Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 6.22. Deleting bootstrap resources from RHOSP Delete the bootstrap resources that you no longer need. Prerequisites You downloaded the modules in "Downloading playbook dependencies". You downloaded the playbooks in "Downloading the installation playbooks". The inventory.yaml , common.yaml , and down-bootstrap.yaml Ansible playbooks are in a common directory. The control plane machines are running. If you do not know the status of the machines, see "Verifying cluster status". Procedure On a command line, change the working directory to the location of the playbooks. On a command line, run the down-bootstrap.yaml playbook: USD ansible-playbook -i inventory.yaml down-bootstrap.yaml The bootstrap port, server, and floating IP address are deleted. Warning If you did not disable the bootstrap Ignition file URL earlier, do so now. 6.23. Creating compute machines on RHOSP After standing up the control plane, create compute machines. Red Hat provides an Ansible playbook that you run to simplify this process. Prerequisites You downloaded the modules in "Downloading playbook dependencies". You downloaded the playbooks in "Downloading the installation playbooks". The inventory.yaml , common.yaml , and compute-nodes.yaml Ansible playbooks are in a common directory. The metadata.json file that the installation program created is in the same directory as the Ansible playbooks. The control plane is active. Procedure On a command line, change the working directory to the location of the playbooks. On a command line, run the playbook: USD ansible-playbook -i inventory.yaml compute-nodes.yaml steps Approve the certificate signing requests for the machines. 6.24. Approving the certificate signing requests for your machines When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests. Prerequisites You added machines to your cluster. Procedure Confirm that the cluster recognizes the machines: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.26.0 master-1 Ready master 63m v1.26.0 master-2 Ready master 64m v1.26.0 The output lists all of the machines that you created. Note The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved. Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending ... In this example, two machines are joining the cluster. You might see more approved CSRs in the list. If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. 
You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters. Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec , oc rsh , and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note Some Operators might not become available until some CSRs are approved. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ... If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines: To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.26.0 master-1 Ready master 73m v1.26.0 master-2 Ready master 74m v1.26.0 worker-0 Ready worker 11m v1.26.0 worker-1 Ready worker 11m v1.26.0 Note It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status. Additional information For more information on CSRs, see Certificate Signing Requests . 6.25. Verifying a successful installation Verify that the OpenShift Container Platform installation is complete. Prerequisites You have the installation program ( openshift-install ) Procedure On a command line, enter: USD openshift-install --log-level debug wait-for install-complete The program outputs the console URL, as well as the administrator's login information. 6.26. 
Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.13, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager Hybrid Cloud Console . After you confirm that your OpenShift Cluster Manager Hybrid Cloud Console inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service 6.27. steps Customize your cluster . If necessary, you can opt out of remote health reporting . If you need to enable external access to node ports, configure ingress cluster traffic by using a node port . If you did not configure RHOSP to accept application traffic over floating IP addresses, configure RHOSP access with floating IP addresses .
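Before you move on to any of these next steps, a quick, optional health check can confirm that the cluster has settled after installation. These are standard OpenShift CLI commands and are not specific to RHOSP:
$ oc get clusteroperators
$ oc get nodes -o wide
All cluster Operators should report as available, and every node should be in the Ready state, before you begin post-installation configuration.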
[ "sudo openstack quota set --secgroups 250 --secgroup-rules 1000 --ports 1500 --subnets 250 --networks 250 <project>", "(undercloud) USD openstack overcloud container image prepare -e /usr/share/openstack-tripleo-heat-templates/environments/services-docker/octavia.yaml --namespace=registry.access.redhat.com/rhosp13 --push-destination=<local-ip-from-undercloud.conf>:8787 --prefix=openstack- --tag-from-label {version}-{product-version} --output-env-file=/home/stack/templates/overcloud_images.yaml --output-images-file /home/stack/local_registry_images.yaml", "- imagename: registry.access.redhat.com/rhosp13/openstack-octavia-api:13.0-43 push_destination: <local-ip-from-undercloud.conf>:8787 - imagename: registry.access.redhat.com/rhosp13/openstack-octavia-health-manager:13.0-45 push_destination: <local-ip-from-undercloud.conf>:8787 - imagename: registry.access.redhat.com/rhosp13/openstack-octavia-housekeeping:13.0-45 push_destination: <local-ip-from-undercloud.conf>:8787 - imagename: registry.access.redhat.com/rhosp13/openstack-octavia-worker:13.0-44 push_destination: <local-ip-from-undercloud.conf>:8787", "(undercloud) USD sudo openstack overcloud container image upload --config-file /home/stack/local_registry_images.yaml --verbose", "openstack overcloud deploy --templates -e /usr/share/openstack-tripleo-heat-templates/environments/services-docker/octavia.yaml -e octavia_timeouts.yaml", "openstack loadbalancer provider list", "+---------+-------------------------------------------------+ | name | description | +---------+-------------------------------------------------+ | amphora | The Octavia Amphora driver. | | octavia | Deprecated alias of the Octavia Amphora driver. | | ovn | Octavia OVN driver. | +---------+-------------------------------------------------+", "sudo subscription-manager register # If not done already", "sudo subscription-manager attach --pool=USDYOUR_POOLID # If not done already", "sudo subscription-manager repos --disable=* # If not done already", "sudo subscription-manager repos --enable=rhel-8-for-x86_64-baseos-rpms --enable=openstack-16-tools-for-rhel-8-x86_64-rpms --enable=ansible-2.9-for-rhel-8-x86_64-rpms --enable=rhel-8-for-x86_64-appstream-rpms", "sudo yum install python3-openstackclient ansible python3-openstacksdk python3-netaddr ansible-collections-openstack", "sudo alternatives --set python /usr/bin/python3", "xargs -n 1 curl -O <<< ' https://raw.githubusercontent.com/openshift/installer/release-4.13/upi/openstack/bootstrap.yaml https://raw.githubusercontent.com/openshift/installer/release-4.13/upi/openstack/common.yaml https://raw.githubusercontent.com/openshift/installer/release-4.13/upi/openstack/compute-nodes.yaml https://raw.githubusercontent.com/openshift/installer/release-4.13/upi/openstack/control-plane.yaml https://raw.githubusercontent.com/openshift/installer/release-4.13/upi/openstack/inventory.yaml https://raw.githubusercontent.com/openshift/installer/release-4.13/upi/openstack/network.yaml https://raw.githubusercontent.com/openshift/installer/release-4.13/upi/openstack/security-groups.yaml https://raw.githubusercontent.com/openshift/installer/release-4.13/upi/openstack/down-bootstrap.yaml https://raw.githubusercontent.com/openshift/installer/release-4.13/upi/openstack/down-compute-nodes.yaml https://raw.githubusercontent.com/openshift/installer/release-4.13/upi/openstack/down-control-plane.yaml https://raw.githubusercontent.com/openshift/installer/release-4.13/upi/openstack/down-load-balancers.yaml 
https://raw.githubusercontent.com/openshift/installer/release-4.13/upi/openstack/down-network.yaml https://raw.githubusercontent.com/openshift/installer/release-4.13/upi/openstack/down-security-groups.yaml https://raw.githubusercontent.com/openshift/installer/release-4.13/upi/openstack/down-containers.yaml'", "tar -xvf openshift-install-linux.tar.gz", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "file <name_of_downloaded_file>", "openstack image create --container-format=bare --disk-format=qcow2 --file rhcos-USD{RHCOS_VERSION}-openstack.qcow2 rhcos", "openstack network list --long -c ID -c Name -c \"Router Type\"", "+--------------------------------------+----------------+-------------+ | ID | Name | Router Type | +--------------------------------------+----------------+-------------+ | 148a8023-62a7-4672-b018-003462f8d7dc | public_network | External | +--------------------------------------+----------------+-------------+", "openstack floating ip create --description \"API <cluster_name>.<base_domain>\" <external_network>", "openstack floating ip create --description \"Ingress <cluster_name>.<base_domain>\" <external_network>", "openstack floating ip create --description \"bootstrap machine\" <external_network>", "api.<cluster_name>.<base_domain>. IN A <API_FIP> *.apps.<cluster_name>.<base_domain>. IN A <apps_FIP>", "api.<cluster_name>.<base_domain>. IN A <api_port_IP> *.apps.<cluster_name>.<base_domain>. IN A <ingress_port_IP>", "clouds: shiftstack: auth: auth_url: http://10.10.14.42:5000/v3 project_name: shiftstack username: <username> password: <password> user_domain_name: Default project_domain_name: Default dev-env: region_name: RegionOne auth: username: <username> password: <password> project_name: 'devonly' auth_url: 'https://10.10.14.22:5001/v2.0'", "clouds: shiftstack: cacert: \"/etc/pki/ca-trust/source/anchors/ca.crt.pem\"", "oc edit configmap -n openshift-config cloud-provider-config", "./openshift-install create install-config --dir <installation_directory> 1", "rm -rf ~/.powervs", "{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }", "networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23", "networking: serviceNetwork: - 172.30.0.0/16", "networking: machineNetwork: - cidr: 10.0.0.0/16", "{ \"type\": \"ml.large\", \"rootVolume\": { \"size\": 30, \"type\": \"performance\" } }", "network: id: 8db6a48e-375b-4caa-b20b-5b9a7218bfe6", "network: name: my-network-1", "apiVersion: v1 baseDomain: example.com controlPlane: name: master platform: {} replicas: 3 compute: - name: worker platform: openstack: type: ml.large replicas: 3 metadata: name: example networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 serviceNetwork: - 172.30.0.0/16 1 networkType: Kuryr 2 platform: openstack: cloud: mycloud externalNetwork: external computeFlavor: m1.xlarge apiFloatingIP: 128.0.0.1 trunkSupport: true 3 octaviaSupport: true 4 pullSecret: '{\"auths\": ...}' sshKey: ssh-ed25519 AAAA", "controlPlane: name: master platform: openstack: type: m1.large failureDomains: - computeAvailabilityZone: 'nova-1' storageAvailabilityZone: 'cinder-1' portTargets: - id: control-plane network: id: 8db6a48e-375b-4caa-b20b-5b9a7218bfe6 
- computeAvailabilityZone: 'nova-2' storageAvailabilityZone: 'cinder-2' portTargets: - id: control-plane network: id: 39a7b82a-a8a4-45a4-ba5a-288569a6edd1 - computeAvailabilityZone: 'nova-3' storageAvailabilityZone: 'cinder-3' portTargets: - id: control-plane network: id: 8e4b4e0d-3865-4a9b-a769-559270271242 featureSet: TechPreviewNoUpgrade", "openstack network create --project openshift", "openstack subnet create --project openshift", "openstack subnet create --dhcp --host-route destination=169.254.169.254/32,gateway=192.0.2.2", "platform: openstack: apiVIPs: 1 - 192.0.2.13 ingressVIPs: 2 - 192.0.2.23 machinesSubnet: fa806b2f-ac49-4bce-b9db-124bc64209bf # networking: machineNetwork: - cidr: 192.0.2.0/24", "./openshift-install create manifests --dir <installation_directory> 1", "touch <installation_directory>/manifests/cluster-network-03-config.yml 1", "ls <installation_directory>/manifests/cluster-network-*", "cluster-network-01-crd.yml cluster-network-02-config.yml cluster-network-03-config.yml", "oc edit networks.operator.openshift.io cluster", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 serviceNetwork: - 172.30.0.0/16 defaultNetwork: type: Kuryr kuryrConfig: enablePortPoolsPrepopulation: false 1 poolMinPorts: 1 2 poolBatchPorts: 3 3 poolMaxPorts: 5 4 openstackServiceNetwork: 172.30.0.0/15 5", "python -c ' import yaml; path = \"install-config.yaml\"; data = yaml.safe_load(open(path)); data[\"networking\"][\"machineNetwork\"] = [{\"cidr\": \"192.168.0.0/18\"}]; 1 open(path, \"w\").write(yaml.dump(data, default_flow_style=False))'", "python -c ' import yaml; path = \"install-config.yaml\"; data = yaml.safe_load(open(path)); data[\"compute\"][0][\"replicas\"] = 0; open(path, \"w\").write(yaml.dump(data, default_flow_style=False))'", "python -c ' import yaml; path = \"install-config.yaml\"; data = yaml.safe_load(open(path)); data[\"networking\"][\"networkType\"] = \"Kuryr\"; open(path, \"w\").write(yaml.dump(data, default_flow_style=False))'", "./openshift-install create manifests --dir <installation_directory> 1", "rm -f openshift/99_openshift-cluster-api_master-machines-*.yaml openshift/99_openshift-cluster-api_worker-machineset-*.yaml", "./openshift-install create ignition-configs --dir <installation_directory> 1", ". 
├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign", "export INFRA_ID=USD(jq -r .infraID metadata.json)", "import base64 import json import os with open('bootstrap.ign', 'r') as f: ignition = json.load(f) files = ignition['storage'].get('files', []) infra_id = os.environ.get('INFRA_ID', 'openshift').encode() hostname_b64 = base64.standard_b64encode(infra_id + b'-bootstrap\\n').decode().strip() files.append( { 'path': '/etc/hostname', 'mode': 420, 'contents': { 'source': 'data:text/plain;charset=utf-8;base64,' + hostname_b64 } }) ca_cert_path = os.environ.get('OS_CACERT', '') if ca_cert_path: with open(ca_cert_path, 'r') as f: ca_cert = f.read().encode() ca_cert_b64 = base64.standard_b64encode(ca_cert).decode().strip() files.append( { 'path': '/opt/openshift/tls/cloud-ca-cert.pem', 'mode': 420, 'contents': { 'source': 'data:text/plain;charset=utf-8;base64,' + ca_cert_b64 } }) ignition['storage']['files'] = files; with open('bootstrap.ign', 'w') as f: json.dump(ignition, f)", "openstack image create --disk-format=raw --container-format=bare --file bootstrap.ign <image_name>", "openstack image show <image_name>", "openstack catalog show image", "openstack token issue -c id -f value", "{ \"ignition\": { \"config\": { \"merge\": [{ \"source\": \"<storage_url>\", 1 \"httpHeaders\": [{ \"name\": \"X-Auth-Token\", 2 \"value\": \"<token_ID>\" 3 }] }] }, \"security\": { \"tls\": { \"certificateAuthorities\": [{ \"source\": \"data:text/plain;charset=utf-8;base64,<base64_encoded_certificate>\" 4 }] } }, \"version\": \"3.2.0\" } }", "for index in USD(seq 0 2); do MASTER_HOSTNAME=\"USDINFRA_ID-master-USDindex\\n\" python -c \"import base64, json, sys; ignition = json.load(sys.stdin); storage = ignition.get('storage', {}); files = storage.get('files', []); files.append({'path': '/etc/hostname', 'mode': 420, 'contents': {'source': 'data:text/plain;charset=utf-8;base64,' + base64.standard_b64encode(b'USDMASTER_HOSTNAME').decode().strip(), 'verification': {}}, 'filesystem': 'root'}); storage['files'] = files; ignition['storage'] = storage json.dump(ignition, sys.stdout)\" <master.ign >\"USDINFRA_ID-master-USDindex-ignition.json\" done", "# The public network providing connectivity to the cluster. If not # provided, the cluster external connectivity must be provided in another # way. # Required for os_api_fip, os_ingress_fip, os_bootstrap_fip. os_external_network: 'external'", "# OpenShift API floating IP address. If this value is non-empty, the # corresponding floating IP will be attached to the Control Plane to # serve the OpenShift API. os_api_fip: '203.0.113.23' # OpenShift Ingress floating IP address. If this value is non-empty, the # corresponding floating IP will be attached to the worker nodes to serve # the applications. os_ingress_fip: '203.0.113.19' # If this value is non-empty, the corresponding floating IP will be # attached to the bootstrap machine. This is needed for collecting logs # in case of install failure. 
os_bootstrap_fip: '203.0.113.20'", "ansible-playbook -i inventory.yaml security-groups.yaml", "ansible-playbook -i inventory.yaml network.yaml", "openstack subnet set --dns-nameserver <server_1> --dns-nameserver <server_2> \"USDINFRA_ID-nodes\"", "ansible-playbook -i inventory.yaml bootstrap.yaml", "openstack console log show \"USDINFRA_ID-bootstrap\"", "ansible-playbook -i inventory.yaml control-plane.yaml", "openshift-install wait-for bootstrap-complete", "INFO API v1.26.0 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "ansible-playbook -i inventory.yaml down-bootstrap.yaml", "ansible-playbook -i inventory.yaml compute-nodes.yaml", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.26.0 master-1 Ready master 63m v1.26.0 master-2 Ready master 64m v1.26.0", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.26.0 master-1 Ready master 73m v1.26.0 master-2 Ready master 74m v1.26.0 worker-0 Ready worker 11m v1.26.0 worker-1 Ready worker 11m v1.26.0", "openshift-install --log-level debug wait-for install-complete" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/installing_on_openstack/installing-openstack-user-kuryr
8.81. kde-settings
8.81. kde-settings 8.81.1. RHBA-2013:1053 - kde-settings bug fix update Updated kde-settings packages that fix one bug are now available. The kde-settings packages provide a rich set of administration panels to configure system and desktop settings in the K Desktop Environment (KDE). Bug Fix BZ# 886237 The Konqueror browser enabled Java support by default. Because Java is one of the common targets for browser-based malware attacks, Java is now disabled by default in Konqueror. To enable Java in Konqueror, navigate to Settings -> Configure Konqueror -> Java & JavaScript (which sets the path to Java), and select the "Enable Java globally" check box. Users of kde-settings are advised to upgrade to these updated packages, which fix this bug.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.5_technical_notes/kde-settings
16.11. Sending Synchronization Updates
16.11. Sending Synchronization Updates Synchronization occurs as frequently as is set in the winSyncInterval setting (for retrieving changes from the Active Directory domain) or nsds5replicaupdateschedule setting (for pushing changes from the Directory Server). By default, changes are retrieved from Active Directory every five minutes, and changes from the Directory Server are sent immediately. A sync update can be triggered manually. It is also possible to do a full resynchronization, which sends and pulls every entry in the Directory Server and Active Directory as if it were new. A full resynchronization includes existing Directory Server entries which may not have previously been synchronized. 16.11.1. Performing a Manual Incremental Synchronization During normal operations, all the updates made to entries in the Directory Server that need to be sent to Active Directory are collected to the changelog and then replayed during an incremental update. To manually synchronize the changes: 16.11.2. Performing a Full Synchronization If there have been major changes to data, or synchronization attributes are added to pre-existing Directory Server entries, it is necessary to initiate a resynchronization . Resynchronization is a total update; the entire contents of synchronized subtrees are examined and, if necessary, updated. Resynchronization is done without using the changelog. This is similar to initializing or reinitializing a consumer in replication. 16.11.2.1. Performing a Full Synchronization Using the Command Line To start a full synchronization using the command line: To display the synchronization status: 16.11.2.2. Performing a Full Synchronization Using the Web Console To start a full synchronization: Open the Directory Server user interface in the web console. See Section 1.4, "Logging Into Directory Server Using the Web Console" . Select the instance. Open the Replication menu and select the Winsync Agreements entry. Open the Choose Action menu to the synchronization agreement you want to synchronize and select Full Re-Synchronization . Resynchronizing does not delete data on the sync peer. The process sends and receives all updates and adds any new or modified Directory Server entries. For example, the process adds a pre-existing Directory Server user that had the ntUser object class added. To display the synchronization status in the web console: Open the Replication menu. Select the Winsync Agreements entry. If the synchronization completed successfully, the web console displays the Error (0) Replica acquired successfully: Incremental update succeeded message in the Last Update Status column. 16.11.3. Setting Synchronization Schedules Synchronization works two ways. The Directory Server sends its updates to Active Directory on a configurable schedule, similar to replication, using the nsds5replicaupdateschedule attribute. The Directory Server polls the Active Directory to check for changes; the frequency that it checks the Active Directory server is set in the winSyncInterval attribute. By default, the Directory Server update schedule is to always be in sync. The Active Directory interval is to poll the Active Directory every five minutes. To change the schedule the Directory Server uses to send its updates to the Active Directory, edit the nsds5replicaupdateschedule attribute. The schedule is set with start ( SSSS ) and end ( EEEE ) times in the form HHMM , using a 24-hour clock. The days to schedule sync updates are use ranging from 0 (Sunday) to 6 (Saturday). 
For example, this schedules synchronization to run from noon to 2:00pm on Sunday, Tuesday, Thursday, and Saturday: Note The synchronization times cannot wrap around midnight, so the setting 2300 0100 is not valid. To change how frequently the Directory Server checks the Active Directory for changes to Active Directory entries, reset the winSyncInterval attribute. This attribute is set in seconds, so the default of 300 means that the Directory Server polls the Active Directory server every 300 seconds, or five minutes. Setting this to a higher value can be useful if the directory searches are taking too long and affecting performance. 16.11.4. Changing Synchronization Connections Two aspects of the connection for the sync agreement can be altered: The bind user name and password ( nsDS5ReplicaBindDN and nsDS5ReplicaCredentials ). The connection method ( nsDS5ReplicaTransportInfo ). It is only possible to change the nsDS5ReplicaTransportInfo from LDAP to StartTLS and vice versa. It is not possible to change to or from LDAPS because it is not possible to change the port number, and switching between LDAP and LDAPS requires changing the port number. For example: Warning It is not possible to change the port number of the Active Directory sync peer. Therefore, it is also not possible to switch between standard/STARTTLS connections and TLS connections, since that requires changing between standard and insecure ports. To change to or from TLS, delete the sync agreement and add it again with the updated port number and new transport information. 16.11.5. Handling Entries That Move Out of the Synchronized Subtree The sync agreement defines what subtrees in both Active Directory and Directory Server are synchronized between each other. Entries within the scope (the subtree) are synchronized; other entries are ignored. However, the synchronization process actually starts at the root DN to begin evaluating entries for synchronization. Entries are correlated based on the samAccount in the Active Directory and the uid attribute in Directory Server. The synchronization plug-in notes if an entry (based on the samAccount/uid relationship) is removed from the synchronized subtree either because it is deleted or moved. That is the signal to the synchronization plug-in that the entry is no longer to be synchronized. The issue is that the sync process needs some configuration to determine how to handle that moved entry. There are three options: delete the corresponding entry, ignore the entry (the default), or unsync the entry. Note These sync actions only relate to how to handle on the Directory Server side when an entry is moved out of scope on the Active Directory side. This does not affect any Active Directory entry if an entry is moved out of the synchronized subtree on the Directory Server side. The default behavior in Directory Server 9.0 was to delete the corresponding Directory Server entry. This was true even if the entry on the Active Directory side was never synchronized over to the Directory Server side. Starting in Directory Server 9.1, the default behavior is to ignore the entry and take no action. For example, a user with the samAccount ID of jsmith was created in the ou=Employees subtree on Active Directory. The synchronized subtree is ou=Users , so the jsmith user was never synchronized over to Directory Server. Figure 16.4. Active Directory Tree For 7.x and 8.x versions of Directory Server, synchronization simply ignored that user, since it was outside the synchronized subtree. 
Starting in Directory Server 9.0, Directory Server began supporting subtree renames - which means that existing entries could be moved between branches of the directory tree. The synchronization plug-in, then, assumes that entries in the Active Directory tree which correspond to a Directory Server user ( samAccount/uid relationship) but are outside the synchronized subtree are intentionally moved outside the synchronized subtree - essentially, a rename operation. The assumption then was that the "corresponding" Directory Server entry should be deleted. Figure 16.5. Active Directory and Directory Server Trees Compared This assumption is not necessarily an accurate one, particularly for user entries which always existed outside the synchronized subtree. The winSyncMoveAction attribute for the synchronization agreement sets instructions on how to handle these moved entries: none takes no action, so if a synchronized Directory Server entry exists, it may be synchronized over to or create an Active Directory entry within scope. If no synchronized Directory Server entry exists, nothing happens at all (this is the default behavior in the Directory Server version 9.1 and later). unsync removes any sync-related attributes ( ntUser or ntGroup ) from the Directory Server entry but otherwise leaves the Directory Server entry intact. Important There is a risk when unsyncing entries that the Active Directory entry may be deleted at a later time, and the Directory Server entry will be left intact. This can create data inconsistency issues, especially if the Directory Server entry is ever used to recreate the entry on the Active Directory side later. delete deletes the corresponding entry on the Directory Server side, regardless of whether it was ever synchronized with Active Directory (this was the default behavior in 9.0). Important You almost never want to delete a Directory Server entry without deleting the corresponding Active Directory entry. This option is available only for compatibility with Directory Server 9.0 systems. If it is necessary to change the default:
[ "dsconf -D \"cn=Directory Manager\" ldap://server.example.com repl-winsync-agmt poke --suffix=\" dc=example,dc=com \" example-agreement", "dsconf -D \"cn=Directory Manager\" ldap://server.example.com repl-winsync-agmt init --suffix=\" suffix \" agreement_name", "dsconf -D \"cn=Directory Manager\" ldap://server.example.com repl-winsync-agmt init-status --suffix=\" suffix \" agreement_name", "nsds5replicaupdateschedule: SSSS EEEE DDDDDDD", "nsds5replicaupdateschedule: 1200 1400 0246", "winSyncInterval: 1000", "nsDS5ReplicaBindDN: cn=sync user,cn=Users,dc=ad1 nsDS5ReplicaCredentials: {DES}ffGad646dT0nnsT8nJOaMA== nsDS5ReplicaTransportInfo: StartTLS", "dsconf -D \"cn=Directory Manager\" ldap://server.example.com repl-winsync-agmt --move-action=\" action \" --suffix=\" suffix \" agreement_name" ]
https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/administration_guide/using_windows_sync-manually_updating_and_resynchronizing
Chapter 3. Quay.io user interface overview
Chapter 3. Quay.io user interface overview The user interface (UI) of Quay.io is a fundamental component that serves as the user's gateway to managing and interacting with container images within the platform's ecosystem. Quay.io's UI is designed to provide an intuitive and user-friendly interface, making it easy for users of all skill levels to navigate and harness Quay.io's features and functionalities. This documentation section aims to introduce users to the key elements and functionalities of Quay.io's UI. It will cover essential aspects such as the UI's layout, navigation, and key features, providing a solid foundation for users to explore and make the most of Quay.io's container registry service. Throughout this documentation, step-by-step instructions, visual aids, and practical examples are provided on the following topics: Exploring applications and repositories Using the Quay.io tutorial Pricing and Quay.io plans Signing in and using Quay.io features Collectively, this document ensures that users can quickly grasp the UI's nuances and successfully navigate their containerization journey with Quay.io. 3.1. Quay.io landing page The Quay.io landing page serves as the central hub for users to access the container registry services offered. This page provides essential information and links to guide users in securely storing, building, and deploying container images effortlessly. The landing page of Quay.io includes links to the following resources: Explore . On this page, you can search the Quay.io database for various applications and repositories. Tutorial . On this page, you can take a step-by-step walkthrough that shows you how to use Quay.io. Pricing . On this page, you can learn about the various pricing tiers offered for Quay.io. There are also various FAQs addressed on this page. Sign in . By clicking this link, you are re-directed to sign into your Quay.io repository. The landing page also includes information about scheduled maintenance. During scheduled maintenance, Quay.io is operational in read-only mode, and pulls function as normal. Pushes and builds are non-operational during scheduled maintenance. You can subscribe to updates regarding Quay.io maintenance by navigating to the Quay.io Status page and clicking Subscribe To Updates . The landing page also includes links to the following resources: Documentation . This page provides documentation for using Quay.io. Terms . This page provides legal information about Red Hat Online Services. Privacy . This page provides information about Red Hat's Privacy Statement. Security . This page provides information about Quay.io security, including SSL/TLS, encryption, passwords, access controls, firewalls, and data resilience. About . This page includes information about packages and projects used and a brief history of the product. Contact . This page includes information about support and contacting the Red Hat Support Team. All Systems Operational . This page includes information about the status of Quay.io and a brief history of maintenance. Cookies. By clicking this link, a popup box appears that allows you to set your cookie preferences. You can also find information about Trying Red Hat Quay on premise or Trying Red Hat Quay on the cloud , which redirects you to the Pricing page. Each option offers a free trial. 3.1.1. Creating a Quay.io account New users of Quay.io are required to both Register for a Red Hat account and create a Quay.io username.
These accounts are correlated, with two distinct differences: The Quay.io account can be used to push and pull container images or Open Container Initiative images to Quay.io to store images. The Red Hat account provides users access to the Quay.io user interface. For paying customers, this account can also be used to access images from the Red Hat Ecosystem Catalog , which can be pushed to their Quay.io repository. Users must first register for a Red Hat account, and then create a Quay.io account. Users need both accounts to properly use all features of Quay.io. 3.1.1.1. Registering for a Red Hat Account Use the following procedure to register for a Red Hat account for Quay.io. Procedure Navigate to the Red Hat Customer Portal . In the navigation pane, click Log In . When navigated to the log in page, click Register for a Red Hat Account . Enter a Red Hat login ID. Enter a password. Enter the following personal information: First name Last name Email address Phone number Enter the following contact information that is relative to your country or region. For example: Country/region Address Postal code City County Select and agree to Red Hat's terms and conditions. Click Create my account . Navigate to Quay.io and log in. 3.1.1.2. Creating a Quay.io user account Use the following procedure to create a Quay.io user account. Prerequisites You have created a Red Hat account. Procedure If required, resolve the captcha by clicking I am not a robot and confirming. You are redirected to a Confirm Username page. On the Confirm Username page, enter a username. By default, a username is generated. If the same username already exists, a number is added at the end to make it unique. This username is used as a namespace in the Quay Container Registry. After deciding on a username, click Confirm Username . You are redirected to the Quay.io Repositories page, which serves as a dedicated hub where users can access and manage their repositories with ease. From this page, users can efficiently organize, navigate, and interact with their container images and related resources. 3.1.1.3. Quay.io Single Sign On support Red Hat Single Sign On (SSO) can be used with Quay.io. Use the following procedure to set up Red Hat SSO with Quay.io. For most users, these accounts are already linked. However, for some legacy Quay.io users, this procedure might be required. Prerequisites You have created a Quay.io account. Procedure Navigate to the Quay.io Recovery page . Enter your username and password, then click Sign in to Quay Container Registry . In the navigation pane, click your username Account Settings . In the navigation pane, click External Logins and Applications . Click Attach to Red Hat . If you are already signed into Red Hat SSO, your account is automatically linked. Otherwise, you are prompted to sign into Red Hat SSO by entering your Red Hat login or email, and the password. Alternatively, you might need to create a new account first. After signing into Red Hat SSO, you can choose to authenticate against Quay.io using your Red Hat account from the login page. Additional resources For more information, see Quay.io Now Supports Red Hat Single Sign On . 3.1.2. Exploring Quay.io The Quay.io Explore page is a valuable hub that allows users to delve into a vast collection of container images, applications, and repositories shared by the Quay.io community.
With its intuitive and user-friendly design, the Explore page offers a powerful search function, enabling users to effortlessly discover containerized applications and resources. 3.1.3. Trying Quay.io (deprecated) Note The Red Hat Quay tutorial is currently deprecated and will be removed when the v2 UI goes generally available (GA). The Quay.io Tutorial page offers users an introduction to the Quay.io container registry service. By clicking Continue Tutorial users learn how to perform the following tasks on Quay.io: Logging into Quay Container Registry from the Docker CLI Starting a container Creating images from a container Pushing a repository to Quay Container Registry Viewing a repository Setting up build triggers Changing a repository's permissions 3.1.4. Information about Quay.io pricing In addition to a free tier, Quay.io also offers several paid plans that have enhanced benefits. The Quay.io Pricing page offers information about Quay.io plans and the associated prices of each plan. The cost of each tier can be found on the Pricing page. All Quay.io plans include the following benefits: Continuous integration Public repositories Robot accounts Teams SSL/TLS encryption Logging and auditing Invoice history Quay.io subscriptions are handled by the Stripe payment processing platform. A valid credit card is required to sign up for Quay.io. To sign up for Quay.io, use the following procedure. Procedure Navigate to the Quay.io Pricing page . Decide on a plan, for example, Small , and click Buy Now . You are redirected to the Create New Organization page. Enter the following information: Organization Name Organization Email Optional. You can select a different plan if you want a plan larger than, for example, Small . Resolve the captcha, and select Create Organization . You are redirected to Stripe. Enter the following information: Card information , including MM/YY and the CVC Name on card Country or region ZIP (if applicable) Check the box if you want your information to be saved. Phone Number Click Subscribe after all boxes have been filled.
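With an account and username in place, the push and pull workflow referenced throughout this chapter can be exercised from the command line. The following sketch uses podman; <local_image> , <quay_username> , <repository> , and <tag> are placeholders rather than values taken from this guide, and must be replaced with your own image name and namespace.

podman login quay.io
podman tag <local_image> quay.io/<quay_username>/<repository>:<tag>
podman push quay.io/<quay_username>/<repository>:<tag>
podman pull quay.io/<quay_username>/<repository>:<tag>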
null
https://docs.redhat.com/en/documentation/red_hat_quay/3/html/about_quay_io/quayio-ui-overview
Chapter 49. TlsSidecar schema reference
Chapter 49. TlsSidecar schema reference Used in: CruiseControlSpec , EntityOperatorSpec Full list of TlsSidecar schema properties Configures a TLS sidecar, which is a container that runs in a pod, but serves a supporting purpose. In AMQ Streams, the TLS sidecar uses TLS to encrypt and decrypt communication between components and ZooKeeper. The TLS sidecar is used in the Entity Operator. The TLS sidecar is configured using the tlsSidecar property in Kafka.spec.entityOperator . The TLS sidecar supports the following additional options: image resources logLevel readinessProbe livenessProbe The resources property specifies the memory and CPU resources allocated for the TLS sidecar. The image property configures the container image which will be used. The readinessProbe and livenessProbe properties configure healthcheck probes for the TLS sidecar. The logLevel property specifies the logging level. The following logging levels are supported: emerg alert crit err warning notice info debug The default value is notice . Example TLS sidecar configuration apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: # ... entityOperator: # ... tlsSidecar: resources: requests: cpu: 200m memory: 64Mi limits: cpu: 500m memory: 128Mi # ... 49.1. TlsSidecar schema properties Property Description image The docker image for the container. string livenessProbe Pod liveness checking. Probe logLevel The log level for the TLS sidecar. Default value is notice . string (one of [emerg, debug, crit, err, alert, warning, notice, info]) readinessProbe Pod readiness checking. Probe resources CPU and memory resources to reserve. For more information, see the external documentation for core/v1 resourcerequirements . ResourceRequirements
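The example configuration shown earlier in this section sets only resources. The following sketch extends it with the other supported options; it is a minimal illustration in which the probe timings and the debug log level are assumed values chosen for demonstration, not recommendations.

apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  # ...
  entityOperator:
    # ...
    tlsSidecar:
      logLevel: debug
      readinessProbe:
        initialDelaySeconds: 15
        timeoutSeconds: 5
      livenessProbe:
        initialDelaySeconds: 15
        timeoutSeconds: 5
  # ...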
[ "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: # entityOperator: # tlsSidecar: resources: requests: cpu: 200m memory: 64Mi limits: cpu: 500m memory: 128Mi #" ]
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/amq_streams_api_reference/type-tlssidecar-reference
1.4. Changing the Data Warehouse Sampling Scale
1.4. Changing the Data Warehouse Sampling Scale Data Warehouse is required in Red Hat Virtualization. It can be installed and configured on the same machine as the Manager, or on a separate machine with access to the Manager. The default data retention settings may not be required for all setups, so engine-setup offers two data sampling scales: Basic and Full . Full uses the default values for the data retention settings listed in Section 2.4, "Application Settings for the Data Warehouse service in ovirt-engine-dwhd.conf" (recommended when Data Warehouse is installed on a remote host). Basic reduces the values of DWH_TABLES_KEEP_HOURLY to 720 and DWH_TABLES_KEEP_DAILY to 0 , easing the load on the Manager machine. Use Basic when the Manager and Data Warehouse are installed on the same machine. The sampling scale is configured by engine-setup during installation: You can change the sampling scale later by running engine-setup again with the --reconfigure-dwh-scale option. Changing the Data Warehouse Sampling Scale You can also adjust individual data retention settings if necessary, as documented in Section 2.4, "Application Settings for the Data Warehouse service in ovirt-engine-dwhd.conf" .
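To adjust individual retention settings instead of switching the whole scale, the values that the Basic scale applies can be set explicitly. This is a hedged sketch: the override file name is an example, the conf.d drop-in location should be checked against Section 2.4 for your version, and the ovirt-engine-dwhd service must be restarted after changing the values.

# /etc/ovirt-engine-dwh/ovirt-engine-dwhd.conf.d/99-retention.conf (example file name)
DWH_TABLES_KEEP_HOURLY=720
DWH_TABLES_KEEP_DAILY=0

# systemctl restart ovirt-engine-dwhd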
[ "--== MISC CONFIGURATION ==-- Please choose Data Warehouse sampling scale: (1) Basic (2) Full (1, 2)[1]:", "engine-setup --reconfigure-dwh-scale [...] Setup can automatically configure the firewall on this system. Note: automatic configuration of the firewall may overwrite current settings. Do you want Setup to configure the firewall? (Yes, No) [Yes]: [...] Perform full vacuum on the oVirt engine history database ovirt_engine_history@localhost? This operation may take a while depending on this setup health and the configuration of the db vacuum process. See https://www.postgresql.org/docs/9.0/static/sql-vacuum.html (Yes, No) [No]: [...] Setup can backup the existing database. The time and space required for the database backup depend on its size. This process takes time, and in some cases (for instance, when the size is few GBs) may take several hours to complete. If you choose to not back up the database, and Setup later fails for some reason, it will not be able to restore the database and all DWH data will be lost. Would you like to backup the existing database before upgrading it? (Yes, No) [Yes]: [...] Please choose Data Warehouse sampling scale: (1) Basic (2) Full (1, 2)[1]: 2 [...] During execution engine service will be stopped (OK, Cancel) [OK]: [...] Please confirm installation settings (OK, Cancel) [OK]:" ]
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/data_warehouse_guide/changing_the_data_warehouse_sampling_scale
End-user Guide
End-user Guide Red Hat CodeReady Workspaces 2.1 Using Red Hat CodeReady Workspaces 2.1 Supriya Takkhi Robert Kratky [email protected] Michal Maler [email protected] Fabrice Flore-Thebault [email protected] Yana Hontyk [email protected] Red Hat Developer Group Documentation Team [email protected]
null
https://docs.redhat.com/en/documentation/red_hat_codeready_workspaces/2.1/html/end-user_guide/index
Chapter 4. Additional resources
Chapter 4. Additional resources "Executing rules" in Designing a decision service using DRL rules Interacting with Red Hat Process Automation Manager using KIE APIs Deploying an Red Hat Process Automation Manager environment on Red Hat OpenShift Container Platform 4 using Operators Deploying an Red Hat Process Automation Manager environment on Red Hat OpenShift Container Platform 3 using templates
null
https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/deploying_and_managing_red_hat_process_automation_manager_services/additional_resources
Chapter 5. Using AMQ Streams Operators
Chapter 5. Using AMQ Streams Operators Use the AMQ Streams operators to manage your Kafka cluster, and Kafka topics and users. 5.1. Using the Cluster Operator The Cluster Operator is used to deploy a Kafka cluster and other Kafka components. The Cluster Operator is deployed using YAML installation files. Note On OpenShift, a Kafka Connect deployment can incorporate a Source2Image feature to provide a convenient way to add additional connectors. Additional resources Deploying the Cluster Operator in the Deploying and Upgrading AMQ Streams on OpenShift guide. Kafka Cluster configuration . 5.1.1. Cluster Operator configuration You can configure the Cluster Operator using supported environment variables, and through its logging configuration. The environment variables relate to container configuration for the deployment of the Cluster Operator image. For more information on image configuration, see, Section 13.1.6, " image " . STRIMZI_NAMESPACE A comma-separated list of namespaces that the operator should operate in. When not set, set to empty string, or set to * , the Cluster Operator will operate in all namespaces. The Cluster Operator deployment might use the OpenShift Downward API to set this automatically to the namespace the Cluster Operator is deployed in. Example configuration for Cluster Operator namespaces env: - name: STRIMZI_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace STRIMZI_FULL_RECONCILIATION_INTERVAL_MS Optional, default is 120000 ms. The interval between periodic reconciliations, in milliseconds. STRIMZI_OPERATION_TIMEOUT_MS Optional, default 300000 ms. The timeout for internal operations, in milliseconds. This value should be increased when using AMQ Streams on clusters where regular OpenShift operations take longer than usual (because of slow downloading of Docker images, for example). STRIMZI_OPERATOR_NAMESPACE The name of the namespace where the AMQ Streams Cluster Operator is running. Do not configure this variable manually. Use the OpenShift Downward API. env: - name: STRIMZI_OPERATOR_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace STRIMZI_OPERATOR_NAMESPACE_LABELS Optional. The labels of the namespace where the AMQ Streams Cluster Operator is running. Namespace labels are used to configure the namespace selector in network policies to allow the AMQ Streams Cluster Operator to only have access to the operands from the namespace with these labels. When not set, the namespace selector in network policies is configured to allow access to the AMQ Streams Cluster Operator from any namespace in the OpenShift cluster. env: - name: STRIMZI_OPERATOR_NAMESPACE_LABELS value: label1=value1,label2=value2 STRIMZI_CUSTOM_RESOURCE_SELECTOR Optional. Specifies label selector used to filter the custom resources handled by the operator. The operator will operate only on those custom resources which will have the specified labels set. Resources without these labels will not be seen by the operator. The label selector applies to Kafka , KafkaConnect , KafkaConnectS2I , KafkaBridge , KafkaMirrorMaker , and KafkaMirrorMaker2 resources. KafkaRebalance and KafkaConnector resources will be operated only when their corresponding Kafka and Kafka Connect clusters have the matching labels. env: - name: STRIMZI_CUSTOM_RESOURCE_SELECTOR value: label1=value1,label2=value2 STRIMZI_KAFKA_IMAGES Required. This provides a mapping from Kafka version to the corresponding Docker image containing a Kafka broker of that version. 
The required syntax is whitespace or comma separated <version> = <image> pairs. For example 2.6.0=registry.redhat.io/amq7/amq-streams-kafka-26-rhel7:1.7.0, 2.7.0=registry.redhat.io/amq7/amq-streams-kafka-27-rhel7:1.7.0 . This is used when a Kafka.spec.kafka.version property is specified but not the Kafka.spec.kafka.image in the Kafka resource. STRIMZI_DEFAULT_KAFKA_INIT_IMAGE Optional, default registry.redhat.io/amq7/amq-streams-rhel7-operator:1.7.0 . The image name to use as default for the init container started before the broker for initial configuration work (that is, rack support), if no image is specified as the kafka-init-image in the Kafka resource. STRIMZI_KAFKA_CONNECT_IMAGES Required. This provides a mapping from the Kafka version to the corresponding Docker image containing a Kafka connect of that version. The required syntax is whitespace or comma separated <version> = <image> pairs. For example 2.6.0=registry.redhat.io/amq7/amq-streams-kafka-26-rhel7:1.7.0, 2.7.0=registry.redhat.io/amq7/amq-streams-kafka-27-rhel7:1.7.0 . This is used when a KafkaConnect.spec.version property is specified but not the KafkaConnect.spec.image . STRIMZI_KAFKA_CONNECT_S2I_IMAGES Required. This provides a mapping from the Kafka version to the corresponding Docker image containing a Kafka connect of that version. The required syntax is whitespace or comma separated <version> = <image> pairs. For example 2.6.0=registry.redhat.io/amq7/amq-streams-kafka-26-rhel7:1.7.0, 2.7.0=registry.redhat.io/amq7/amq-streams-kafka-27-rhel7:1.7.0 . This is used when a KafkaConnectS2I.spec.version property is specified but not the KafkaConnectS2I.spec.image . STRIMZI_KAFKA_MIRROR_MAKER_IMAGES Required. This provides a mapping from the Kafka version to the corresponding Docker image containing a Kafka mirror maker of that version. The required syntax is whitespace or comma separated <version> = <image> pairs. For example 2.6.0=registry.redhat.io/amq7/amq-streams-kafka-26-rhel7:1.7.0, 2.7.0=registry.redhat.io/amq7/amq-streams-kafka-27-rhel7:1.7.0 . This is used when a KafkaMirrorMaker.spec.version property is specified but not the KafkaMirrorMaker.spec.image . STRIMZI_DEFAULT_TOPIC_OPERATOR_IMAGE Optional, default registry.redhat.io/amq7/amq-streams-rhel7-operator:1.7.0 . The image name to use as the default when deploying the topic operator, if no image is specified as the Kafka.spec.entityOperator.topicOperator.image in Kafka resource. STRIMZI_DEFAULT_USER_OPERATOR_IMAGE Optional, default registry.redhat.io/amq7/amq-streams-rhel7-operator:1.7.0 . The image name to use as the default when deploying the user operator, if no image is specified as the Kafka.spec.entityOperator.userOperator.image in the Kafka resource. STRIMZI_DEFAULT_TLS_SIDECAR_ENTITY_OPERATOR_IMAGE Optional, default registry.redhat.io/amq7/amq-streams-kafka-27-rhel7:1.7.0 . The image name to use as the default when deploying the sidecar container which provides TLS support for the Entity Operator, if no image is specified as the Kafka.spec.entityOperator.tlsSidecar.image in the Kafka resource. STRIMZI_IMAGE_PULL_POLICY Optional. The ImagePullPolicy which will be applied to containers in all pods managed by AMQ Streams Cluster Operator. The valid values are Always , IfNotPresent , and Never . If not specified, the OpenShift defaults will be used. Changing the policy will result in a rolling update of all your Kafka, Kafka Connect, and Kafka MirrorMaker clusters. STRIMZI_IMAGE_PULL_SECRETS Optional. A comma-separated list of Secret names. 
The secrets referenced here contain the credentials to the container registries where the container images are pulled from. The secrets are used in the imagePullSecrets field for all Pods created by the Cluster Operator. Changing this list results in a rolling update of all your Kafka, Kafka Connect, and Kafka MirrorMaker clusters. STRIMZI_KUBERNETES_VERSION Optional. Overrides the Kubernetes version information detected from the API server. Example configuration for Kubernetes version override env: - name: STRIMZI_KUBERNETES_VERSION value: | major=1 minor=16 gitVersion=v1.16.2 gitCommit=c97fe5036ef3df2967d086711e6c0c405941e14b gitTreeState=clean buildDate=2019-10-15T19:09:08Z goVersion=go1.12.10 compiler=gc platform=linux/amd64 KUBERNETES_SERVICE_DNS_DOMAIN Optional. Overrides the default OpenShift DNS domain name suffix. By default, services assigned in the OpenShift cluster have a DNS domain name that uses the default suffix cluster.local . For example, for broker kafka-0 : <cluster-name> -kafka-0. <cluster-name> -kafka-brokers. <namespace> .svc. cluster.local The DNS domain name is added to the Kafka broker certificates used for hostname verification. If you are using a different DNS domain name suffix in your cluster, change the KUBERNETES_SERVICE_DNS_DOMAIN environment variable from the default to the one you are using in order to establish a connection with the Kafka brokers. STRIMZI_CONNECT_BUILD_TIMEOUT_MS Optional, default 300000 ms. The timeout for building new Kafka Connect images with additional connectots, in milliseconds. This value should be increased when using AMQ Streams to build container images containing many connectors or using a slow container registry. 5.1.1.1. Logging configuration by ConfigMap The Cluster Operator's logging is configured by the strimzi-cluster-operator ConfigMap . A ConfigMap containing logging configuration is created when installing the Cluster Operator. This ConfigMap is described in the file install/cluster-operator/050-ConfigMap-strimzi-cluster-operator.yaml . You configure Cluster Operator logging by changing the data field log4j2.properties in this ConfigMap . To update the logging configuration, you can edit the 050-ConfigMap-strimzi-cluster-operator.yaml file and then run the following command: oc create -f install/cluster-operator/050-ConfigMap-strimzi-cluster-operator.yaml Alternatively, edit the ConfigMap directly: oc edit cm strimzi-cluster-operator To change the frequency of the reload interval, set a time in seconds in the monitorInterval option in the created ConfigMap . If the ConfigMap is missing when the Cluster Operator is deployed, the default logging values are used. If the ConfigMap is accidentally deleted after the Cluster Operator is deployed, the most recently loaded logging configuration is used. Create a new ConfigMap to load a new logging configuration. Note Do not remove the monitorInterval option from the ConfigMap. 5.1.1.2. Restricting Cluster Operator access with network policy The Cluster Operator can run in the same namespace as the resources it manages, or in a separate namespace. By default, the STRIMZI_OPERATOR_NAMESPACE environment variable is configured to use the OpenShift Downward API to find which namespace the Cluster Operator is running in. If the Cluster Operator is running in the same namespace as the resources, only local access is required, and allowed by AMQ Streams. 
If the Cluster Operator is running in a separate namespace to the resources it manages, any namespace in the OpenShift cluster is allowed access to the Cluster Operator unless network policy is configured. Use the optional STRIMZI_OPERATOR_NAMESPACE_LABELS environment variable to establish network policy for the Cluster Operator using namespace labels. By adding namespace labels, access to the Cluster Operator is restricted to the namespaces specified. Network policy configured for the Cluster Operator deployment #... env: # ... - name: STRIMZI_OPERATOR_NAMESPACE_LABELS value: label1=value1,label2=value2 #... 5.1.1.3. Periodic reconciliation Although the Cluster Operator reacts to all notifications about the desired cluster resources received from the OpenShift cluster, if the operator is not running, or if a notification is not received for any reason, the desired resources will get out of sync with the state of the running OpenShift cluster. In order to handle failovers properly, a periodic reconciliation process is executed by the Cluster Operator so that it can compare the state of the desired resources with the current cluster deployments in order to have a consistent state across all of them. You can set the time interval for the periodic reconciliations using the [STRIMZI_FULL_RECONCILIATION_INTERVAL_MS] variable. 5.1.2. Provisioning Role-Based Access Control (RBAC) For the Cluster Operator to function it needs permission within the OpenShift cluster to interact with resources such as Kafka , KafkaConnect , and so on, as well as the managed resources, such as ConfigMaps , Pods , Deployments , StatefulSets and Services . Such permission is described in terms of OpenShift role-based access control (RBAC) resources: ServiceAccount , Role and ClusterRole , RoleBinding and ClusterRoleBinding . In addition to running under its own ServiceAccount with a ClusterRoleBinding , the Cluster Operator manages some RBAC resources for the components that need access to OpenShift resources. OpenShift also includes privilege escalation protections that prevent components operating under one ServiceAccount from granting other ServiceAccounts privileges that the granting ServiceAccount does not have. Because the Cluster Operator must be able to create the ClusterRoleBindings , and RoleBindings needed by resources it manages, the Cluster Operator must also have those same privileges. 5.1.2.1. 
Delegated privileges When the Cluster Operator deploys resources for a desired Kafka resource it also creates ServiceAccounts , RoleBindings , and ClusterRoleBindings , as follows: The Kafka broker pods use a ServiceAccount called cluster-name -kafka When the rack feature is used, the strimzi- cluster-name -kafka-init ClusterRoleBinding is used to grant this ServiceAccount access to the nodes within the cluster via a ClusterRole called strimzi-kafka-broker When the rack feature is not used no binding is created The ZooKeeper pods use a ServiceAccount called cluster-name -zookeeper The Entity Operator pod uses a ServiceAccount called cluster-name -entity-operator The Topic Operator produces OpenShift events with status information, so the ServiceAccount is bound to a ClusterRole called strimzi-entity-operator which grants this access via the strimzi-entity-operator RoleBinding The pods for KafkaConnect and KafkaConnectS2I resources use a ServiceAccount called cluster-name -cluster-connect The pods for KafkaMirrorMaker use a ServiceAccount called cluster-name -mirror-maker The pods for KafkaMirrorMaker2 use a ServiceAccount called cluster-name -mirrormaker2 The pods for KafkaBridge use a ServiceAccount called cluster-name -bridge 5.1.2.2. ServiceAccount The Cluster Operator is best run using a ServiceAccount : Example ServiceAccount for the Cluster Operator apiVersion: v1 kind: ServiceAccount metadata: name: strimzi-cluster-operator labels: app: strimzi The Deployment of the operator then needs to specify this in its spec.template.spec.serviceAccountName : Partial example of Deployment for the Cluster Operator apiVersion: apps/v1 kind: Deployment metadata: name: strimzi-cluster-operator labels: app: strimzi spec: replicas: 1 selector: matchLabels: name: strimzi-cluster-operator strimzi.io/kind: cluster-operator template: # ... Note line 12, where the strimzi-cluster-operator ServiceAccount is specified as the serviceAccountName . 5.1.2.3. ClusterRoles The Cluster Operator needs to operate using ClusterRoles that gives access to the necessary resources. Depending on the OpenShift cluster setup, a cluster administrator might be needed to create the ClusterRoles . Note Cluster administrator rights are only needed for the creation of the ClusterRoles . The Cluster Operator will not run under the cluster admin account. The ClusterRoles follow the principle of least privilege and contain only those privileges needed by the Cluster Operator to operate Kafka, Kafka Connect, and ZooKeeper clusters. The first set of assigned privileges allow the Cluster Operator to manage OpenShift resources such as StatefulSets , Deployments , Pods , and ConfigMaps . 
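A quick way to confirm that these RBAC objects exist after installation is to query them with oc. This is an illustrative sketch: the resource names below are the ones shipped in the AMQ Streams installation files, and myproject stands in for the namespace where the Cluster Operator is deployed.

oc get serviceaccount strimzi-cluster-operator -n myproject
oc get clusterroles | grep strimzi
oc get clusterrolebindings | grep strimzi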
Cluster Operator uses ClusterRoles to grant permission at the namespace-scoped resources level and cluster-scoped resources level: ClusterRole with namespaced resources for the Cluster Operator apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: strimzi-cluster-operator-namespaced labels: app: strimzi rules: - apiGroups: - "rbac.authorization.k8s.io" resources: # The cluster operator needs to access and manage rolebindings to grant Strimzi components cluster permissions - rolebindings verbs: - get - list - watch - create - delete - patch - update - apiGroups: - "rbac.authorization.k8s.io" resources: # The cluster operator needs to access and manage roles to grant the entity operator permissions - roles verbs: - get - list - watch - create - delete - patch - update - apiGroups: - "" resources: # The cluster operator needs to access and delete pods, this is to allow it to monitor pod health and coordinate rolling updates - pods # The cluster operator needs to access and manage service accounts to grant Strimzi components cluster permissions - serviceaccounts # The cluster operator needs to access and manage config maps for Strimzi components configuration - configmaps # The cluster operator needs to access and manage services and endpoints to expose Strimzi components to network traffic - services - endpoints # The cluster operator needs to access and manage secrets to handle credentials - secrets # The cluster operator needs to access and manage persistent volume claims to bind them to Strimzi components for persistent data - persistentvolumeclaims verbs: - get - list - watch - create - delete - patch - update - apiGroups: - "kafka.strimzi.io" resources: # The cluster operator runs the KafkaAssemblyOperator, which needs to access and manage Kafka resources - kafkas - kafkas/status # The cluster operator runs the KafkaConnectAssemblyOperator, which needs to access and manage KafkaConnect resources - kafkaconnects - kafkaconnects/status # The cluster operator runs the KafkaConnectS2IAssemblyOperator, which needs to access and manage KafkaConnectS2I resources - kafkaconnects2is - kafkaconnects2is/status # The cluster operator runs the KafkaConnectorAssemblyOperator, which needs to access and manage KafkaConnector resources - kafkaconnectors - kafkaconnectors/status # The cluster operator runs the KafkaMirrorMakerAssemblyOperator, which needs to access and manage KafkaMirrorMaker resources - kafkamirrormakers - kafkamirrormakers/status # The cluster operator runs the KafkaBridgeAssemblyOperator, which needs to access and manage BridgeMaker resources - kafkabridges - kafkabridges/status # The cluster operator runs the KafkaMirrorMaker2AssemblyOperator, which needs to access and manage KafkaMirrorMaker2 resources - kafkamirrormaker2s - kafkamirrormaker2s/status # The cluster operator runs the KafkaRebalanceAssemblyOperator, which needs to access and manage KafkaRebalance resources - kafkarebalances - kafkarebalances/status verbs: - get - list - watch - create - delete - patch - update - apiGroups: # The cluster operator needs the extensions api as the operator supports Kubernetes version 1.11+ # apps/v1 was introduced in Kubernetes 1.14 - "extensions" resources: # The cluster operator needs to access and manage deployments to run deployment based Strimzi components - deployments - deployments/scale # The cluster operator needs to access replica sets to manage Strimzi components and to determine error states - replicasets # The cluster operator needs to access and manage 
replication controllers to manage replicasets - replicationcontrollers # The cluster operator needs to access and manage network policies to lock down communication between Strimzi components - networkpolicies # The cluster operator needs to access and manage ingresses which allow external access to the services in a cluster - ingresses verbs: - get - list - watch - create - delete - patch - update - apiGroups: - "apps" resources: # The cluster operator needs to access and manage deployments to run deployment based Strimzi components - deployments - deployments/scale - deployments/status # The cluster operator needs to access and manage stateful sets to run stateful sets based Strimzi components - statefulsets # The cluster operator needs to access replica-sets to manage Strimzi components and to determine error states - replicasets verbs: - get - list - watch - create - delete - patch - update - apiGroups: - "" resources: # The cluster operator needs to be able to create events and delegate permissions to do so - events verbs: - create - apiGroups: # OpenShift S2I requirements - apps.openshift.io resources: - deploymentconfigs - deploymentconfigs/scale - deploymentconfigs/status - deploymentconfigs/finalizers verbs: - get - list - watch - create - delete - patch - update - apiGroups: # OpenShift S2I requirements - build.openshift.io resources: - buildconfigs - buildconfigs/instantiate - builds verbs: - get - list - watch - create - delete - patch - update - apiGroups: # OpenShift S2I requirements - image.openshift.io resources: - imagestreams - imagestreams/status verbs: - get - list - watch - create - delete - patch - update - apiGroups: - networking.k8s.io resources: # The cluster operator needs to access and manage network policies to lock down communication between Strimzi components - networkpolicies # The cluster operator needs to access and manage ingresses which allow external access to the services in a cluster - ingresses verbs: - get - list - watch - create - delete - patch - update - apiGroups: - route.openshift.io resources: # The cluster operator needs to access and manage routes to expose Strimzi components for external access - routes - routes/custom-host verbs: - get - list - watch - create - delete - patch - update - apiGroups: - policy resources: # The cluster operator needs to access and manage pod disruption budgets this limits the number of concurrent disruptions # that a Strimzi component experiences, allowing for higher availability - poddisruptionbudgets verbs: - get - list - watch - create - delete - patch - update The second includes the permissions needed for cluster-scoped resources. 
ClusterRole with cluster-scoped resources for the Cluster Operator apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: strimzi-cluster-operator-global labels: app: strimzi rules: - apiGroups: - "rbac.authorization.k8s.io" resources: # The cluster operator needs to create and manage cluster role bindings in the case of an install where a user # has specified they want their cluster role bindings generated - clusterrolebindings verbs: - get - list - watch - create - delete - patch - update - apiGroups: - storage.k8s.io resources: # The cluster operator requires "get" permissions to view storage class details # This is because only a persistent volume of a supported storage class type can be resized - storageclasses verbs: - get - apiGroups: - "" resources: # The cluster operator requires "list" permissions to view all nodes in a cluster # The listing is used to determine the node addresses when NodePort access is configured # These addresses are then exposed in the custom resource states - nodes verbs: - list The strimzi-kafka-broker ClusterRole represents the access needed by the init container in Kafka pods that is used for the rack feature. As described in the Delegated privileges section, this role is also needed by the Cluster Operator in order to be able to delegate this access. ClusterRole for the Cluster Operator allowing it to delegate access to OpenShift nodes to the Kafka broker pods apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: strimzi-kafka-broker labels: app: strimzi rules: - apiGroups: - "" resources: # The Kafka Brokers require "get" permissions to view the node they are on # This information is used to generate a Rack ID that is used for High Availability configurations - nodes verbs: - get The strimzi-topic-operator ClusterRole represents the access needed by the Topic Operator. As described in the Delegated privileges section, this role is also needed by the Cluster Operator in order to be able to delegate this access. ClusterRole for the Cluster Operator allowing it to delegate access to events to the Topic Operator apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: strimzi-entity-operator labels: app: strimzi rules: - apiGroups: - "kafka.strimzi.io" resources: # The entity operator runs the KafkaTopic assembly operator, which needs to access and manage KafkaTopic resources - kafkatopics - kafkatopics/status # The entity operator runs the KafkaUser assembly operator, which needs to access and manage KafkaUser resources - kafkausers - kafkausers/status verbs: - get - list - watch - create - patch - update - delete - apiGroups: - "" resources: - events verbs: # The entity operator needs to be able to create events - create - apiGroups: - "" resources: # The entity operator user-operator needs to access and manage secrets to store generated credentials - secrets verbs: - get - list - watch - create - delete - patch - update The strimzi-kafka-client ClusterRole represents the access needed by the components based on Kafka clients which use the client rack-awareness. As described in the Delegated privileges section, this role is also needed by the Cluster Operator in order to be able to delegate this access. 
ClusterRole for the Cluster Operator allowing it to delegate access to OpenShift nodes to the Kafka client based pods apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: strimzi-kafka-client labels: app: strimzi rules: - apiGroups: - "" resources: # The Kafka clients (Connect, Mirror Maker, etc.) require "get" permissions to view the node they are on # This information is used to generate a Rack ID (client.rack option) that is used for consuming from the closest # replicas when enabled - nodes verbs: - get 5.1.2.4. ClusterRoleBindings The operator needs ClusterRoleBindings and RoleBindings which associates its ClusterRole with its ServiceAccount : ClusterRoleBindings are needed for ClusterRoles containing cluster-scoped resources. Example ClusterRoleBinding for the Cluster Operator apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: strimzi-cluster-operator labels: app: strimzi subjects: - kind: ServiceAccount name: strimzi-cluster-operator namespace: myproject roleRef: kind: ClusterRole name: strimzi-cluster-operator-global apiGroup: rbac.authorization.k8s.io ClusterRoleBindings are also needed for the ClusterRoles needed for delegation: Example ClusterRoleBinding for the Cluster Operator for the Kafka broker rack-awarness apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: strimzi-cluster-operator-kafka-broker-delegation labels: app: strimzi # The Kafka broker cluster role must be bound to the cluster operator service account so that it can delegate the cluster role to the Kafka brokers. # This must be done to avoid escalating privileges which would be blocked by Kubernetes. subjects: - kind: ServiceAccount name: strimzi-cluster-operator namespace: myproject roleRef: kind: ClusterRole name: strimzi-kafka-broker apiGroup: rbac.authorization.k8s.io and Example ClusterRoleBinding for the Cluster Operator for the Kafka client rack-awarness apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: strimzi-cluster-operator-kafka-client-delegation labels: app: strimzi # The Kafka clients cluster role must be bound to the cluster operator service account so that it can delegate the # cluster role to the Kafka clients using it for consuming from closest replica. # This must be done to avoid escalating privileges which would be blocked by Kubernetes. subjects: - kind: ServiceAccount name: strimzi-cluster-operator namespace: myproject roleRef: kind: ClusterRole name: strimzi-kafka-client apiGroup: rbac.authorization.k8s.io ClusterRoles containing only namespaced resources are bound using RoleBindings only. apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: strimzi-cluster-operator labels: app: strimzi subjects: - kind: ServiceAccount name: strimzi-cluster-operator namespace: myproject roleRef: kind: ClusterRole name: strimzi-cluster-operator-namespaced apiGroup: rbac.authorization.k8s.io apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: strimzi-cluster-operator-entity-operator-delegation labels: app: strimzi # The Entity Operator cluster role must be bound to the cluster operator service account so that it can delegate the cluster role to the Entity Operator. # This must be done to avoid escalating privileges which would be blocked by Kubernetes. subjects: - kind: ServiceAccount name: strimzi-cluster-operator namespace: myproject roleRef: kind: ClusterRole name: strimzi-entity-operator apiGroup: rbac.authorization.k8s.io 5.2. 
Using the Topic Operator When you create, modify or delete a topic using the KafkaTopic resource, the Topic Operator ensures those changes are reflected in the Kafka cluster. The Deploying and Upgrading AMQ Streams on OpenShift guide provides instructions to deploy the Topic Operator: Using the Cluster Operator (recommended) Standalone to operate with Kafka clusters not managed by AMQ Streams 5.2.1. Kafka topic resource The KafkaTopic resource is used to configure topics, including the number of partitions and replicas. The full schema for KafkaTopic is described in KafkaTopic schema reference . 5.2.1.1. Identifying a Kafka cluster for topic handling A KafkaTopic resource includes a label that defines the appropriate name of the Kafka cluster (derived from the name of the Kafka resource) to which it belongs. For example: apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaTopic metadata: name: topic-name-1 labels: strimzi.io/cluster: my-cluster The label is used by the Topic Operator to identify the KafkaTopic resource and create a new topic, and also in subsequent handling of the topic. If the label does not match the Kafka cluster, the Topic Operator cannot identify the KafkaTopic and the topic is not created. 5.2.1.2. Kafka topic usage recommendations When working with topics, be consistent. Always operate on either KafkaTopic resources or topics directly in OpenShift. Avoid routinely switching between both methods for a given topic. Use topic names that reflect the nature of the topic, and remember that names cannot be changed later. If creating a topic in Kafka, use a name that is a valid OpenShift resource name, otherwise the Topic Operator will need to create the corresponding KafkaTopic with a name that conforms to the OpenShift rules. Note Recommendations for identifiers and names in OpenShift are outlined in Identifiers and Names in OpenShift community article. 5.2.1.3. Kafka topic naming conventions Kafka and OpenShift impose their own validation rules for the naming of topics in Kafka and KafkaTopic.metadata.name respectively. There are valid names for each which are invalid in the other. Using the spec.topicName property, it is possible to create a valid topic in Kafka with a name that would be invalid for the Kafka topic in OpenShift. The spec.topicName property inherits Kafka naming validation rules: The name must not be longer than 249 characters. Valid characters for Kafka topics are ASCII alphanumerics, . , _ , and - . The name cannot be . or .. , though . can be used in a name, such as exampleTopic. or .exampleTopic . spec.topicName must not be changed. For example: apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaTopic metadata: name: topic-name-1 spec: topicName: topicName-1 1 # ... 1 Upper case is invalid in OpenShift. cannot be changed to: apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaTopic metadata: name: topic-name-1 spec: topicName: name-2 # ... Note Some Kafka client applications, such as Kafka Streams, can create topics in Kafka programmatically. If those topics have names that are invalid OpenShift resource names, the Topic Operator gives them a valid metadata.name based on the Kafka name. Invalid characters are replaced and a hash is appended to the name. For example: apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaTopic metadata: name: mytopic---c55e57fe2546a33f9e603caf57165db4072e827e spec: topicName: myTopic # ... 5.2.2. Topic Operator topic store The Topic Operator uses Kafka to store topic metadata describing topic configuration as key-value pairs. 
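Because every topic the operator manages is represented by a KafkaTopic resource carrying the strimzi.io/cluster label described above, listing resources by that label is a quick way to see what the Topic Operator is currently tracking. The following is an illustrative sketch only; the namespace kafka is an assumption, while my-cluster matches the cluster name used in the examples above.
# List the KafkaTopic resources that belong to one Kafka cluster (namespace is assumed)
oc get kafkatopics -n kafka -l strimzi.io/cluster=my-cluster
# Inspect one topic in full, including any operator-generated metadata.name
oc get kafkatopic my-topic -n kafka -o yaml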
The topic store is based on the Kafka Streams key-value mechanism, which uses Kafka topics to persist the state. Topic metadata is cached in-memory and accessed locally within the Topic Operator. Updates from operations applied to the local in-memory cache are persisted to a backup topic store on disk. The topic store is continually synchronized with updates from Kafka topics or OpenShift KafkaTopic custom resources. Operations are handled rapidly with the topic store set up this way, but should the in-memory cache crash it is automatically repopulated from the persistent storage. 5.2.2.1. Internal topic store topics Internal topics support the handling of topic metadata in the topic store. __strimzi_store_topic Input topic for storing the topic metadata __strimzi-topic-operator-kstreams-topic-store-changelog Retains a log of compacted topic store values Warning Do not delete these topics, as they are essential to the running of the Topic Operator. 5.2.2.2. Migrating topic metadata from ZooKeeper In previous releases of AMQ Streams, topic metadata was stored in ZooKeeper. The new process removes this requirement, bringing the metadata into the Kafka cluster, and under the control of the Topic Operator. When upgrading to AMQ Streams 1.7, the transition to Topic Operator control of the topic store is seamless. Metadata is found and migrated from ZooKeeper, and the old store is deleted. 5.2.2.3. Downgrading to an AMQ Streams version that uses ZooKeeper to store topic metadata If you are reverting back to a version of AMQ Streams earlier than 0.22, which uses ZooKeeper for the storage of topic metadata, you still downgrade your Cluster Operator to the previous version, then downgrade Kafka brokers and client applications to the previous Kafka version as standard. However, you must also delete the topics that were created for the topic store using a kafka-admin command, specifying the bootstrap address of the Kafka cluster. For example: oc run kafka-admin -ti --image=registry.redhat.io/amq7/amq-streams-kafka-27-rhel7:1.7.0 --rm=true --restart=Never -- ./bin/kafka-topics.sh --bootstrap-server localhost:9092 --topic __strimzi-topic-operator-kstreams-topic-store-changelog --delete && ./bin/kafka-topics.sh --bootstrap-server localhost:9092 --topic __strimzi_store_topic --delete The command must correspond to the type of listener and authentication used to access the Kafka cluster. The Topic Operator will reconstruct the ZooKeeper topic metadata from the state of the topics in Kafka. 5.2.2.4. Topic Operator topic replication and scaling The recommended configuration for topics managed by the Topic Operator is a topic replication factor of 3, and a minimum of 2 in-sync replicas. apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaTopic metadata: name: my-topic labels: strimzi.io/cluster: my-cluster spec: partitions: 1 1 replicas: 3 2 config: min.insync.replicas=2 3 #... 1 The number of partitions for the topic. Generally, 1 partition is sufficient. 2 The number of replica topic partitions. Currently, this cannot be changed in the KafkaTopic resource, but it can be changed using the kafka-reassign-partitions.sh tool. 3 The minimum number of replica partitions that a message must be successfully written to, or an exception is raised. Note In-sync replicas are used in conjunction with the acks configuration for producer applications. The acks configuration determines the number of follower partitions a message must be replicated to before the message is acknowledged as successfully received.
The Topic Operator runs with acks=all , whereby messages must be acknowledged by all in-sync replicas. When scaling Kafka clusters by adding or removing brokers, replication factor configuration is not changed and replicas are not reassigned automatically. However, you can use the kafka-reassign-partitions.sh tool to change the replication factor, and manually reassign replicas to brokers. Alternatively, though the integration of Cruise Control for AMQ Streams cannot change the replication factor for topics, the optimization proposals it generates for rebalancing Kafka include commands that transfer partition replicas and change partition leadership. 5.2.2.5. Handling changes to topics A fundamental problem that the Topic Operator needs to solve is that there is no single source of truth: both the KafkaTopic resource and the Kafka topic can be modified independently of the Topic Operator. Complicating this, the Topic Operator might not always be able to observe changes at each end in real time. For example, when the Topic Operator is down. To resolve this, the Topic Operator maintains information about each topic in the topic store. When a change happens in the Kafka cluster or OpenShift, it looks at both the state of the other system and the topic store in order to determine what needs to change to keep everything in sync. The same thing happens whenever the Topic Operator starts, and periodically while it is running. For example, suppose the Topic Operator is not running, and a KafkaTopic called my-topic is created. When the Topic Operator starts, the topic store does not contain information on my-topic, so it can infer that the KafkaTopic was created after it was last running. The Topic Operator creates the topic corresponding to my-topic, and also stores metadata for my-topic in the topic store. If you update Kafka topic configuration or apply a change through the KafkaTopic custom resource, the topic store is updated after the Kafka cluster is reconciled. The topic store also allows the Topic Operator to manage scenarios where the topic configuration is changed in Kafka topics and updated through OpenShift KafkaTopic custom resources, as long as the changes are not incompatible. For example, it is possible to make changes to the same topic config key, but to different values. For incompatible changes, the Kafka configuration takes priority, and the KafkaTopic is updated accordingly. Note You can also use the KafkaTopic resource to delete topics using a oc delete -f KAFKA-TOPIC-CONFIG-FILE command. To be able to do this, delete.topic.enable must be set to true (default) in the spec.kafka.config of the Kafka resource. Additional resources Downgrading AMQ Streams Producer configuration tuning and data durability Scaling cluster and partition reassignment Cruise Control for cluster rebalancing 5.2.3. Configuring a Kafka topic Use the properties of the KafkaTopic resource to configure a Kafka topic. You can use oc apply to create or modify topics, and oc delete to delete existing topics. For example: oc apply -f <topic-config-file> oc delete KafkaTopic <topic-name> This procedure shows how to create a topic with 10 partitions and 2 replicas. 
Before you start It is important that you consider the following before making your changes: Kafka does not support making the following changes through the KafkaTopic resource: Changing topic names using spec.topicName Decreasing partition size using spec.partitions You cannot use spec.replicas to change the number of replicas that were initially specified. Increasing spec.partitions for topics with keys will change how records are partitioned, which can be particularly problematic when the topic uses semantic partitioning . Prerequisites A running Kafka cluster configured with a Kafka broker listener using TLS authentication and encryption . A running Topic Operator (typically deployed with the Entity Operator ). For deleting a topic, delete.topic.enable=true (default) in the spec.kafka.config of the Kafka resource. Procedure Prepare a file containing the KafkaTopic to be created. An example KafkaTopic apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaTopic metadata: name: orders labels: strimzi.io/cluster: my-cluster spec: partitions: 10 replicas: 2 Tip When modifying a topic, you can get the current version of the resource using oc get kafkatopic orders -o yaml . Create the KafkaTopic resource in OpenShift. oc apply -f TOPIC-CONFIG-FILE 5.2.4. Configuring the Topic Operator with resource requests and limits You can allocate resources, such as CPU and memory, to the Topic Operator and set a limit on the amount of resources it can consume. Prerequisites The Cluster Operator is running. Procedure Update the Kafka cluster configuration in an editor, as required: oc edit kafka MY-CLUSTER In the spec.entityOperator.topicOperator.resources property in the Kafka resource, set the resource requests and limits for the Topic Operator. apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: # Kafka and ZooKeeper sections... entityOperator: topicOperator: resources: requests: cpu: "1" memory: 500Mi limits: cpu: "1" memory: 500Mi Apply the new configuration to create or update the resource. oc apply -f KAFKA-CONFIG-FILE 5.3. Using the User Operator When you create, modify or delete a user using the KafkaUser resource, the User Operator ensures those changes are reflected in the Kafka cluster. The Deploying and Upgrading AMQ Streams on OpenShift guide provides instructions to deploy the User Operator: Using the Cluster Operator (recommended) Standalone to operate with Kafka clusters not managed by AMQ Streams For more information about the schema, see KafkaUser schema reference . Authenticating and authorizing access to Kafka Use KafkaUser to enable the authentication and authorization mechanisms that a specific client uses to access Kafka. For more information on using KafkaUser to manage users and secure access to Kafka brokers, see Securing access to Kafka brokers . 5.3.1. Configuring the User Operator with resource requests and limits You can allocate resources, such as CPU and memory, to the User Operator and set a limit on the amount of resources it can consume. Prerequisites The Cluster Operator is running. Procedure Update the Kafka cluster configuration in an editor, as required: oc edit kafka MY-CLUSTER In the spec.entityOperator.userOperator.resources property in the Kafka resource, set the resource requests and limits for the User Operator. apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: # Kafka and ZooKeeper sections... entityOperator: userOperator: resources: requests: cpu: "1" memory: 500Mi limits: cpu: "1" memory: 500Mi Save the file and exit the editor.
The Cluster Operator applies the changes automatically. 5.4. Monitoring operators using Prometheus metrics AMQ Streams operators expose Prometheus metrics. The metrics are automatically enabled and contain information about: Number of reconciliations Number of Custom Resources the operator is processing Duration of reconciliations JVM metrics from the operators Additionally, we provide example Grafana dashboards. For more information, see Setting up metrics and dashboards for AMQ Streams in the Deploying and upgrading AMQ Streams on OpenShift guide.
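Before wiring the operators into Prometheus, you can confirm that metrics are actually being served by querying the Cluster Operator pod directly. The following is a hedged sketch: the namespace myproject, the deployment name strimzi-cluster-operator, and port 8080 for the operator's HTTP endpoint are assumptions that you should verify against your installation.
# Forward the Cluster Operator's HTTP port to the local machine (port 8080 is assumed)
oc port-forward deployment/strimzi-cluster-operator 8080:8080 -n myproject &
# Fetch the Prometheus metrics and look for the reconciliation counters mentioned above
curl -s http://localhost:8080/metrics | grep -i reconcil | head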
[ "env: - name: STRIMZI_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace", "env: - name: STRIMZI_OPERATOR_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace", "env: - name: STRIMZI_OPERATOR_NAMESPACE_LABELS value: label1=value1,label2=value2", "env: - name: STRIMZI_CUSTOM_RESOURCE_SELECTOR value: label1=value1,label2=value2", "env: - name: STRIMZI_KUBERNETES_VERSION value: | major=1 minor=16 gitVersion=v1.16.2 gitCommit=c97fe5036ef3df2967d086711e6c0c405941e14b gitTreeState=clean buildDate=2019-10-15T19:09:08Z goVersion=go1.12.10 compiler=gc platform=linux/amd64", "<cluster-name> -kafka-0. <cluster-name> -kafka-brokers. <namespace> .svc. cluster.local", "create -f install/cluster-operator/050-ConfigMap-strimzi-cluster-operator.yaml", "edit cm strimzi-cluster-operator", "# env: # - name: STRIMZI_OPERATOR_NAMESPACE_LABELS value: label1=value1,label2=value2 #", "apiVersion: v1 kind: ServiceAccount metadata: name: strimzi-cluster-operator labels: app: strimzi", "apiVersion: apps/v1 kind: Deployment metadata: name: strimzi-cluster-operator labels: app: strimzi spec: replicas: 1 selector: matchLabels: name: strimzi-cluster-operator strimzi.io/kind: cluster-operator template: #", "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: strimzi-cluster-operator-namespaced labels: app: strimzi rules: - apiGroups: - \"rbac.authorization.k8s.io\" resources: # The cluster operator needs to access and manage rolebindings to grant Strimzi components cluster permissions - rolebindings verbs: - get - list - watch - create - delete - patch - update - apiGroups: - \"rbac.authorization.k8s.io\" resources: # The cluster operator needs to access and manage roles to grant the entity operator permissions - roles verbs: - get - list - watch - create - delete - patch - update - apiGroups: - \"\" resources: # The cluster operator needs to access and delete pods, this is to allow it to monitor pod health and coordinate rolling updates - pods # The cluster operator needs to access and manage service accounts to grant Strimzi components cluster permissions - serviceaccounts # The cluster operator needs to access and manage config maps for Strimzi components configuration - configmaps # The cluster operator needs to access and manage services and endpoints to expose Strimzi components to network traffic - services - endpoints # The cluster operator needs to access and manage secrets to handle credentials - secrets # The cluster operator needs to access and manage persistent volume claims to bind them to Strimzi components for persistent data - persistentvolumeclaims verbs: - get - list - watch - create - delete - patch - update - apiGroups: - \"kafka.strimzi.io\" resources: # The cluster operator runs the KafkaAssemblyOperator, which needs to access and manage Kafka resources - kafkas - kafkas/status # The cluster operator runs the KafkaConnectAssemblyOperator, which needs to access and manage KafkaConnect resources - kafkaconnects - kafkaconnects/status # The cluster operator runs the KafkaConnectS2IAssemblyOperator, which needs to access and manage KafkaConnectS2I resources - kafkaconnects2is - kafkaconnects2is/status # The cluster operator runs the KafkaConnectorAssemblyOperator, which needs to access and manage KafkaConnector resources - kafkaconnectors - kafkaconnectors/status # The cluster operator runs the KafkaMirrorMakerAssemblyOperator, which needs to access and manage KafkaMirrorMaker resources - kafkamirrormakers - kafkamirrormakers/status # The cluster operator runs 
the KafkaBridgeAssemblyOperator, which needs to access and manage BridgeMaker resources - kafkabridges - kafkabridges/status # The cluster operator runs the KafkaMirrorMaker2AssemblyOperator, which needs to access and manage KafkaMirrorMaker2 resources - kafkamirrormaker2s - kafkamirrormaker2s/status # The cluster operator runs the KafkaRebalanceAssemblyOperator, which needs to access and manage KafkaRebalance resources - kafkarebalances - kafkarebalances/status verbs: - get - list - watch - create - delete - patch - update - apiGroups: # The cluster operator needs the extensions api as the operator supports Kubernetes version 1.11+ # apps/v1 was introduced in Kubernetes 1.14 - \"extensions\" resources: # The cluster operator needs to access and manage deployments to run deployment based Strimzi components - deployments - deployments/scale # The cluster operator needs to access replica sets to manage Strimzi components and to determine error states - replicasets # The cluster operator needs to access and manage replication controllers to manage replicasets - replicationcontrollers # The cluster operator needs to access and manage network policies to lock down communication between Strimzi components - networkpolicies # The cluster operator needs to access and manage ingresses which allow external access to the services in a cluster - ingresses verbs: - get - list - watch - create - delete - patch - update - apiGroups: - \"apps\" resources: # The cluster operator needs to access and manage deployments to run deployment based Strimzi components - deployments - deployments/scale - deployments/status # The cluster operator needs to access and manage stateful sets to run stateful sets based Strimzi components - statefulsets # The cluster operator needs to access replica-sets to manage Strimzi components and to determine error states - replicasets verbs: - get - list - watch - create - delete - patch - update - apiGroups: - \"\" resources: # The cluster operator needs to be able to create events and delegate permissions to do so - events verbs: - create - apiGroups: # OpenShift S2I requirements - apps.openshift.io resources: - deploymentconfigs - deploymentconfigs/scale - deploymentconfigs/status - deploymentconfigs/finalizers verbs: - get - list - watch - create - delete - patch - update - apiGroups: # OpenShift S2I requirements - build.openshift.io resources: - buildconfigs - buildconfigs/instantiate - builds verbs: - get - list - watch - create - delete - patch - update - apiGroups: # OpenShift S2I requirements - image.openshift.io resources: - imagestreams - imagestreams/status verbs: - get - list - watch - create - delete - patch - update - apiGroups: - networking.k8s.io resources: # The cluster operator needs to access and manage network policies to lock down communication between Strimzi components - networkpolicies # The cluster operator needs to access and manage ingresses which allow external access to the services in a cluster - ingresses verbs: - get - list - watch - create - delete - patch - update - apiGroups: - route.openshift.io resources: # The cluster operator needs to access and manage routes to expose Strimzi components for external access - routes - routes/custom-host verbs: - get - list - watch - create - delete - patch - update - apiGroups: - policy resources: # The cluster operator needs to access and manage pod disruption budgets this limits the number of concurrent disruptions # that a Strimzi component experiences, allowing for higher availability - 
poddisruptionbudgets verbs: - get - list - watch - create - delete - patch - update", "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: strimzi-cluster-operator-global labels: app: strimzi rules: - apiGroups: - \"rbac.authorization.k8s.io\" resources: # The cluster operator needs to create and manage cluster role bindings in the case of an install where a user # has specified they want their cluster role bindings generated - clusterrolebindings verbs: - get - list - watch - create - delete - patch - update - apiGroups: - storage.k8s.io resources: # The cluster operator requires \"get\" permissions to view storage class details # This is because only a persistent volume of a supported storage class type can be resized - storageclasses verbs: - get - apiGroups: - \"\" resources: # The cluster operator requires \"list\" permissions to view all nodes in a cluster # The listing is used to determine the node addresses when NodePort access is configured # These addresses are then exposed in the custom resource states - nodes verbs: - list", "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: strimzi-kafka-broker labels: app: strimzi rules: - apiGroups: - \"\" resources: # The Kafka Brokers require \"get\" permissions to view the node they are on # This information is used to generate a Rack ID that is used for High Availability configurations - nodes verbs: - get", "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: strimzi-entity-operator labels: app: strimzi rules: - apiGroups: - \"kafka.strimzi.io\" resources: # The entity operator runs the KafkaTopic assembly operator, which needs to access and manage KafkaTopic resources - kafkatopics - kafkatopics/status # The entity operator runs the KafkaUser assembly operator, which needs to access and manage KafkaUser resources - kafkausers - kafkausers/status verbs: - get - list - watch - create - patch - update - delete - apiGroups: - \"\" resources: - events verbs: # The entity operator needs to be able to create events - create - apiGroups: - \"\" resources: # The entity operator user-operator needs to access and manage secrets to store generated credentials - secrets verbs: - get - list - watch - create - delete - patch - update", "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: strimzi-kafka-client labels: app: strimzi rules: - apiGroups: - \"\" resources: # The Kafka clients (Connect, Mirror Maker, etc.) require \"get\" permissions to view the node they are on # This information is used to generate a Rack ID (client.rack option) that is used for consuming from the closest # replicas when enabled - nodes verbs: - get", "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: strimzi-cluster-operator labels: app: strimzi subjects: - kind: ServiceAccount name: strimzi-cluster-operator namespace: myproject roleRef: kind: ClusterRole name: strimzi-cluster-operator-global apiGroup: rbac.authorization.k8s.io", "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: strimzi-cluster-operator-kafka-broker-delegation labels: app: strimzi The Kafka broker cluster role must be bound to the cluster operator service account so that it can delegate the cluster role to the Kafka brokers. This must be done to avoid escalating privileges which would be blocked by Kubernetes. 
subjects: - kind: ServiceAccount name: strimzi-cluster-operator namespace: myproject roleRef: kind: ClusterRole name: strimzi-kafka-broker apiGroup: rbac.authorization.k8s.io", "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: strimzi-cluster-operator-kafka-client-delegation labels: app: strimzi The Kafka clients cluster role must be bound to the cluster operator service account so that it can delegate the cluster role to the Kafka clients using it for consuming from closest replica. This must be done to avoid escalating privileges which would be blocked by Kubernetes. subjects: - kind: ServiceAccount name: strimzi-cluster-operator namespace: myproject roleRef: kind: ClusterRole name: strimzi-kafka-client apiGroup: rbac.authorization.k8s.io", "apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: strimzi-cluster-operator labels: app: strimzi subjects: - kind: ServiceAccount name: strimzi-cluster-operator namespace: myproject roleRef: kind: ClusterRole name: strimzi-cluster-operator-namespaced apiGroup: rbac.authorization.k8s.io", "apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: strimzi-cluster-operator-entity-operator-delegation labels: app: strimzi The Entity Operator cluster role must be bound to the cluster operator service account so that it can delegate the cluster role to the Entity Operator. This must be done to avoid escalating privileges which would be blocked by Kubernetes. subjects: - kind: ServiceAccount name: strimzi-cluster-operator namespace: myproject roleRef: kind: ClusterRole name: strimzi-entity-operator apiGroup: rbac.authorization.k8s.io", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaTopic metadata: name: topic-name-1 labels: strimzi.io/cluster: my-cluster", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaTopic metadata: name: topic-name-1 spec: topicName: topicName-1 1 #", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaTopic metadata: name: topic-name-1 spec: topicName: name-2 #", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaTopic metadata: name: mytopic---c55e57fe2546a33f9e603caf57165db4072e827e spec: topicName: myTopic #", "run kafka-admin -ti --image=registry.redhat.io/amq7/amq-streams-kafka-27-rhel7:1.7.0 --rm=true --restart=Never -- ./bin/kafka-topics.sh --bootstrap-server localhost:9092 --topic __strimzi-topic-operator-kstreams-topic-store-changelog --delete && ./bin/kafka-topics.sh --bootstrap-server localhost:9092 --topic __strimzi_store_topic --delete", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaTopic metadata: name: my-topic labels: strimzi.io/cluster: my-cluster spec: partitions: 1 1 replicas: 3 2 config: min.insync.replicas=2 3 #", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaTopic metadata: name: orders labels: strimzi.io/cluster: my-cluster spec: partitions: 10 replicas: 2", "apply -f TOPIC-CONFIG-FILE", "edit kafka MY-CLUSTER", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: # Kafka and ZooKeeper sections entityOperator: topicOperator: resources: requests: cpu: \"1\" memory: 500Mi limits: cpu: \"1\" memory: 500Mi", "apply -f KAFKA-CONFIG-FILE", "edit kafka MY-CLUSTER", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: # Kafka and ZooKeeper sections entityOperator: userOperator: resources: requests: cpu: \"1\" memory: 500Mi limits: cpu: \"1\" memory: 500Mi" ]
https://docs.redhat.com/en/documentation/red_hat_amq/2021.q2/html/using_amq_streams_on_openshift/assembly-operators-str
Chapter 15. Migrating virtual machine instances between Compute nodes
Chapter 15. Migrating virtual machine instances between Compute nodes You sometimes need to migrate instances from one Compute node to another Compute node in the overcloud, to perform maintenance, rebalance the workload, or replace a failed or failing node. Compute node maintenance If you need to temporarily take a Compute node out of service, for instance, to perform hardware maintenance or repair, kernel upgrades and software updates, you can migrate instances running on the Compute node to another Compute node. Failing Compute node If a Compute node is about to fail and you need to service it or replace it, you can migrate instances from the failing Compute node to a healthy Compute node. Failed Compute nodes If a Compute node has already failed, you can evacuate the instances. You can rebuild instances from the original image on another Compute node, using the same name, UUID, network addresses, and any other allocated resources the instance had before the Compute node failed. Workload rebalancing You can migrate one or more instances to another Compute node to rebalance the workload. For example, you can consolidate instances on a Compute node to conserve power, migrate instances to a Compute node that is physically closer to other networked resources to reduce latency, or distribute instances across Compute nodes to avoid hot spots and increase resiliency. Director configures all Compute nodes to provide secure migration. All Compute nodes also require a shared SSH key to provide the users of each host with access to other Compute nodes during the migration process. Director creates this key using the OS::TripleO::Services::NovaCompute composable service. This composable service is one of the main services included on all Compute roles by default. For more information, see Composable services and custom roles in the Customizing your Red Hat OpenStack Platform deployment guide. Note If you have a functioning Compute node, and you want to make a copy of an instance for backup purposes, or to copy the instance to a different environment, follow the procedure in Importing virtual machines into the overcloud in the Installing and managing Red Hat OpenStack Platform with director guide. 15.1. Migration types Red Hat OpenStack Platform (RHOSP) supports the following types of migration. Cold migration Cold migration, or non-live migration, involves shutting down a running instance before migrating it from the source Compute node to the destination Compute node. Cold migration involves some downtime for the instance. The migrated instance maintains access to the same volumes and IP addresses. Note Cold migration requires that both the source and destination Compute nodes are running. Live migration Live migration involves moving the instance from the source Compute node to the destination Compute node without shutting it down, and while maintaining state consistency. Live migrating an instance involves little or no perceptible downtime. However, live migration does impact performance for the duration of the migration operation. Therefore, instances should be taken out of the critical path while being migrated. Important Live migration impacts the performance of the workload being moved. Red Hat does not provide support for increased packet loss, network latency, memory latency or a reduction in network bandwidth, memory bandwidth, storage IO, or CPU performance during live migration. Note Live migration requires that both the source and destination Compute nodes are running.
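Both migration types assume that the nova-compute service is up on the source and destination nodes, so it is worth verifying this before you start. A minimal sketch, assuming you have already sourced the overcloud credentials file (overcloudrc):
# Load the overcloud credentials (path is deployment-specific)
source ~/overcloudrc
# Both Compute nodes should report State "up" and Status "enabled"
openstack compute service list --service nova-compute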
In some cases, instances cannot use live migration. For more information, see Migration constraints . Evacuation If you need to migrate instances because the source Compute node has already failed, you can evacuate the instances. 15.2. Migration constraints Migration constraints typically arise with block migration, configuration disks, or when one or more instances access physical hardware on the Compute node. CPU constraints The source and destination Compute nodes must have the same CPU architecture. For example, Red Hat does not support migrating an instance from a ppc64le CPU to a x86_64 CPU. Migration between different CPU models is not supported. In some cases, the CPU of the source and destination Compute node must match exactly, such as instances that use CPU host passthrough. In all cases, the CPU features of the destination node must be a superset of the CPU features on the source node. Memory constraints The destination Compute node must have sufficient available RAM. Memory oversubscription can cause migration to fail. Block migration constraints Migrating instances that use disks that are stored locally on a Compute node takes significantly longer than migrating volume-backed instances that use shared storage, such as Red Hat Ceph Storage. This latency arises because OpenStack Compute (nova) migrates local disks block-by-block between the Compute nodes over the control plane network by default. By contrast, volume-backed instances that use shared storage, such as Red Hat Ceph Storage, do not have to migrate the volumes, because each Compute node already has access to the shared storage. Note Network congestion in the control plane network caused by migrating local disks or instances that consume large amounts of RAM might impact the performance of other systems that use the control plane network, such as RabbitMQ. Read-only drive migration constraints Migrating a drive is supported only if the drive has both read and write capabilities. For example, OpenStack Compute (nova) cannot migrate a CD-ROM drive or a read-only config drive. However, OpenStack Compute (nova) can migrate a drive with both read and write capabilities, including a config drive with a drive format such as vfat . Live migration constraints In some cases, live migrating instances involves additional constraints. Important Live migration impacts the performance of the workload being moved. Red Hat does not provide support for increased packet loss, network latency, memory latency or a reduction in network bandwidth, memory bandwidth, storage IO, or CPU performance during live migration. No new operations during migration To achieve state consistency between the copies of the instance on the source and destination nodes, RHOSP must prevent new operations during live migration. Otherwise, live migration might take a long time or potentially never end if writes to memory occur faster than live migration can replicate the state of the memory. CPU pinning with NUMA The NovaSchedulerEnabledFilters parameter in the Compute configuration must include the values AggregateInstanceExtraSpecsFilter and NUMATopologyFilter . Multi-cell clouds In a multi-cell cloud, you can live migrate instances to a different host in the same cell, but not across cells. 
Floating instances When live migrating floating instances, if the configuration of NovaComputeCpuSharedSet on the destination Compute node is different from the configuration of NovaComputeCpuSharedSet on the source Compute node, the instances will not be allocated to the CPUs configured for shared (unpinned) instances on the destination Compute node. Therefore, if you need to live migrate floating instances, you must configure all the Compute nodes with the same CPU mappings for dedicated (pinned) and shared (unpinned) instances, or use a host aggregate for the shared instances. Destination Compute node capacity The destination Compute node must have sufficient capacity to host the instance that you want to migrate. SR-IOV live migration Instances with SR-IOV-based network interfaces can be live migrated. Live migrating instances with direct mode SR-IOV network interfaces incurs network downtime. This is because the direct mode interfaces need to be detached and re-attached during the migration. Live migration on ML2/OVS deployments During the live migration process, when the virtual machine is unpaused in the destination host, the metadata service might not be available because the metadata server proxy has not yet spawned. This unavailability is brief. The service becomes available momentarily and the live migration succeeds. Constraints that preclude live migration You cannot live migrate an instance that uses the following features. PCI passthrough QEMU/KVM hypervisors support attaching PCI devices on the Compute node to an instance. Use PCI passthrough to give an instance exclusive access to PCI devices, which appear and behave as if they are physically attached to the operating system of the instance. However, because PCI passthrough involves direct access to the physical devices, QEMU/KVM does not support live migration of instances using PCI passthrough. Port resource requests You cannot live migrate an instance that uses a port that has resource requests, such as a guaranteed minimum bandwidth QoS policy. Use the following command to check if a port has resource requests: 15.3. Preparing to migrate Before you migrate one or more instances, you need to determine the Compute node names and the IDs of the instances to migrate. Procedure Identify the source Compute node host name and the destination Compute node host name: List the instances on the source Compute node and locate the ID of the instance or instances that you want to migrate: Replace <source> with the name or ID of the source Compute node. Optional: If you are migrating instances from a source Compute node to perform maintenance on the node, you must disable the node to prevent the scheduler from assigning new instances to the node during maintenance: Replace <source> with the host name of the source Compute node. You are now ready to perform the migration. Follow the required procedure detailed in Cold migrating an instance or Live migrating an instance . 15.4. Cold migrating an instance Cold migrating an instance involves stopping the instance and moving it to another Compute node. Cold migration facilitates migration scenarios that live migrating cannot facilitate, such as migrating instances that use PCI passthrough. The scheduler automatically selects the destination Compute node. For more information, see Migration constraints . Procedure To cold migrate an instance, enter the following command to power off and move the instance: Replace <instance> with the name or ID of the instance to migrate. 
Specify the --block-migration flag if migrating a locally stored volume. Wait for migration to complete. While you wait for the instance migration to complete, you can check the migration status. For more information, see Checking migration status . Check the status of the instance: A status of "VERIFY_RESIZE" indicates you need to confirm or revert the migration: If the migration worked as expected, confirm it: Replace <instance> with the name or ID of the instance to migrate. A status of "ACTIVE" indicates that the instance is ready to use. If the migration did not work as expected, revert it: Replace <instance> with the name or ID of the instance. Restart the instance: Replace <instance> with the name or ID of the instance. Optional: If you disabled the source Compute node for maintenance, you must re-enable the node so that new instances can be assigned to it: Replace <source> with the host name of the source Compute node. 15.5. Live migrating an instance Live migration moves an instance from a source Compute node to a destination Compute node with a minimal amount of downtime. Live migration might not be appropriate for all instances. For more information, see Migration constraints . Procedure To live migrate an instance, specify the instance and the destination Compute node: Replace <instance> with the name or ID of the instance. Replace <dest> with the name or ID of the destination Compute node. Note The openstack server migrate command covers migrating instances with shared storage, which is the default. Specify the --block-migration flag to migrate a locally stored volume: Confirm that the instance is migrating: Wait for migration to complete. While you wait for the instance migration to complete, you can check the migration status. For more information, see Checking migration status . Check the status of the instance to confirm if the migration was successful: Replace <dest> with the name or ID of the destination Compute node. Optional: If you disabled the source Compute node for maintenance, you must re-enable the node so that new instances can be assigned to it: Replace <source> with the host name of the source Compute node. 15.6. Checking migration status Migration involves several state transitions before migration is complete. During a healthy migration, the migration state typically transitions as follows: Queued: The Compute service has accepted the request to migrate an instance, and migration is pending. Preparing: The Compute service is preparing to migrate the instance. Running: The Compute service is migrating the instance. Post-migrating: The Compute service has built the instance on the destination Compute node and is releasing resources on the source Compute node. Completed: The Compute service has completed migrating the instance and finished releasing resources on the source Compute node. Procedure Retrieve the list of migration IDs for the instance: Replace <instance> with the name or ID of the instance. Show the status of the migration: Replace <instance> with the name or ID of the instance. Replace <migration_id> with the ID of the migration. Running the openstack server migration show command returns the following example output: Tip The Compute service measures progress of the migration by the number of remaining memory bytes to copy. If this number does not decrease over time, the migration might be unable to complete, and the Compute service might abort it. Sometimes instance migration can take a long time or encounter errors. 
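For long-running migrations, one simple approach is to re-run the migration listing at a fixed interval until the status reaches a terminal state such as completed or error. This loop is an illustrative sketch rather than part of the documented procedure; the instance name my-instance and the 30-second interval are assumptions.
# Re-check the migration status every 30 seconds; press Ctrl+C to stop
watch -n 30 openstack server migration list --server my-instance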
For more information, see Troubleshooting migration . 15.7. Evacuating an instance If you want to move an instance from a failed or shut-down Compute node to a new host in the same environment, you can evacuate it. The evacuate process destroys the original instance and rebuilds it on another Compute node using the original image, instance name, UUID, network addresses, and any other resources the original instance had allocated to it. If the instance uses shared storage, the instance root disk is not rebuilt during the evacuate process, as the disk remains accessible by the destination Compute node. If the instance does not use shared storage, then the instance root disk is also rebuilt on the destination Compute node. Note You can only perform an evacuation when the Compute node is fenced, and the API reports that the state of the Compute node is "down" or "forced-down". If the Compute node is not reported as "down" or "forced-down", the evacuate command fails. To perform an evacuation, you must be a cloud administrator. 15.7.1. Evacuating an instance To evacuate all instances on a host, you must evacuate them one at a time. Procedure Confirm that the instance is not running: Replace <node> with the name or UUID of the Compute node that hosts the instance. Check the instance task state: Replace <instance> with the name or UUID of the instance that you want to evacuate. Note If the instance task state is not "NONE" the evacuation might fail. Confirm that the host Compute node is fenced or shut down: Replace <node> with the name or UUID of the Compute node that hosts the instance to evacuate. To perform an evacuation, the Compute node must have a status of down or forced-down . Disable the Compute node: Replace <node> with the name of the Compute node to evacuate the instance from. Replace <disable_host_reason> with details about why you disabled the Compute node. Evacuate the instance: Optional: Replace <dest> with the name of the Compute node to evacuate the instance to. If you do not specify the destination Compute node, the Compute scheduler selects one for you. You can find possible Compute nodes by using the following command: Optional: Replace <password> with the administrative password required to access the evacuated instance. If a password is not specified, a random password is generated and output when the evacuation is complete. Note The password is changed only when ephemeral instance disks are stored on the local hypervisor disk. The password is not changed if the instance is hosted on shared storage or has a Block Storage volume attached, and no error message is displayed to inform you that the password was not changed. Replace <instance> with the name or ID of the instance to evacuate. Note If the evacuation fails and the task state of the instance is not "NONE", contact Red Hat Support for help to recover the instance. Optional: Enable the Compute node when it is recovered: Replace <node> with the name of the Compute node to enable. 15.8. Troubleshooting migration The following issues can arise during instance migration: The migration process encounters errors. The migration process never ends. Performance of the instance degrades after migration. 15.8.1. Errors during migration The following issues can send the migration operation into an error state: Running a cluster with different versions of Red Hat OpenStack Platform (RHOSP). Specifying an instance ID that cannot be found. The instance you are trying to migrate is in an error state. 
The Compute service is shutting down. A race condition occurs. Live migration enters a failed state. When live migration enters a failed state, it is typically followed by an error state. The following common issues can cause a failed state: A destination Compute host is not available. A scheduler exception occurs. The rebuild process fails due to insufficient computing resources. A server group check fails. The instance on the source Compute node gets deleted before migration to the destination Compute node is complete. 15.8.2. Never-ending live migration Live migration can fail to complete, which leaves migration in a perpetual running state. A common reason for a live migration that never completes is that client requests to the instance running on the source Compute node create changes that occur faster than the Compute service can replicate them to the destination Compute node. Use one of the following methods to address this situation: Abort the live migration. Force the live migration to complete. Aborting live migration If the instance state changes faster than the migration procedure can copy it to the destination node, and you do not want to temporarily suspend the instance operations, you can abort the live migration. Procedure Retrieve the list of migrations for the instance: Replace <instance> with the name or ID of the instance. Abort the live migration: Replace <instance> with the name or ID of the instance. Replace <migration_id> with the ID of the migration. Forcing live migration to complete If the instance state changes faster than the migration procedure can copy it to the destination node, and you want to temporarily suspend the instance operations to force migration to complete, you can force the live migration procedure to complete. Important Forcing live migration to complete might lead to perceptible downtime. Procedure Retrieve the list of migrations for the instance: Replace <instance> with the name or ID of the instance. Force the live migration to complete: Replace <instance> with the name or ID of the instance. Replace <migration_id> with the ID of the migration. 15.8.3. Instance performance degrades after migration For instances that use a NUMA topology, the source and destination Compute nodes must have the same NUMA topology and configuration. The NUMA topology of the destination Compute node must have sufficient resources available. If the NUMA configuration between the source and destination Compute nodes is not the same, it is possible that live migration succeeds while the instance performance degrades. For example, if the source Compute node maps NIC 1 to NUMA node 0, but the destination Compute node maps NIC 1 to NUMA node 5, after migration the instance might route network traffic from a first CPU across the bus to a second CPU with NUMA node 5 to route traffic to NIC 1. This can result in expected behavior, but degraded performance. Similarly, if NUMA node 0 on the source Compute node has sufficient available CPU and RAM, but NUMA node 0 on the destination Compute node already has instances using some of the resources, the instance might run correctly but suffer performance degradation. For more information, see Migration constraints .
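When planning a live migration of a NUMA-sensitive instance, it can help to compare the NUMA layout of the source and destination hosts up front. The following is a hedged sketch using standard Linux tools; the node host names and the heat-admin SSH user are assumptions based on a typical director deployment.
# Compare NUMA node counts and per-node CPU lists on both Compute nodes
for host in overcloud-novacompute-0 overcloud-novacompute-1; do
  echo "== ${host} =="
  ssh heat-admin@"${host}" "lscpu | grep -i numa"
done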
[ "openstack port show <port_name/port_id>", "(undercloud)USD source ~/overcloudrc (overcloud)USD openstack compute service list", "(overcloud)USD openstack server list --host <source> --all-projects", "(overcloud)USD openstack compute service set <source> nova-compute --disable", "(overcloud)USD openstack server migrate <instance> --wait", "(overcloud)USD openstack server list --all-projects", "(overcloud)USD openstack server resize --confirm <instance>", "(overcloud)USD openstack server resize --revert <instance>", "(overcloud)USD openstack server start <instance>", "(overcloud)USD openstack compute service set <source> nova-compute --enable", "(overcloud)USD openstack server migrate <instance> --live-migration [--host <dest>] --wait", "(overcloud)USD openstack server migrate <instance> --live-migration [--host <dest>] --wait --block-migration", "(overcloud)USD openstack server show <instance> +----------------------+--------------------------------------+ | Field | Value | +----------------------+--------------------------------------+ | ... | ... | | status | MIGRATING | | ... | ... | +----------------------+--------------------------------------+", "(overcloud)USD openstack server list --host <dest> --all-projects", "(overcloud)USD openstack compute service set <source> nova-compute --enable", "openstack server migration list --server <instance> +----+-------------+----------- (...) | Id | Source Node | Dest Node | (...) +----+-------------+-----------+ (...) | 2 | - | - | (...) +----+-------------+-----------+ (...)", "openstack server migration show <instance> <migration_id>", "+------------------------+--------------------------------------+ | Property | Value | +------------------------+--------------------------------------+ | created_at | 2017-03-08T02:53:06.000000 | | dest_compute | controller | | dest_host | - | | dest_node | - | | disk_processed_bytes | 0 | | disk_remaining_bytes | 0 | | disk_total_bytes | 0 | | id | 2 | | memory_processed_bytes | 65502513 | | memory_remaining_bytes | 786427904 | | memory_total_bytes | 1091379200 | | server_uuid | d1df1b5a-70c4-4fed-98b7-423362f2c47c | | source_compute | compute2 | | source_node | - | | status | running | | updated_at | 2017-03-08T02:53:47.000000 | +------------------------+--------------------------------------+", "(overcloud)USD openstack server list --host <node> --all-projects", "(overcloud)USD openstack server show <instance> +----------------------+--------------------------------------+ | Field | Value | +----------------------+--------------------------------------+ | ... | ... | | status | NONE | | ... | ... | +----------------------+--------------------------------------+", "(overcloud)USD openstack baremetal node show <node>", "(overcloud)USD openstack compute service set <node> nova-compute --disable --disable-reason <disable_host_reason>", "(overcloud)USD openstack server evacuate [--host <dest>] [--password <password>] <instance>", "(overcloud)[stack@director ~]USD openstack hypervisor list", "(overcloud)USD openstack compute service set <node> nova-compute --enable", "openstack server migration list --server <instance>", "openstack server migration abort <instance> <migration_id>", "openstack server migration list --server <instance>", "openstack server migration force complete <instance> <migration_id>" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/configuring_the_compute_service_for_instance_creation/assembly_migrating-virtual-machine-instances-between-compute-nodes_migrating-instances
Chapter 3. ClusterServiceVersion [operators.coreos.com/v1alpha1]
Chapter 3. ClusterServiceVersion [operators.coreos.com/v1alpha1] Description ClusterServiceVersion is a Custom Resource of type ClusterServiceVersionSpec . Type object Required metadata spec 3.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object ClusterServiceVersionSpec declarations tell OLM how to install an operator that can manage apps for a given version. status object ClusterServiceVersionStatus represents information about the status of a CSV. Status may trail the actual state of a system. 3.1.1. .spec Description ClusterServiceVersionSpec declarations tell OLM how to install an operator that can manage apps for a given version. Type object Required displayName install Property Type Description annotations object (string) Annotations is an unstructured key value map stored with a resource that may be set by external tools to store and retrieve arbitrary metadata. apiservicedefinitions object APIServiceDefinitions declares all of the extension apis managed or required by an operator being ran by ClusterServiceVersion. cleanup object Cleanup specifies the cleanup behaviour when the CSV gets deleted customresourcedefinitions object CustomResourceDefinitions declares all of the CRDs managed or required by an operator being ran by ClusterServiceVersion. If the CRD is present in the Owned list, it is implicitly required. description string Description of the operator. Can include the features, limitations or use-cases of the operator. displayName string The name of the operator in display format. icon array The icon for this operator. icon[] object install object NamedInstallStrategy represents the block of an ClusterServiceVersion resource where the install strategy is specified. installModes array InstallModes specify supported installation types installModes[] object InstallMode associates an InstallModeType with a flag representing if the CSV supports it keywords array (string) A list of keywords describing the operator. labels object (string) Map of string keys and values that can be used to organize and categorize (scope and select) objects. links array A list of links related to the operator. links[] object maintainers array A list of organizational entities maintaining the operator. maintainers[] object maturity string minKubeVersion string nativeAPIs array nativeAPIs[] object GroupVersionKind unambiguously identifies a kind. It doesn't anonymously include GroupVersion to avoid automatic coercion. It doesn't use a GroupVersion to avoid custom marshalling provider object The publishing entity behind the operator. relatedImages array List any related images, or other container images that your Operator might require to perform their functions. This list should also include operand images as well. 
All image references should be specified by digest (SHA) and not by tag. This field is only used during catalog creation and plays no part in cluster runtime. relatedImages[] object replaces string The name of a CSV this one replaces. Should match the metadata.Name field of the old CSV. selector object Label selector for related resources. skips array (string) The name(s) of one or more CSV(s) that should be skipped in the upgrade graph. Should match the metadata.Name field of the CSV that should be skipped. This field is only used during catalog creation and plays no part in cluster runtime. version string webhookdefinitions array webhookdefinitions[] object WebhookDescription provides details to OLM about required webhooks 3.1.2. .spec.apiservicedefinitions Description APIServiceDefinitions declares all of the extension apis managed or required by an operator being ran by ClusterServiceVersion. Type object Property Type Description owned array owned[] object APIServiceDescription provides details to OLM about apis provided via aggregation required array required[] object APIServiceDescription provides details to OLM about apis provided via aggregation 3.1.3. .spec.apiservicedefinitions.owned Description Type array 3.1.4. .spec.apiservicedefinitions.owned[] Description APIServiceDescription provides details to OLM about apis provided via aggregation Type object Required group kind name version Property Type Description actionDescriptors array actionDescriptors[] object ActionDescriptor describes a declarative action that can be performed on a custom resource instance containerPort integer deploymentName string description string displayName string group string kind string name string resources array resources[] object APIResourceReference is a reference to a Kubernetes resource type that the referrer utilizes. specDescriptors array specDescriptors[] object SpecDescriptor describes a field in a spec block of a CRD so that OLM can consume it statusDescriptors array statusDescriptors[] object StatusDescriptor describes a field in a status block of a CRD so that OLM can consume it version string 3.1.5. .spec.apiservicedefinitions.owned[].actionDescriptors Description Type array 3.1.6. .spec.apiservicedefinitions.owned[].actionDescriptors[] Description ActionDescriptor describes a declarative action that can be performed on a custom resource instance Type object Required path Property Type Description description string displayName string path string value string RawMessage is a raw encoded JSON value. It implements Marshaler and Unmarshaler and can be used to delay JSON decoding or precompute a JSON encoding. x-descriptors array (string) 3.1.7. .spec.apiservicedefinitions.owned[].resources Description Type array 3.1.8. .spec.apiservicedefinitions.owned[].resources[] Description APIResourceReference is a reference to a Kubernetes resource type that the referrer utilizes. Type object Required kind name version Property Type Description kind string Kind of the referenced resource type. name string Plural name of the referenced resource type (CustomResourceDefinition.Spec.Names[].Plural). Empty string if the referenced resource type is not a custom resource. version string API Version of the referenced resource type. 3.1.9. .spec.apiservicedefinitions.owned[].specDescriptors Description Type array 3.1.10. 
.spec.apiservicedefinitions.owned[].specDescriptors[] Description SpecDescriptor describes a field in a spec block of a CRD so that OLM can consume it Type object Required path Property Type Description description string displayName string path string value string RawMessage is a raw encoded JSON value. It implements Marshaler and Unmarshaler and can be used to delay JSON decoding or precompute a JSON encoding. x-descriptors array (string) 3.1.11. .spec.apiservicedefinitions.owned[].statusDescriptors Description Type array 3.1.12. .spec.apiservicedefinitions.owned[].statusDescriptors[] Description StatusDescriptor describes a field in a status block of a CRD so that OLM can consume it Type object Required path Property Type Description description string displayName string path string value string RawMessage is a raw encoded JSON value. It implements Marshaler and Unmarshaler and can be used to delay JSON decoding or precompute a JSON encoding. x-descriptors array (string)
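Because the specDescriptors and statusDescriptors shapes above (path, displayName, description, x-descriptors) recur throughout this API, here is a hedged sketch of how such entries are commonly written inside an owned[] definition. The path values and the x-descriptors URNs are illustrative assumptions based on common OLM descriptor conventions, not values defined by this schema.

```yaml
# Illustrative descriptor entries; paths and URNs are assumptions, not schema-defined values.
specDescriptors:
  - path: replicas
    displayName: Replicas
    description: Desired number of workload replicas.
    x-descriptors:
      - urn:alm:descriptor:com.tectonic.ui:podCount   # assumed common UI descriptor
statusDescriptors:
  - path: phase
    displayName: Phase
    description: Current lifecycle phase reported in the status block.
    x-descriptors:
      - urn:alm:descriptor:io.kubernetes.phase        # assumed common status descriptor
```

The same four fields (description, displayName, path, x-descriptors) apply to the required[] descriptor sections that follow.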
3.1.13. .spec.apiservicedefinitions.required Description Type array 3.1.14. .spec.apiservicedefinitions.required[] Description APIServiceDescription provides details to OLM about apis provided via aggregation Type object Required group kind name version Property Type Description actionDescriptors array actionDescriptors[] object ActionDescriptor describes a declarative action that can be performed on a custom resource instance containerPort integer deploymentName string description string displayName string group string kind string name string resources array resources[] object APIResourceReference is a reference to a Kubernetes resource type that the referrer utilizes. specDescriptors array specDescriptors[] object SpecDescriptor describes a field in a spec block of a CRD so that OLM can consume it statusDescriptors array statusDescriptors[] object StatusDescriptor describes a field in a status block of a CRD so that OLM can consume it version string 3.1.15. .spec.apiservicedefinitions.required[].actionDescriptors Description Type array 3.1.16. .spec.apiservicedefinitions.required[].actionDescriptors[] Description ActionDescriptor describes a declarative action that can be performed on a custom resource instance Type object Required path Property Type Description description string displayName string path string value string RawMessage is a raw encoded JSON value. It implements Marshaler and Unmarshaler and can be used to delay JSON decoding or precompute a JSON encoding. x-descriptors array (string) 3.1.17. .spec.apiservicedefinitions.required[].resources Description Type array 3.1.18. .spec.apiservicedefinitions.required[].resources[] Description APIResourceReference is a reference to a Kubernetes resource type that the referrer utilizes. Type object Required kind name version Property Type Description kind string Kind of the referenced resource type. name string Plural name of the referenced resource type (CustomResourceDefinition.Spec.Names[].Plural). Empty string if the referenced resource type is not a custom resource. version string API Version of the referenced resource type. 3.1.19. .spec.apiservicedefinitions.required[].specDescriptors Description Type array 3.1.20. .spec.apiservicedefinitions.required[].specDescriptors[] Description SpecDescriptor describes a field in a spec block of a CRD so that OLM can consume it Type object Required path Property Type Description description string displayName string path string value string RawMessage is a raw encoded JSON value. It implements Marshaler and Unmarshaler and can be used to delay JSON decoding or precompute a JSON encoding. x-descriptors array (string) 3.1.21. .spec.apiservicedefinitions.required[].statusDescriptors Description Type array 3.1.22. .spec.apiservicedefinitions.required[].statusDescriptors[] Description StatusDescriptor describes a field in a status block of a CRD so that OLM can consume it Type object Required path Property Type Description description string displayName string path string value string RawMessage is a raw encoded JSON value. It implements Marshaler and Unmarshaler and can be used to delay JSON decoding or precompute a JSON encoding. x-descriptors array (string) 3.1.23. .spec.cleanup Description Cleanup specifies the cleanup behaviour when the CSV gets deleted Type object Required enabled Property Type Description enabled boolean 3.1.24. .spec.customresourcedefinitions Description CustomResourceDefinitions declares all of the CRDs managed or required by an operator being run by ClusterServiceVersion. If the CRD is present in the Owned list, it is implicitly required. Type object Property Type Description owned array owned[] object CRDDescription provides details to OLM about the CRDs required array required[] object CRDDescription provides details to OLM about the CRDs 3.1.25. .spec.customresourcedefinitions.owned Description Type array 3.1.26. .spec.customresourcedefinitions.owned[] Description CRDDescription provides details to OLM about the CRDs Type object Required kind name version Property Type Description actionDescriptors array actionDescriptors[] object ActionDescriptor describes a declarative action that can be performed on a custom resource instance description string displayName string kind string name string resources array resources[] object APIResourceReference is a reference to a Kubernetes resource type that the referrer utilizes. specDescriptors array specDescriptors[] object SpecDescriptor describes a field in a spec block of a CRD so that OLM can consume it statusDescriptors array statusDescriptors[] object StatusDescriptor describes a field in a status block of a CRD so that OLM can consume it version string 3.1.27. .spec.customresourcedefinitions.owned[].actionDescriptors Description Type array 3.1.28. .spec.customresourcedefinitions.owned[].actionDescriptors[] Description ActionDescriptor describes a declarative action that can be performed on a custom resource instance Type object Required path Property Type Description description string displayName string path string value string RawMessage is a raw encoded JSON value. It implements Marshaler and Unmarshaler and can be used to delay JSON decoding or precompute a JSON encoding. x-descriptors array (string) 3.1.29. .spec.customresourcedefinitions.owned[].resources Description Type array 3.1.30. .spec.customresourcedefinitions.owned[].resources[] Description APIResourceReference is a reference to a Kubernetes resource type that the referrer utilizes. Type object Required kind name version Property Type Description kind string Kind of the referenced resource type. name string Plural name of the referenced resource type (CustomResourceDefinition.Spec.Names[].Plural). Empty string if the referenced resource type is not a custom resource. version string API Version of the referenced resource type. 3.1.31. .spec.customresourcedefinitions.owned[].specDescriptors Description Type array 3.1.32.
.spec.customresourcedefinitions.owned[].specDescriptors[] Description SpecDescriptor describes a field in a spec block of a CRD so that OLM can consume it Type object Required path Property Type Description description string displayName string path string value string RawMessage is a raw encoded JSON value. It implements Marshaler and Unmarshaler and can be used to delay JSON decoding or precompute a JSON encoding. x-descriptors array (string) 3.1.33. .spec.customresourcedefinitions.owned[].statusDescriptors Description Type array 3.1.34. .spec.customresourcedefinitions.owned[].statusDescriptors[] Description StatusDescriptor describes a field in a status block of a CRD so that OLM can consume it Type object Required path Property Type Description description string displayName string path string value string RawMessage is a raw encoded JSON value. It implements Marshaler and Unmarshaler and can be used to delay JSON decoding or precompute a JSON encoding. x-descriptors array (string) 3.1.35. .spec.customresourcedefinitions.required Description Type array 3.1.36. .spec.customresourcedefinitions.required[] Description CRDDescription provides details to OLM about the CRDs Type object Required kind name version Property Type Description actionDescriptors array actionDescriptors[] object ActionDescriptor describes a declarative action that can be performed on a custom resource instance description string displayName string kind string name string resources array resources[] object APIResourceReference is a reference to a Kubernetes resource type that the referrer utilizes. specDescriptors array specDescriptors[] object SpecDescriptor describes a field in a spec block of a CRD so that OLM can consume it statusDescriptors array statusDescriptors[] object StatusDescriptor describes a field in a status block of a CRD so that OLM can consume it version string 3.1.37. .spec.customresourcedefinitions.required[].actionDescriptors Description Type array 3.1.38. .spec.customresourcedefinitions.required[].actionDescriptors[] Description ActionDescriptor describes a declarative action that can be performed on a custom resource instance Type object Required path Property Type Description description string displayName string path string value string RawMessage is a raw encoded JSON value. It implements Marshaler and Unmarshaler and can be used to delay JSON decoding or precompute a JSON encoding. x-descriptors array (string) 3.1.39. .spec.customresourcedefinitions.required[].resources Description Type array 3.1.40. .spec.customresourcedefinitions.required[].resources[] Description APIResourceReference is a reference to a Kubernetes resource type that the referrer utilizes. Type object Required kind name version Property Type Description kind string Kind of the referenced resource type. name string Plural name of the referenced resource type (CustomResourceDefinition.Spec.Names[].Plural). Empty string if the referenced resource type is not a custom resource. version string API Version of the referenced resource type. 3.1.41. .spec.customresourcedefinitions.required[].specDescriptors Description Type array 3.1.42. .spec.customresourcedefinitions.required[].specDescriptors[] Description SpecDescriptor describes a field in a spec block of a CRD so that OLM can consume it Type object Required path Property Type Description description string displayName string path string value string RawMessage is a raw encoded JSON value. 
It implements Marshaler and Unmarshaler and can be used to delay JSON decoding or precompute a JSON encoding. x-descriptors array (string) 3.1.43. .spec.customresourcedefinitions.required[].statusDescriptors Description Type array 3.1.44. .spec.customresourcedefinitions.required[].statusDescriptors[] Description StatusDescriptor describes a field in a status block of a CRD so that OLM can consume it Type object Required path Property Type Description description string displayName string path string value string RawMessage is a raw encoded JSON value. It implements Marshaler and Unmarshaler and can be used to delay JSON decoding or precompute a JSON encoding. x-descriptors array (string) 3.1.45. .spec.icon Description The icon for this operator. Type array 3.1.46. .spec.icon[] Description Type object Required base64data mediatype Property Type Description base64data string mediatype string 3.1.47. .spec.install Description NamedInstallStrategy represents the block of an ClusterServiceVersion resource where the install strategy is specified. Type object Required strategy Property Type Description spec object StrategyDetailsDeployment represents the parsed details of a Deployment InstallStrategy. strategy string 3.1.48. .spec.install.spec Description StrategyDetailsDeployment represents the parsed details of a Deployment InstallStrategy. Type object Required deployments Property Type Description clusterPermissions array clusterPermissions[] object StrategyDeploymentPermissions describe the rbac rules and service account needed by the install strategy deployments array deployments[] object StrategyDeploymentSpec contains the name, spec and labels for the deployment ALM should create permissions array permissions[] object StrategyDeploymentPermissions describe the rbac rules and service account needed by the install strategy 3.1.49. .spec.install.spec.clusterPermissions Description Type array 3.1.50. .spec.install.spec.clusterPermissions[] Description StrategyDeploymentPermissions describe the rbac rules and service account needed by the install strategy Type object Required rules serviceAccountName Property Type Description rules array rules[] object PolicyRule holds information that describes a policy rule, but does not contain information about who the rule applies to or which namespace the rule applies to. serviceAccountName string 3.1.51. .spec.install.spec.clusterPermissions[].rules Description Type array 3.1.52. .spec.install.spec.clusterPermissions[].rules[] Description PolicyRule holds information that describes a policy rule, but does not contain information about who the rule applies to or which namespace the rule applies to. Type object Required verbs Property Type Description apiGroups array (string) APIGroups is the name of the APIGroup that contains the resources. If multiple API groups are specified, any action requested against one of the enumerated resources in any API group will be allowed. "" represents the core API group and "*" represents all API groups. nonResourceURLs array (string) NonResourceURLs is a set of partial urls that a user should have access to. *s are allowed, but only as the full, final step in the path Since non-resource URLs are not namespaced, this field is only applicable for ClusterRoles referenced from a ClusterRoleBinding. Rules can either apply to API resources (such as "pods" or "secrets") or non-resource URL paths (such as "/api"), but not both. 
resourceNames array (string) ResourceNames is an optional white list of names that the rule applies to. An empty set means that everything is allowed. resources array (string) Resources is a list of resources this rule applies to. '*' represents all resources. verbs array (string) Verbs is a list of Verbs that apply to ALL the ResourceKinds contained in this rule. '*' represents all verbs. 3.1.53. .spec.install.spec.deployments Description Type array 3.1.54. .spec.install.spec.deployments[] Description StrategyDeploymentSpec contains the name, spec and labels for the deployment ALM should create Type object Required name spec Property Type Description label object (string) Set is a map of label:value. It implements Labels. name string spec object DeploymentSpec is the specification of the desired behavior of the Deployment. 3.1.55. .spec.install.spec.deployments[].spec Description DeploymentSpec is the specification of the desired behavior of the Deployment. Type object Required selector template Property Type Description minReadySeconds integer Minimum number of seconds for which a newly created pod should be ready without any of its container crashing, for it to be considered available. Defaults to 0 (pod will be considered available as soon as it is ready) paused boolean Indicates that the deployment is paused. progressDeadlineSeconds integer The maximum time in seconds for a deployment to make progress before it is considered to be failed. The deployment controller will continue to process failed deployments and a condition with a ProgressDeadlineExceeded reason will be surfaced in the deployment status. Note that progress will not be estimated during the time a deployment is paused. Defaults to 600s. replicas integer Number of desired pods. This is a pointer to distinguish between explicit zero and not specified. Defaults to 1. revisionHistoryLimit integer The number of old ReplicaSets to retain to allow rollback. This is a pointer to distinguish between explicit zero and not specified. Defaults to 10. selector object Label selector for pods. Existing ReplicaSets whose pods are selected by this will be the ones affected by this deployment. It must match the pod template's labels. strategy object The deployment strategy to use to replace existing pods with new ones. template object Template describes the pods that will be created. The only allowed template.spec.restartPolicy value is "Always". 3.1.56. .spec.install.spec.deployments[].spec.selector Description Label selector for pods. Existing ReplicaSets whose pods are selected by this will be the ones affected by this deployment. It must match the pod template's labels. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 3.1.57. .spec.install.spec.deployments[].spec.selector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 3.1.58. 
.spec.install.spec.deployments[].spec.selector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 3.1.59. .spec.install.spec.deployments[].spec.strategy Description The deployment strategy to use to replace existing pods with new ones. Type object Property Type Description rollingUpdate object Rolling update config params. Present only if DeploymentStrategyType = RollingUpdate. --- TODO: Update this to follow our convention for oneOf, whatever we decide it to be. type string Type of deployment. Can be "Recreate" or "RollingUpdate". Default is RollingUpdate. 3.1.60. .spec.install.spec.deployments[].spec.strategy.rollingUpdate Description Rolling update config params. Present only if DeploymentStrategyType = RollingUpdate. --- TODO: Update this to follow our convention for oneOf, whatever we decide it to be. Type object Property Type Description maxSurge integer-or-string The maximum number of pods that can be scheduled above the desired number of pods. Value can be an absolute number (ex: 5) or a percentage of desired pods (ex: 10%). This can not be 0 if MaxUnavailable is 0. Absolute number is calculated from percentage by rounding up. Defaults to 25%. Example: when this is set to 30%, the new ReplicaSet can be scaled up immediately when the rolling update starts, such that the total number of old and new pods do not exceed 130% of desired pods. Once old pods have been killed, new ReplicaSet can be scaled up further, ensuring that total number of pods running at any time during the update is at most 130% of desired pods. maxUnavailable integer-or-string The maximum number of pods that can be unavailable during the update. Value can be an absolute number (ex: 5) or a percentage of desired pods (ex: 10%). Absolute number is calculated from percentage by rounding down. This can not be 0 if MaxSurge is 0. Defaults to 25%. Example: when this is set to 30%, the old ReplicaSet can be scaled down to 70% of desired pods immediately when the rolling update starts. Once new pods are ready, old ReplicaSet can be scaled down further, followed by scaling up the new ReplicaSet, ensuring that the total number of pods available at all times during the update is at least 70% of desired pods. 3.1.61. .spec.install.spec.deployments[].spec.template Description Template describes the pods that will be created. The only allowed template.spec.restartPolicy value is "Always". Type object Property Type Description metadata `` Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object Specification of the desired behavior of the pod. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status 3.1.62. .spec.install.spec.deployments[].spec.template.spec Description Specification of the desired behavior of the pod. 
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status Type object Required containers Property Type Description activeDeadlineSeconds integer Optional duration in seconds the pod may be active on the node relative to StartTime before the system will actively try to mark it failed and kill associated containers. Value must be a positive integer. affinity object If specified, the pod's scheduling constraints automountServiceAccountToken boolean AutomountServiceAccountToken indicates whether a service account token should be automatically mounted. containers array List of containers belonging to the pod. Containers cannot currently be added or removed. There must be at least one container in a Pod. Cannot be updated. containers[] object A single application container that you want to run within a pod. dnsConfig object Specifies the DNS parameters of a pod. Parameters specified here will be merged to the generated DNS configuration based on DNSPolicy. dnsPolicy string Set DNS policy for the pod. Defaults to "ClusterFirst". Valid values are 'ClusterFirstWithHostNet', 'ClusterFirst', 'Default' or 'None'. DNS parameters given in DNSConfig will be merged with the policy selected with DNSPolicy. To have DNS options set along with hostNetwork, you have to specify DNS policy explicitly to 'ClusterFirstWithHostNet'. enableServiceLinks boolean EnableServiceLinks indicates whether information about services should be injected into pod's environment variables, matching the syntax of Docker links. Optional: Defaults to true. ephemeralContainers array List of ephemeral containers run in this pod. Ephemeral containers may be run in an existing pod to perform user-initiated actions such as debugging. This list cannot be specified when creating a pod, and it cannot be modified by updating the pod spec. In order to add an ephemeral container to an existing pod, use the pod's ephemeralcontainers subresource. ephemeralContainers[] object An EphemeralContainer is a temporary container that you may add to an existing Pod for user-initiated activities such as debugging. Ephemeral containers have no resource or scheduling guarantees, and they will not be restarted when they exit or when a Pod is removed or restarted. The kubelet may evict a Pod if an ephemeral container causes the Pod to exceed its resource allocation. To add an ephemeral container, use the ephemeralcontainers subresource of an existing Pod. Ephemeral containers may not be removed or restarted. hostAliases array HostAliases is an optional list of hosts and IPs that will be injected into the pod's hosts file if specified. This is only valid for non-hostNetwork pods. hostAliases[] object HostAlias holds the mapping between IP and hostnames that will be injected as an entry in the pod's hosts file. hostIPC boolean Use the host's ipc namespace. Optional: Default to false. hostNetwork boolean Host networking requested for this pod. Use the host's network namespace. If this option is set, the ports that will be used must be specified. Default to false. hostPID boolean Use the host's pid namespace. Optional: Default to false. hostUsers boolean Use the host's user namespace. Optional: Default to true. If set to true or not present, the pod will be run in the host user namespace, useful for when the pod needs a feature only available to the host user namespace, such as loading a kernel module with CAP_SYS_MODULE. When set to false, a new userns is created for the pod. 
Setting false is useful for mitigating container breakout vulnerabilities even allowing users to run their containers as root without actually having root privileges on the host. This field is alpha-level and is only honored by servers that enable the UserNamespacesSupport feature. hostname string Specifies the hostname of the Pod If not specified, the pod's hostname will be set to a system-defined value. imagePullSecrets array ImagePullSecrets is an optional list of references to secrets in the same namespace to use for pulling any of the images used by this PodSpec. If specified, these secrets will be passed to individual puller implementations for them to use. More info: https://kubernetes.io/docs/concepts/containers/images#specifying-imagepullsecrets-on-a-pod imagePullSecrets[] object LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace. initContainers array List of initialization containers belonging to the pod. Init containers are executed in order prior to containers being started. If any init container fails, the pod is considered to have failed and is handled according to its restartPolicy. The name for an init container or normal container must be unique among all containers. Init containers may not have Lifecycle actions, Readiness probes, Liveness probes, or Startup probes. The resourceRequirements of an init container are taken into account during scheduling by finding the highest request/limit for each resource type, and then using the max of of that value or the sum of the normal containers. Limits are applied to init containers in a similar fashion. Init containers cannot currently be added or removed. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/init-containers/ initContainers[] object A single application container that you want to run within a pod. nodeName string NodeName is a request to schedule this pod onto a specific node. If it is non-empty, the scheduler simply schedules this pod onto that node, assuming that it fits resource requirements. nodeSelector object (string) NodeSelector is a selector which must be true for the pod to fit on a node. Selector which must match a node's labels for the pod to be scheduled on that node. More info: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/ os object Specifies the OS of the containers in the pod. Some pod and container fields are restricted if this is set. 
If the OS field is set to linux, the following fields must be unset: - securityContext.windowsOptions If the OS field is set to windows, the following fields must be unset: - spec.hostPID - spec.hostIPC - spec.hostUsers - spec.securityContext.seLinuxOptions - spec.securityContext.seccompProfile - spec.securityContext.fsGroup - spec.securityContext.fsGroupChangePolicy - spec.securityContext.sysctls - spec.shareProcessNamespace - spec.securityContext.runAsUser - spec.securityContext.runAsGroup - spec.securityContext.supplementalGroups - spec.containers[*].securityContext.seLinuxOptions - spec.containers[*].securityContext.seccompProfile - spec.containers[*].securityContext.capabilities - spec.containers[*].securityContext.readOnlyRootFilesystem - spec.containers[*].securityContext.privileged - spec.containers[*].securityContext.allowPrivilegeEscalation - spec.containers[*].securityContext.procMount - spec.containers[*].securityContext.runAsUser - spec.containers[*].securityContext.runAsGroup overhead integer-or-string Overhead represents the resource overhead associated with running a pod for a given RuntimeClass. This field will be autopopulated at admission time by the RuntimeClass admission controller. If the RuntimeClass admission controller is enabled, overhead must not be set in Pod create requests. The RuntimeClass admission controller will reject Pod create requests which have the overhead already set. If RuntimeClass is configured and selected in the PodSpec, Overhead will be set to the value defined in the corresponding RuntimeClass, otherwise it will remain unset and treated as zero. More info: https://git.k8s.io/enhancements/keps/sig-node/688-pod-overhead/README.md preemptionPolicy string PreemptionPolicy is the Policy for preempting pods with lower priority. One of Never, PreemptLowerPriority. Defaults to PreemptLowerPriority if unset. priority integer The priority value. Various system components use this field to find the priority of the pod. When Priority Admission Controller is enabled, it prevents users from setting this field. The admission controller populates this field from PriorityClassName. The higher the value, the higher the priority. priorityClassName string If specified, indicates the pod's priority. "system-node-critical" and "system-cluster-critical" are two special keywords which indicate the highest priorities with the former being the highest priority. Any other name must be defined by creating a PriorityClass object with that name. If not specified, the pod priority will be default or zero if there is no default. readinessGates array If specified, all readiness gates will be evaluated for pod readiness. A pod is ready when all its containers are ready AND all conditions specified in the readiness gates have status equal to "True" More info: https://git.k8s.io/enhancements/keps/sig-network/580-pod-readiness-gates readinessGates[] object PodReadinessGate contains the reference to a pod condition resourceClaims array ResourceClaims defines which ResourceClaims must be allocated and reserved before the Pod is allowed to start. The resources will be made available to those containers which consume them by name. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. resourceClaims[] object PodResourceClaim references exactly one ResourceClaim through a ClaimSource. It adds a name to it that uniquely identifies the ResourceClaim inside the Pod.
Containers that need access to the ResourceClaim reference it with this name. restartPolicy string Restart policy for all containers within the pod. One of Always, OnFailure, Never. In some contexts, only a subset of those values may be permitted. Default to Always. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy runtimeClassName string RuntimeClassName refers to a RuntimeClass object in the node.k8s.io group, which should be used to run this pod. If no RuntimeClass resource matches the named class, the pod will not be run. If unset or empty, the "legacy" RuntimeClass will be used, which is an implicit class with an empty definition that uses the default runtime handler. More info: https://git.k8s.io/enhancements/keps/sig-node/585-runtime-class schedulerName string If specified, the pod will be dispatched by specified scheduler. If not specified, the pod will be dispatched by default scheduler. schedulingGates array SchedulingGates is an opaque list of values that if specified will block scheduling the pod. If schedulingGates is not empty, the pod will stay in the SchedulingGated state and the scheduler will not attempt to schedule the pod. SchedulingGates can only be set at pod creation time, and be removed only afterwards. This is a beta feature enabled by the PodSchedulingReadiness feature gate. schedulingGates[] object PodSchedulingGate is associated to a Pod to guard its scheduling. securityContext object SecurityContext holds pod-level security attributes and common container settings. Optional: Defaults to empty. See type description for default values of each field. serviceAccount string DeprecatedServiceAccount is a deprecated alias for ServiceAccountName. Deprecated: Use serviceAccountName instead. serviceAccountName string ServiceAccountName is the name of the ServiceAccount to use to run this pod. More info: https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/ setHostnameAsFQDN boolean If true the pod's hostname will be configured as the pod's FQDN, rather than the leaf name (the default). In Linux containers, this means setting the FQDN in the hostname field of the kernel (the nodename field of struct utsname). In Windows containers, this means setting the registry value of hostname for the registry key HKEY_LOCAL_MACHINE\\SYSTEM\\CurrentControlSet\\Services\\Tcpip\\Parameters to FQDN. If a pod does not have FQDN, this has no effect. Default to false. shareProcessNamespace boolean Share a single process namespace between all of the containers in a pod. When this is set containers will be able to view and signal processes from other containers in the same pod, and the first process in each container will not be assigned PID 1. HostPID and ShareProcessNamespace cannot both be set. Optional: Default to false. subdomain string If specified, the fully qualified Pod hostname will be "<hostname>.<subdomain>.<pod namespace>.svc.<cluster domain>". If not specified, the pod will not have a domainname at all. terminationGracePeriodSeconds integer Optional duration in seconds the pod needs to terminate gracefully. May be decreased in delete request. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). If this value is nil, the default grace period will be used instead.
The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. Defaults to 30 seconds. tolerations array If specified, the pod's tolerations. tolerations[] object The pod this Toleration is attached to tolerates any taint that matches the triple <key,value,effect> using the matching operator <operator>. topologySpreadConstraints array TopologySpreadConstraints describes how a group of pods ought to spread across topology domains. Scheduler will schedule pods in a way which abides by the constraints. All topologySpreadConstraints are ANDed. topologySpreadConstraints[] object TopologySpreadConstraint specifies how to spread matching pods among the given topology. volumes array List of volumes that can be mounted by containers belonging to the pod. More info: https://kubernetes.io/docs/concepts/storage/volumes volumes[] object Volume represents a named volume in a pod that may be accessed by any container in the pod. 3.1.63. .spec.install.spec.deployments[].spec.template.spec.affinity Description If specified, the pod's scheduling constraints Type object Property Type Description nodeAffinity object Describes node affinity scheduling rules for the pod. podAffinity object Describes pod affinity scheduling rules (e.g. co-locate this pod in the same node, zone, etc. as some other pod(s)). podAntiAffinity object Describes pod anti-affinity scheduling rules (e.g. avoid putting this pod in the same node, zone, etc. as some other pod(s)). 3.1.64. .spec.install.spec.deployments[].spec.template.spec.affinity.nodeAffinity Description Describes node affinity scheduling rules for the pod. Type object Property Type Description preferredDuringSchedulingIgnoredDuringExecution array The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node matches the corresponding matchExpressions; the node(s) with the highest sum are the most preferred. preferredDuringSchedulingIgnoredDuringExecution[] object An empty preferred scheduling term matches all objects with implicit weight 0 (i.e. it's a no-op). A null preferred scheduling term matches no objects (i.e. is also a no-op). requiredDuringSchedulingIgnoredDuringExecution object If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to an update), the system may or may not try to eventually evict the pod from its node. 3.1.65. .spec.install.spec.deployments[].spec.template.spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution Description The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. 
for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node matches the corresponding matchExpressions; the node(s) with the highest sum are the most preferred. Type array 3.1.66. .spec.install.spec.deployments[].spec.template.spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[] Description An empty preferred scheduling term matches all objects with implicit weight 0 (i.e. it's a no-op). A null preferred scheduling term matches no objects (i.e. is also a no-op). Type object Required preference weight Property Type Description preference object A node selector term, associated with the corresponding weight. weight integer Weight associated with matching the corresponding nodeSelectorTerm, in the range 1-100. 3.1.67. .spec.install.spec.deployments[].spec.template.spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[].preference Description A node selector term, associated with the corresponding weight. Type object Property Type Description matchExpressions array A list of node selector requirements by node's labels. matchExpressions[] object A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchFields array A list of node selector requirements by node's fields. matchFields[] object A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. 3.1.68. .spec.install.spec.deployments[].spec.template.spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[].preference.matchExpressions Description A list of node selector requirements by node's labels. Type array 3.1.69. .spec.install.spec.deployments[].spec.template.spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[].preference.matchExpressions[] Description A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string The label key that the selector applies to. operator string Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist, Gt, and Lt. values array (string) An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch.
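Because these preference fields are nested quite deeply, here is a hedged sketch of how a preferredDuringSchedulingIgnoredDuringExecution entry using the matchExpressions requirement above might look inside a CSV deployment's pod template. The label key is hypothetical.

```yaml
# Illustrative node affinity preference; the label key is a hypothetical example.
affinity:
  nodeAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 50                                  # integer in the range 1-100
        preference:
          matchExpressions:
            - key: node-role.kubernetes.io/worker   # hypothetical label key
              operator: Exists                      # Exists requires an empty values array
```

This fragment would sit under spec.install.spec.deployments[].spec.template.spec, alongside the other pod-level scheduling fields described earlier.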
3.1.70. .spec.install.spec.deployments[].spec.template.spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[].preference.matchFields Description A list of node selector requirements by node's fields. Type array 3.1.71. .spec.install.spec.deployments[].spec.template.spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[].preference.matchFields[] Description A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string The label key that the selector applies to. operator string Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist, Gt, and Lt. values array (string) An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch. 3.1.72. .spec.install.spec.deployments[].spec.template.spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution Description If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to an update), the system may or may not try to eventually evict the pod from its node. Type object Required nodeSelectorTerms Property Type Description nodeSelectorTerms array Required. A list of node selector terms. The terms are ORed. nodeSelectorTerms[] object A null or empty node selector term matches no objects. The requirements of them are ANDed. The TopologySelectorTerm type implements a subset of the NodeSelectorTerm. 3.1.73. .spec.install.spec.deployments[].spec.template.spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms Description Required. A list of node selector terms. The terms are ORed. Type array 3.1.74. .spec.install.spec.deployments[].spec.template.spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[] Description A null or empty node selector term matches no objects. The requirements of them are ANDed. The TopologySelectorTerm type implements a subset of the NodeSelectorTerm. Type object Property Type Description matchExpressions array A list of node selector requirements by node's labels. matchExpressions[] object A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchFields array A list of node selector requirements by node's fields. matchFields[] object A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. 3.1.75. .spec.install.spec.deployments[].spec.template.spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[].matchExpressions Description A list of node selector requirements by node's labels. Type array 3.1.76. .spec.install.spec.deployments[].spec.template.spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[].matchExpressions[] Description A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string The label key that the selector applies to. operator string Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist, Gt, and Lt. values array (string) An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch. 3.1.77.
.spec.install.spec.deployments[].spec.template.spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[].matchFields Description A list of node selector requirements by node's fields. Type array 3.1.78. .spec.install.spec.deployments[].spec.template.spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[].matchFields[] Description A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string The label key that the selector applies to. operator string Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt. values array (string) An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch. 3.1.79. .spec.install.spec.deployments[].spec.template.spec.affinity.podAffinity Description Describes pod affinity scheduling rules (e.g. co-locate this pod in the same node, zone, etc. as some other pod(s)). Type object Property Type Description preferredDuringSchedulingIgnoredDuringExecution array The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. preferredDuringSchedulingIgnoredDuringExecution[] object The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s) requiredDuringSchedulingIgnoredDuringExecution array If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. requiredDuringSchedulingIgnoredDuringExecution[] object Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running 3.1.80. .spec.install.spec.deployments[].spec.template.spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution Description The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. 
The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. Type array 3.1.81. .spec.install.spec.deployments[].spec.template.spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[] Description The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s) Type object Required podAffinityTerm weight Property Type Description podAffinityTerm object Required. A pod affinity term, associated with the corresponding weight. weight integer weight associated with matching the corresponding podAffinityTerm, in the range 1-100. 3.1.82. .spec.install.spec.deployments[].spec.template.spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm Description Required. A pod affinity term, associated with the corresponding weight. Type object Required topologyKey Property Type Description labelSelector object A label query over a set of resources, in this case pods. namespaceSelector object A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. namespaces array (string) namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace". topologyKey string This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed. 3.1.83. .spec.install.spec.deployments[].spec.template.spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.labelSelector Description A label query over a set of resources, in this case pods. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 3.1.84. .spec.install.spec.deployments[].spec.template.spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.labelSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 3.1.85. 
.spec.install.spec.deployments[].spec.template.spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.labelSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 3.1.86. .spec.install.spec.deployments[].spec.template.spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.namespaceSelector Description A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 3.1.87. .spec.install.spec.deployments[].spec.template.spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.namespaceSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 3.1.88. .spec.install.spec.deployments[].spec.template.spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.namespaceSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 3.1.89. .spec.install.spec.deployments[].spec.template.spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution Description If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. 
When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. Type array 3.1.90. .spec.install.spec.deployments[].spec.template.spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[] Description Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running Type object Required topologyKey Property Type Description labelSelector object A label query over a set of resources, in this case pods. namespaceSelector object A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. namespaces array (string) namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace". topologyKey string This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed. 3.1.91. .spec.install.spec.deployments[].spec.template.spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[].labelSelector Description A label query over a set of resources, in this case pods. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 3.1.92. .spec.install.spec.deployments[].spec.template.spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[].labelSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 3.1.93. .spec.install.spec.deployments[].spec.template.spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[].labelSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. 
If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 3.1.94. .spec.install.spec.deployments[].spec.template.spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[].namespaceSelector Description A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 3.1.95. .spec.install.spec.deployments[].spec.template.spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[].namespaceSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 3.1.96. .spec.install.spec.deployments[].spec.template.spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[].namespaceSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 3.1.97. .spec.install.spec.deployments[].spec.template.spec.affinity.podAntiAffinity Description Describes pod anti-affinity scheduling rules (e.g. avoid putting this pod in the same node, zone, etc. as some other pod(s)). Type object Property Type Description preferredDuringSchedulingIgnoredDuringExecution array The scheduler will prefer to schedule pods to nodes that satisfy the anti-affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling anti-affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. 
preferredDuringSchedulingIgnoredDuringExecution[] object The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s) requiredDuringSchedulingIgnoredDuringExecution array If the anti-affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the anti-affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. requiredDuringSchedulingIgnoredDuringExecution[] object Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running 3.1.98. .spec.install.spec.deployments[].spec.template.spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution Description The scheduler will prefer to schedule pods to nodes that satisfy the anti-affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling anti-affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. Type array 3.1.99. .spec.install.spec.deployments[].spec.template.spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[] Description The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s) Type object Required podAffinityTerm weight Property Type Description podAffinityTerm object Required. A pod affinity term, associated with the corresponding weight. weight integer weight associated with matching the corresponding podAffinityTerm, in the range 1-100. 3.1.100. .spec.install.spec.deployments[].spec.template.spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm Description Required. A pod affinity term, associated with the corresponding weight. Type object Required topologyKey Property Type Description labelSelector object A label query over a set of resources, in this case pods. namespaceSelector object A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. namespaces array (string) namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace". 
topologyKey string This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed. 3.1.101. .spec.install.spec.deployments[].spec.template.spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.labelSelector Description A label query over a set of resources, in this case pods. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 3.1.102. .spec.install.spec.deployments[].spec.template.spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.labelSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 3.1.103. .spec.install.spec.deployments[].spec.template.spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.labelSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 3.1.104. .spec.install.spec.deployments[].spec.template.spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.namespaceSelector Description A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
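The preferred anti-affinity fields described in the preceding sections are set on the deployment's pod template inside the ClusterServiceVersion. As a minimal, illustrative sketch only (the deployment name and the app label are assumptions, not values defined by this reference), a weighted anti-affinity term that prefers spreading the operator's pods across nodes might look like this:

spec:
  install:
    spec:
      deployments:
        - name: example-operator            # assumed deployment name
          spec:
            template:
              spec:
                affinity:
                  podAntiAffinity:
                    preferredDuringSchedulingIgnoredDuringExecution:
                      - weight: 100                       # range 1-100
                        podAffinityTerm:
                          labelSelector:
                            matchLabels:
                              app: example-operator       # assumed pod label
                          topologyKey: kubernetes.io/hostname

Because the term is preferred rather than required, the scheduler treats it as a scoring hint and may still co-locate pods if no node satisfies it.

3.1.105.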
.spec.install.spec.deployments[].spec.template.spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.namespaceSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 3.1.106. .spec.install.spec.deployments[].spec.template.spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.namespaceSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 3.1.107. .spec.install.spec.deployments[].spec.template.spec.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution Description If the anti-affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the anti-affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. Type array 3.1.108. .spec.install.spec.deployments[].spec.template.spec.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[] Description Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running Type object Required topologyKey Property Type Description labelSelector object A label query over a set of resources, in this case pods. namespaceSelector object A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. namespaces array (string) namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace". topologyKey string This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed.
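Where a hard scheduling guarantee is needed, the required form described above can be used instead. A hedged sketch, assuming the same illustrative app label as before (not taken from this reference), placed at spec.install.spec.deployments[].spec.template.spec.affinity:

podAntiAffinity:
  requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchExpressions:
          - key: app                         # assumed label key
            operator: In
            values:
              - example-operator             # assumed label value
      topologyKey: topology.kubernetes.io/zone

With the required form, a pod that cannot satisfy every term stays unscheduled, so topologyKey should name a label that actually exists on the cluster's nodes.

3.1.109.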
.spec.install.spec.deployments[].spec.template.spec.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[].labelSelector Description A label query over a set of resources, in this case pods. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 3.1.110. .spec.install.spec.deployments[].spec.template.spec.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[].labelSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 3.1.111. .spec.install.spec.deployments[].spec.template.spec.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[].labelSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 3.1.112. .spec.install.spec.deployments[].spec.template.spec.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[].namespaceSelector Description A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 3.1.113. .spec.install.spec.deployments[].spec.template.spec.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[].namespaceSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 3.1.114. .spec.install.spec.deployments[].spec.template.spec.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[].namespaceSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. 
Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 3.1.115. .spec.install.spec.deployments[].spec.template.spec.containers Description List of containers belonging to the pod. Containers cannot currently be added or removed. There must be at least one container in a Pod. Cannot be updated. Type array 3.1.116. .spec.install.spec.deployments[].spec.template.spec.containers[] Description A single application container that you want to run within a pod. Type object Required name Property Type Description args array (string) Arguments to the entrypoint. The container image's CMD is used if this is not provided. Variable references USD(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double USDUSD are reduced to a single USD, which allows for escaping the USD(VAR_NAME) syntax: i.e. "USDUSD(VAR_NAME)" will produce the string literal "USD(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell command array (string) Entrypoint array. Not executed within a shell. The container image's ENTRYPOINT is used if this is not provided. Variable references USD(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double USDUSD are reduced to a single USD, which allows for escaping the USD(VAR_NAME) syntax: i.e. "USDUSD(VAR_NAME)" will produce the string literal "USD(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell env array List of environment variables to set in the container. Cannot be updated. env[] object EnvVar represents an environment variable present in a Container. envFrom array List of sources to populate environment variables in the container. The keys defined within a source must be a C_IDENTIFIER. All invalid keys will be reported as an event when the container is starting. When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by an Env with a duplicate key will take precedence. Cannot be updated. envFrom[] object EnvFromSource represents the source of a set of ConfigMaps image string Container image name. More info: https://kubernetes.io/docs/concepts/containers/images This field is optional to allow higher level config management to default or override container images in workload controllers like Deployments and StatefulSets. imagePullPolicy string Image pull policy. One of Always, Never, IfNotPresent. Defaults to Always if :latest tag is specified, or IfNotPresent otherwise. Cannot be updated.
More info: https://kubernetes.io/docs/concepts/containers/images#updating-images lifecycle object Actions that the management system should take in response to container lifecycle events. Cannot be updated. livenessProbe object Periodic probe of container liveness. Container will be restarted if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes name string Name of the container specified as a DNS_LABEL. Each container in a pod must have a unique name (DNS_LABEL). Cannot be updated. ports array List of ports to expose from the container. Not specifying a port here DOES NOT prevent that port from being exposed. Any port which is listening on the default "0.0.0.0" address inside a container will be accessible from the network. Modifying this array with strategic merge patch may corrupt the data. For more information See https://github.com/kubernetes/kubernetes/issues/108255 . Cannot be updated. ports[] object ContainerPort represents a network port in a single container. readinessProbe object Periodic probe of container service readiness. Container will be removed from service endpoints if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes resizePolicy array Resources resize policy for the container. resizePolicy[] object ContainerResizePolicy represents resource resize policy for the container. resources object Compute Resources required by this container. Cannot be updated. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ securityContext object SecurityContext defines the security options the container should be run with. If set, the fields of SecurityContext override the equivalent fields of PodSecurityContext. More info: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/ startupProbe object StartupProbe indicates that the Pod has successfully initialized. If specified, no other probes are executed until this completes successfully. If this probe fails, the Pod will be restarted, just as if the livenessProbe failed. This can be used to provide different probe parameters at the beginning of a Pod's lifecycle, when it might take a long time to load data or warm a cache, than during steady-state operation. This cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes stdin boolean Whether this container should allocate a buffer for stdin in the container runtime. If this is not set, reads from stdin in the container will always result in EOF. Default is false. stdinOnce boolean Whether the container runtime should close the stdin channel after it has been opened by a single attach. When stdin is true the stdin stream will remain open across multiple attach sessions. If stdinOnce is set to true, stdin is opened on container start, is empty until the first client attaches to stdin, and then remains open and accepts data until the client disconnects, at which time stdin is closed and remains closed until the container is restarted. If this flag is false, a container processes that reads from stdin will never receive an EOF. Default is false terminationMessagePath string Optional: Path at which the file to which the container's termination message will be written is mounted into the container's filesystem. Message written is intended to be brief final status, such as an assertion failure message. 
Will be truncated by the node if greater than 4096 bytes. The total message length across all containers will be limited to 12kb. Defaults to /dev/termination-log. Cannot be updated. terminationMessagePolicy string Indicate how the termination message should be populated. File will use the contents of terminationMessagePath to populate the container status message on both success and failure. FallbackToLogsOnError will use the last chunk of container log output if the termination message file is empty and the container exited with an error. The log output is limited to 2048 bytes or 80 lines, whichever is smaller. Defaults to File. Cannot be updated. tty boolean Whether this container should allocate a TTY for itself, also requires 'stdin' to be true. Default is false. volumeDevices array volumeDevices is the list of block devices to be used by the container. volumeDevices[] object volumeDevice describes a mapping of a raw block device within a container. volumeMounts array Pod volumes to mount into the container's filesystem. Cannot be updated. volumeMounts[] object VolumeMount describes a mounting of a Volume within a container. workingDir string Container's working directory. If not specified, the container runtime's default will be used, which might be configured in the container image. Cannot be updated. 3.1.117. .spec.install.spec.deployments[].spec.template.spec.containers[].env Description List of environment variables to set in the container. Cannot be updated. Type array 3.1.118. .spec.install.spec.deployments[].spec.template.spec.containers[].env[] Description EnvVar represents an environment variable present in a Container. Type object Required name Property Type Description name string Name of the environment variable. Must be a C_IDENTIFIER. value string Variable references USD(VAR_NAME) are expanded using the previously defined environment variables in the container and any service environment variables. If a variable cannot be resolved, the reference in the input string will be unchanged. Double USDUSD are reduced to a single USD, which allows for escaping the USD(VAR_NAME) syntax: i.e. "USDUSD(VAR_NAME)" will produce the string literal "USD(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Defaults to "". valueFrom object Source for the environment variable's value. Cannot be used if value is not empty. 3.1.119. .spec.install.spec.deployments[].spec.template.spec.containers[].env[].valueFrom Description Source for the environment variable's value. Cannot be used if value is not empty. Type object Property Type Description configMapKeyRef object Selects a key of a ConfigMap. fieldRef object Selects a field of the pod: supports metadata.name, metadata.namespace, metadata.labels['<KEY>'] , metadata.annotations['<KEY>'] , spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podIPs. resourceFieldRef object Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, limits.ephemeral-storage, requests.cpu, requests.memory and requests.ephemeral-storage) are currently supported. secretKeyRef object Selects a key of a secret in the pod's namespace
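Taken together, the env, value, and valueFrom fields above map onto container environment entries such as the following sketch; the variable names, container name, ConfigMap name, and key are illustrative assumptions rather than values defined by this API:

containers:
  - name: example-operator                  # assumed container name
    env:
      - name: OPERATOR_NAME                 # plain literal value
        value: example-operator
      - name: POD_NAMESPACE                 # downward API via fieldRef
        valueFrom:
          fieldRef:
            fieldPath: metadata.namespace
      - name: LOG_LEVEL                     # read from an assumed ConfigMap
        valueFrom:
          configMapKeyRef:
            name: example-operator-config
            key: log-level
            optional: true

Each entry sets either value or valueFrom, never both, as noted in the valueFrom description above.

3.1.120. .spec.install.spec.deployments[].spec.template.spec.containers[].env[].valueFrom.configMapKeyRef Description Selects a key of a ConfigMap. Type object Required key Property Type Description key string The key to select. name string Name of the referent.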
More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap or its key must be defined 3.1.121. .spec.install.spec.deployments[].spec.template.spec.containers[].env[].valueFrom.fieldRef Description Selects a field of the pod: supports metadata.name, metadata.namespace, metadata.labels['<KEY>'] , metadata.annotations['<KEY>'] , spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podIPs. Type object Required fieldPath Property Type Description apiVersion string Version of the schema the FieldPath is written in terms of, defaults to "v1". fieldPath string Path of the field to select in the specified API version. 3.1.122. .spec.install.spec.deployments[].spec.template.spec.containers[].env[].valueFrom.resourceFieldRef Description Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, limits.ephemeral-storage, requests.cpu, requests.memory and requests.ephemeral-storage) are currently supported. Type object Required resource Property Type Description containerName string Container name: required for volumes, optional for env vars divisor integer-or-string Specifies the output format of the exposed resources, defaults to "1" resource string Required: resource to select 3.1.123. .spec.install.spec.deployments[].spec.template.spec.containers[].env[].valueFrom.secretKeyRef Description Selects a key of a secret in the pod's namespace Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 3.1.124. .spec.install.spec.deployments[].spec.template.spec.containers[].envFrom Description List of sources to populate environment variables in the container. The keys defined within a source must be a C_IDENTIFIER. All invalid keys will be reported as an event when the container is starting. When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by an Env with a duplicate key will take precedence. Cannot be updated. Type array 3.1.125. .spec.install.spec.deployments[].spec.template.spec.containers[].envFrom[] Description EnvFromSource represents the source of a set of ConfigMaps Type object Property Type Description configMapRef object The ConfigMap to select from prefix string An optional identifier to prepend to each key in the ConfigMap. Must be a C_IDENTIFIER. secretRef object The Secret to select from 3.1.126. .spec.install.spec.deployments[].spec.template.spec.containers[].envFrom[].configMapRef Description The ConfigMap to select from Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap must be defined 3.1.127. .spec.install.spec.deployments[].spec.template.spec.containers[].envFrom[].secretRef Description The Secret to select from Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. 
apiVersion, kind, uid? optional boolean Specify whether the Secret must be defined 3.1.128. .spec.install.spec.deployments[].spec.template.spec.containers[].lifecycle Description Actions that the management system should take in response to container lifecycle events. Cannot be updated. Type object Property Type Description postStart object PostStart is called immediately after a container is created. If the handler fails, the container is terminated and restarted according to its restart policy. Other management of the container blocks until the hook completes. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks preStop object PreStop is called immediately before a container is terminated due to an API request or management event such as liveness/startup probe failure, preemption, resource contention, etc. The handler is not called if the container crashes or exits. The Pod's termination grace period countdown begins before the PreStop hook is executed. Regardless of the outcome of the handler, the container will eventually terminate within the Pod's termination grace period (unless delayed by finalizers). Other management of the container blocks until the hook completes or until the termination grace period is reached. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks 3.1.129. .spec.install.spec.deployments[].spec.template.spec.containers[].lifecycle.postStart Description PostStart is called immediately after a container is created. If the handler fails, the container is terminated and restarted according to its restart policy. Other management of the container blocks until the hook completes. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks Type object Property Type Description exec object Exec specifies the action to take. httpGet object HTTPGet specifies the http request to perform. tcpSocket object Deprecated. TCPSocket is NOT supported as a LifecycleHandler and kept for the backward compatibility. There are no validation of this field and lifecycle hooks will fail in runtime when tcp handler is specified. 3.1.130. .spec.install.spec.deployments[].spec.template.spec.containers[].lifecycle.postStart.exec Description Exec specifies the action to take. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 3.1.131. .spec.install.spec.deployments[].spec.template.spec.containers[].lifecycle.postStart.httpGet Description HTTPGet specifies the http request to perform. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port integer-or-string Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. 
Defaults to HTTP. 3.1.132. .spec.install.spec.deployments[].spec.template.spec.containers[].lifecycle.postStart.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 3.1.133. .spec.install.spec.deployments[].spec.template.spec.containers[].lifecycle.postStart.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 3.1.134. .spec.install.spec.deployments[].spec.template.spec.containers[].lifecycle.postStart.tcpSocket Description Deprecated. TCPSocket is NOT supported as a LifecycleHandler and kept for the backward compatibility. There are no validation of this field and lifecycle hooks will fail in runtime when tcp handler is specified. Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port integer-or-string Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 3.1.135. .spec.install.spec.deployments[].spec.template.spec.containers[].lifecycle.preStop Description PreStop is called immediately before a container is terminated due to an API request or management event such as liveness/startup probe failure, preemption, resource contention, etc. The handler is not called if the container crashes or exits. The Pod's termination grace period countdown begins before the PreStop hook is executed. Regardless of the outcome of the handler, the container will eventually terminate within the Pod's termination grace period (unless delayed by finalizers). Other management of the container blocks until the hook completes or until the termination grace period is reached. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks Type object Property Type Description exec object Exec specifies the action to take. httpGet object HTTPGet specifies the http request to perform. tcpSocket object Deprecated. TCPSocket is NOT supported as a LifecycleHandler and kept for the backward compatibility. There are no validation of this field and lifecycle hooks will fail in runtime when tcp handler is specified. 3.1.136. .spec.install.spec.deployments[].spec.template.spec.containers[].lifecycle.preStop.exec Description Exec specifies the action to take. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 3.1.137. .spec.install.spec.deployments[].spec.template.spec.containers[].lifecycle.preStop.httpGet Description HTTPGet specifies the http request to perform. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. 
httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port integer-or-string Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. 3.1.138. .spec.install.spec.deployments[].spec.template.spec.containers[].lifecycle.preStop.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 3.1.139. .spec.install.spec.deployments[].spec.template.spec.containers[].lifecycle.preStop.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 3.1.140. .spec.install.spec.deployments[].spec.template.spec.containers[].lifecycle.preStop.tcpSocket Description Deprecated. TCPSocket is NOT supported as a LifecycleHandler and kept for the backward compatibility. There are no validation of this field and lifecycle hooks will fail in runtime when tcp handler is specified. Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port integer-or-string Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 3.1.141. .spec.install.spec.deployments[].spec.template.spec.containers[].livenessProbe Description Periodic probe of container liveness. Container will be restarted if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes Type object Property Type Description exec object Exec specifies the action to take. failureThreshold integer Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. grpc object GRPC specifies an action involving a GRPC port. httpGet object HTTPGet specifies the http request to perform. initialDelaySeconds integer Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes periodSeconds integer How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1. successThreshold integer Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. tcpSocket object TCPSocket specifies an action involving a TCP port. terminationGracePeriodSeconds integer Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. 
Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. timeoutSeconds integer Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes 3.1.142. .spec.install.spec.deployments[].spec.template.spec.containers[].livenessProbe.exec Description Exec specifies the action to take. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 3.1.143. .spec.install.spec.deployments[].spec.template.spec.containers[].livenessProbe.grpc Description GRPC specifies an action involving a GRPC port. Type object Required port Property Type Description port integer Port number of the gRPC service. Number must be in the range 1 to 65535. service string Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md ). If this is not specified, the default behavior is defined by gRPC. 3.1.144. .spec.install.spec.deployments[].spec.template.spec.containers[].livenessProbe.httpGet Description HTTPGet specifies the http request to perform. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port integer-or-string Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. 3.1.145. .spec.install.spec.deployments[].spec.template.spec.containers[].livenessProbe.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 3.1.146. .spec.install.spec.deployments[].spec.template.spec.containers[].livenessProbe.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 3.1.147. .spec.install.spec.deployments[].spec.template.spec.containers[].livenessProbe.tcpSocket Description TCPSocket specifies an action involving a TCP port. Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port integer-or-string Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 3.1.148. .spec.install.spec.deployments[].spec.template.spec.containers[].ports Description List of ports to expose from the container. Not specifying a port here DOES NOT prevent that port from being exposed. Any port which is listening on the default "0.0.0.0" address inside a container will be accessible from the network. 
Modifying this array with strategic merge patch may corrupt the data. For more information See https://github.com/kubernetes/kubernetes/issues/108255 . Cannot be updated. Type array 3.1.149. .spec.install.spec.deployments[].spec.template.spec.containers[].ports[] Description ContainerPort represents a network port in a single container. Type object Required containerPort Property Type Description containerPort integer Number of port to expose on the pod's IP address. This must be a valid port number, 0 < x < 65536. hostIP string What host IP to bind the external port to. hostPort integer Number of port to expose on the host. If specified, this must be a valid port number, 0 < x < 65536. If HostNetwork is specified, this must match ContainerPort. Most containers do not need this. name string If specified, this must be an IANA_SVC_NAME and unique within the pod. Each named port in a pod must have a unique name. Name for the port that can be referred to by services. protocol string Protocol for port. Must be UDP, TCP, or SCTP. Defaults to "TCP". 3.1.150. .spec.install.spec.deployments[].spec.template.spec.containers[].readinessProbe Description Periodic probe of container service readiness. Container will be removed from service endpoints if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes Type object Property Type Description exec object Exec specifies the action to take. failureThreshold integer Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. grpc object GRPC specifies an action involving a GRPC port. httpGet object HTTPGet specifies the http request to perform. initialDelaySeconds integer Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes periodSeconds integer How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1. successThreshold integer Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. tcpSocket object TCPSocket specifies an action involving a TCP port. terminationGracePeriodSeconds integer Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. timeoutSeconds integer Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes 3.1.151. .spec.install.spec.deployments[].spec.template.spec.containers[].readinessProbe.exec Description Exec specifies the action to take. 
Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 3.1.152. .spec.install.spec.deployments[].spec.template.spec.containers[].readinessProbe.grpc Description GRPC specifies an action involving a GRPC port. Type object Required port Property Type Description port integer Port number of the gRPC service. Number must be in the range 1 to 65535. service string Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md ). If this is not specified, the default behavior is defined by gRPC. 3.1.153. .spec.install.spec.deployments[].spec.template.spec.containers[].readinessProbe.httpGet Description HTTPGet specifies the http request to perform. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port integer-or-string Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. 3.1.154. .spec.install.spec.deployments[].spec.template.spec.containers[].readinessProbe.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 3.1.155. .spec.install.spec.deployments[].spec.template.spec.containers[].readinessProbe.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 3.1.156. .spec.install.spec.deployments[].spec.template.spec.containers[].readinessProbe.tcpSocket Description TCPSocket specifies an action involving a TCP port. Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port integer-or-string Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 3.1.157. .spec.install.spec.deployments[].spec.template.spec.containers[].resizePolicy Description Resources resize policy for the container. Type array 3.1.158. .spec.install.spec.deployments[].spec.template.spec.containers[].resizePolicy[] Description ContainerResizePolicy represents resource resize policy for the container. Type object Required resourceName restartPolicy Property Type Description resourceName string Name of the resource to which this resource resize policy applies. Supported values: cpu, memory. restartPolicy string Restart policy to apply when specified resource is resized. If not specified, it defaults to NotRequired.
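As an illustration of the readiness probe fields documented in the preceding sections, the following sketch defines an HTTPS check on the container; the path and port are assumptions and must match whatever endpoint the operator container actually serves:

readinessProbe:
  httpGet:
    path: /healthz                          # assumed health endpoint
    port: 8443                              # assumed container port
    scheme: HTTPS
  initialDelaySeconds: 5
  periodSeconds: 10
  failureThreshold: 3

A tcpSocket or exec action could be substituted for httpGet when the container exposes no HTTP endpoint.

3.1.159.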
.spec.install.spec.deployments[].spec.template.spec.containers[].resources Description Compute Resources required by this container. Cannot be updated. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ Type object Property Type Description claims array Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. claims[] object ResourceClaim references one entry in PodSpec.ResourceClaims. limits integer-or-string Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ requests integer-or-string Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. Requests cannot exceed Limits. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ 3.1.160. .spec.install.spec.deployments[].spec.template.spec.containers[].resources.claims Description Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. Type array 3.1.161. .spec.install.spec.deployments[].spec.template.spec.containers[].resources.claims[] Description ResourceClaim references one entry in PodSpec.ResourceClaims. Type object Required name Property Type Description name string Name must match the name of one entry in pod.spec.resourceClaims of the Pod where this field is used. It makes that resource available inside a container. 3.1.162. .spec.install.spec.deployments[].spec.template.spec.containers[].securityContext Description SecurityContext defines the security options the container should be run with. If set, the fields of SecurityContext override the equivalent fields of PodSecurityContext. More info: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/ Type object Property Type Description allowPrivilegeEscalation boolean AllowPrivilegeEscalation controls whether a process can gain more privileges than its parent process. This bool directly controls if the no_new_privs flag will be set on the container process. AllowPrivilegeEscalation is true always when the container is: 1) run as Privileged 2) has CAP_SYS_ADMIN Note that this field cannot be set when spec.os.name is windows. capabilities object The capabilities to add/drop when running containers. Defaults to the default set of capabilities granted by the container runtime. Note that this field cannot be set when spec.os.name is windows. privileged boolean Run container in privileged mode. Processes in privileged containers are essentially equivalent to root on the host. Defaults to false. Note that this field cannot be set when spec.os.name is windows. procMount string procMount denotes the type of proc mount to use for the containers. The default is DefaultProcMount which uses the container runtime defaults for readonly paths and masked paths. This requires the ProcMountType feature flag to be enabled. Note that this field cannot be set when spec.os.name is windows. readOnlyRootFilesystem boolean Whether this container has a read-only root filesystem. 
Default is false. Note that this field cannot be set when spec.os.name is windows. runAsGroup integer The GID to run the entrypoint of the container process. Uses runtime default if unset. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. runAsNonRoot boolean Indicates that the container must run as a non-root user. If true, the Kubelet will validate the image at runtime to ensure that it does not run as UID 0 (root) and fail to start the container if it does. If unset or false, no such validation will be performed. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. runAsUser integer The UID to run the entrypoint of the container process. Defaults to user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. seLinuxOptions object The SELinux context to be applied to the container. If unspecified, the container runtime will allocate a random SELinux context for each container. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. seccompProfile object The seccomp options to use by this container. If seccomp options are provided at both the pod & container level, the container options override the pod options. Note that this field cannot be set when spec.os.name is windows. windowsOptions object The Windows specific settings applied to all containers. If unspecified, the options from the PodSecurityContext will be used. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is linux. 3.1.163. .spec.install.spec.deployments[].spec.template.spec.containers[].securityContext.capabilities Description The capabilities to add/drop when running containers. Defaults to the default set of capabilities granted by the container runtime. Note that this field cannot be set when spec.os.name is windows. Type object Property Type Description add array (string) Added capabilities drop array (string) Removed capabilities 3.1.164. .spec.install.spec.deployments[].spec.template.spec.containers[].securityContext.seLinuxOptions Description The SELinux context to be applied to the container. If unspecified, the container runtime will allocate a random SELinux context for each container. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. Type object Property Type Description level string Level is SELinux level label that applies to the container. role string Role is a SELinux role label that applies to the container. type string Type is a SELinux type label that applies to the container. user string User is a SELinux user label that applies to the container. 3.1.165. .spec.install.spec.deployments[].spec.template.spec.containers[].securityContext.seccompProfile Description The seccomp options to use by this container. 
3.1.165. .spec.install.spec.deployments[].spec.template.spec.containers[].securityContext.seccompProfile Description The seccomp options to use by this container. If seccomp options are provided at both the pod & container level, the container options override the pod options. Note that this field cannot be set when spec.os.name is windows. Type object Required type Property Type Description localhostProfile string localhostProfile indicates a profile defined in a file on the node should be used. The profile must be preconfigured on the node to work. Must be a descending path, relative to the kubelet's configured seccomp profile location. Must only be set if type is "Localhost". type string type indicates which kind of seccomp profile will be applied. Valid options are: Localhost - a profile defined in a file on the node should be used. RuntimeDefault - the container runtime default profile should be used. Unconfined - no profile should be applied. 3.1.166. .spec.install.spec.deployments[].spec.template.spec.containers[].securityContext.windowsOptions Description The Windows specific settings applied to all containers. If unspecified, the options from the PodSecurityContext will be used. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is linux. Type object Property Type Description gmsaCredentialSpec string GMSACredentialSpec is where the GMSA admission webhook ( https://github.com/kubernetes-sigs/windows-gmsa ) inlines the contents of the GMSA credential spec named by the GMSACredentialSpecName field. gmsaCredentialSpecName string GMSACredentialSpecName is the name of the GMSA credential spec to use. hostProcess boolean HostProcess determines if a container should be run as a 'Host Process' container. This field is alpha-level and will only be honored by components that enable the WindowsHostProcessContainers feature flag. Setting this field without the feature flag will result in errors when validating the Pod. All of a Pod's containers must have the same effective HostProcess value (it is not allowed to have a mix of HostProcess containers and non-HostProcess containers). In addition, if HostProcess is true then HostNetwork must also be set to true. runAsUserName string The UserName in Windows to run the entrypoint of the container process. Defaults to the user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. 3.1.167. .spec.install.spec.deployments[].spec.template.spec.containers[].startupProbe Description StartupProbe indicates that the Pod has successfully initialized. If specified, no other probes are executed until this completes successfully. If this probe fails, the Pod will be restarted, just as if the livenessProbe failed. This can be used to provide different probe parameters at the beginning of a Pod's lifecycle, when it might take a long time to load data or warm a cache, than during steady-state operation. This cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes Type object Property Type Description exec object Exec specifies the action to take. failureThreshold integer Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. grpc object GRPC specifies an action involving a GRPC port. httpGet object HTTPGet specifies the http request to perform.
initialDelaySeconds integer Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes periodSeconds integer How often (in seconds) to perform the probe. Defaults to 10 seconds. Minimum value is 1. successThreshold integer Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. tcpSocket object TCPSocket specifies an action involving a TCP port. terminationGracePeriodSeconds integer Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be a non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. timeoutSeconds integer Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes 3.1.168. .spec.install.spec.deployments[].spec.template.spec.containers[].startupProbe.exec Description Exec specifies the action to take. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 3.1.169. .spec.install.spec.deployments[].spec.template.spec.containers[].startupProbe.grpc Description GRPC specifies an action involving a GRPC port. Type object Required port Property Type Description port integer Port number of the gRPC service. Number must be in the range 1 to 65535. service string Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md ). If this is not specified, the default behavior is defined by gRPC. 3.1.170. .spec.install.spec.deployments[].spec.template.spec.containers[].startupProbe.httpGet Description HTTPGet specifies the http request to perform. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port integer-or-string Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP.
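As a concrete illustration of the probe fields above, the following sketch configures a startup probe that uses the httpGet handler. The port name, path, and timing values are placeholders, not defaults taken from this reference. The fragment sits under .spec.install.spec.deployments[].spec.template.spec:

  containers:
  - name: manager                          # placeholder container name
    image: registry.example.com/example-operator:v1.0.0   # placeholder image
    ports:
    - name: healthz                        # named port referenced by the probe
      containerPort: 8081
    startupProbe:
      httpGet:
        path: /healthz                     # placeholder health endpoint
        port: healthz                      # a port name or number (1 to 65535)
        scheme: HTTP
      periodSeconds: 10
      failureThreshold: 30                 # allows up to 30 x 10s = 300s for startup
      timeoutSeconds: 1

Because no other probes run until the startup probe succeeds, a generous failureThreshold here lets a slow-starting operator come up without loosening the liveness settings used during steady-state operation.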
3.1.171. .spec.install.spec.deployments[].spec.template.spec.containers[].startupProbe.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 3.1.172. .spec.install.spec.deployments[].spec.template.spec.containers[].startupProbe.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 3.1.173. .spec.install.spec.deployments[].spec.template.spec.containers[].startupProbe.tcpSocket Description TCPSocket specifies an action involving a TCP port. Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port integer-or-string Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 3.1.174. .spec.install.spec.deployments[].spec.template.spec.containers[].volumeDevices Description volumeDevices is the list of block devices to be used by the container. Type array 3.1.175. .spec.install.spec.deployments[].spec.template.spec.containers[].volumeDevices[] Description volumeDevice describes a mapping of a raw block device within a container. Type object Required devicePath name Property Type Description devicePath string devicePath is the path inside of the container that the device will be mapped to. name string name must match the name of a persistentVolumeClaim in the pod 3.1.176. .spec.install.spec.deployments[].spec.template.spec.containers[].volumeMounts Description Pod volumes to mount into the container's filesystem. Cannot be updated. Type array 3.1.177. .spec.install.spec.deployments[].spec.template.spec.containers[].volumeMounts[] Description VolumeMount describes a mounting of a Volume within a container. Type object Required mountPath name Property Type Description mountPath string Path within the container at which the volume should be mounted. Must not contain ':'. mountPropagation string mountPropagation determines how mounts are propagated from the host to container and the other way around. When not set, MountPropagationNone is used. This field is beta in 1.10. name string This must match the Name of a Volume. readOnly boolean Mounted read-only if true, read-write otherwise (false or unspecified). Defaults to false. subPath string Path within the volume from which the container's volume should be mounted. Defaults to "" (volume's root). subPathExpr string Expanded path within the volume from which the container's volume should be mounted. Behaves similarly to SubPath but environment variable references USD(VAR_NAME) are expanded using the container's environment. Defaults to "" (volume's root). SubPathExpr and SubPath are mutually exclusive. 3.1.178. .spec.install.spec.deployments[].spec.template.spec.dnsConfig Description Specifies the DNS parameters of a pod. Parameters specified here will be merged to the generated DNS configuration based on DNSPolicy. Type object Property Type Description nameservers array (string) A list of DNS name server IP addresses. This will be appended to the base nameservers generated from DNSPolicy. Duplicated nameservers will be removed. options array A list of DNS resolver options. This will be merged with the base options generated from DNSPolicy. Duplicated entries will be removed.
Resolution options given in Options will override those that appear in the base DNSPolicy. options[] object PodDNSConfigOption defines DNS resolver options of a pod. searches array (string) A list of DNS search domains for host-name lookup. This will be appended to the base search paths generated from DNSPolicy. Duplicated search paths will be removed. 3.1.179. .spec.install.spec.deployments[].spec.template.spec.dnsConfig.options Description A list of DNS resolver options. This will be merged with the base options generated from DNSPolicy. Duplicated entries will be removed. Resolution options given in Options will override those that appear in the base DNSPolicy. Type array 3.1.180. .spec.install.spec.deployments[].spec.template.spec.dnsConfig.options[] Description PodDNSConfigOption defines DNS resolver options of a pod. Type object Property Type Description name string Required. value string 3.1.181. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers Description List of ephemeral containers run in this pod. Ephemeral containers may be run in an existing pod to perform user-initiated actions such as debugging. This list cannot be specified when creating a pod, and it cannot be modified by updating the pod spec. In order to add an ephemeral container to an existing pod, use the pod's ephemeralcontainers subresource. Type array 3.1.182. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[] Description An EphemeralContainer is a temporary container that you may add to an existing Pod for user-initiated activities such as debugging. Ephemeral containers have no resource or scheduling guarantees, and they will not be restarted when they exit or when a Pod is removed or restarted. The kubelet may evict a Pod if an ephemeral container causes the Pod to exceed its resource allocation. To add an ephemeral container, use the ephemeralcontainers subresource of an existing Pod. Ephemeral containers may not be removed or restarted. Type object Required name Property Type Description args array (string) Arguments to the entrypoint. The image's CMD is used if this is not provided. Variable references USD(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double are reduced to a single USD, which allows for escaping the USD(VAR_NAME) syntax: i.e. "(VAR_NAME)" will produce the string literal "USD(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell command array (string) Entrypoint array. Not executed within a shell. The image's ENTRYPOINT is used if this is not provided. Variable references USD(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double are reduced to a single USD, which allows for escaping the USD(VAR_NAME) syntax: i.e. "(VAR_NAME)" will produce the string literal "USD(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell env array List of environment variables to set in the container. Cannot be updated. 
env[] object EnvVar represents an environment variable present in a Container. envFrom array List of sources to populate environment variables in the container. The keys defined within a source must be a C_IDENTIFIER. All invalid keys will be reported as an event when the container is starting. When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by an Env with a duplicate key will take precedence. Cannot be updated. envFrom[] object EnvFromSource represents the source of a set of ConfigMaps image string Container image name. More info: https://kubernetes.io/docs/concepts/containers/images imagePullPolicy string Image pull policy. One of Always, Never, IfNotPresent. Defaults to Always if :latest tag is specified, or IfNotPresent otherwise. Cannot be updated. More info: https://kubernetes.io/docs/concepts/containers/images#updating-images lifecycle object Lifecycle is not allowed for ephemeral containers. livenessProbe object Probes are not allowed for ephemeral containers. name string Name of the ephemeral container specified as a DNS_LABEL. This name must be unique among all containers, init containers and ephemeral containers. ports array Ports are not allowed for ephemeral containers. ports[] object ContainerPort represents a network port in a single container. readinessProbe object Probes are not allowed for ephemeral containers. resizePolicy array Resources resize policy for the container. resizePolicy[] object ContainerResizePolicy represents resource resize policy for the container. resources object Resources are not allowed for ephemeral containers. Ephemeral containers use spare resources already allocated to the pod. securityContext object Optional: SecurityContext defines the security options the ephemeral container should be run with. If set, the fields of SecurityContext override the equivalent fields of PodSecurityContext. startupProbe object Probes are not allowed for ephemeral containers. stdin boolean Whether this container should allocate a buffer for stdin in the container runtime. If this is not set, reads from stdin in the container will always result in EOF. Default is false. stdinOnce boolean Whether the container runtime should close the stdin channel after it has been opened by a single attach. When stdin is true the stdin stream will remain open across multiple attach sessions. If stdinOnce is set to true, stdin is opened on container start, is empty until the first client attaches to stdin, and then remains open and accepts data until the client disconnects, at which time stdin is closed and remains closed until the container is restarted. If this flag is false, a container processes that reads from stdin will never receive an EOF. Default is false targetContainerName string If set, the name of the container from PodSpec that this ephemeral container targets. The ephemeral container will be run in the namespaces (IPC, PID, etc) of this container. If not set then the ephemeral container uses the namespaces configured in the Pod spec. The container runtime must implement support for this feature. If the runtime does not support namespace targeting then the result of setting this field is undefined. terminationMessagePath string Optional: Path at which the file to which the container's termination message will be written is mounted into the container's filesystem. Message written is intended to be brief final status, such as an assertion failure message. 
Will be truncated by the node if greater than 4096 bytes. The total message length across all containers will be limited to 12kb. Defaults to /dev/termination-log. Cannot be updated. terminationMessagePolicy string Indicate how the termination message should be populated. File will use the contents of terminationMessagePath to populate the container status message on both success and failure. FallbackToLogsOnError will use the last chunk of container log output if the termination message file is empty and the container exited with an error. The log output is limited to 2048 bytes or 80 lines, whichever is smaller. Defaults to File. Cannot be updated. tty boolean Whether this container should allocate a TTY for itself, also requires 'stdin' to be true. Default is false. volumeDevices array volumeDevices is the list of block devices to be used by the container. volumeDevices[] object volumeDevice describes a mapping of a raw block device within a container. volumeMounts array Pod volumes to mount into the container's filesystem. Subpath mounts are not allowed for ephemeral containers. Cannot be updated. volumeMounts[] object VolumeMount describes a mounting of a Volume within a container. workingDir string Container's working directory. If not specified, the container runtime's default will be used, which might be configured in the container image. Cannot be updated. 3.1.183. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].env Description List of environment variables to set in the container. Cannot be updated. Type array 3.1.184. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].env[] Description EnvVar represents an environment variable present in a Container. Type object Required name Property Type Description name string Name of the environment variable. Must be a C_IDENTIFIER. value string Variable references USD(VAR_NAME) are expanded using the previously defined environment variables in the container and any service environment variables. If a variable cannot be resolved, the reference in the input string will be unchanged. Double USD references (USDUSD) are reduced to a single USD, which allows for escaping the USD(VAR_NAME) syntax: i.e. "USDUSD(VAR_NAME)" will produce the string literal "USD(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Defaults to "". valueFrom object Source for the environment variable's value. Cannot be used if value is not empty. 3.1.185. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].env[].valueFrom Description Source for the environment variable's value. Cannot be used if value is not empty. Type object Property Type Description configMapKeyRef object Selects a key of a ConfigMap. fieldRef object Selects a field of the pod: supports metadata.name, metadata.namespace, metadata.labels['<KEY>'] , metadata.annotations['<KEY>'] , spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podIPs. resourceFieldRef object Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, limits.ephemeral-storage, requests.cpu, requests.memory and requests.ephemeral-storage) are currently supported. secretKeyRef object Selects a key of a secret in the pod's namespace.
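The EnvVar and valueFrom structures documented here are the same ones used by the regular containers in the deployment, so they are most often written there. A minimal sketch, with placeholder variable and Secret names, that draws one value from the pod's own metadata and another from a Secret key:

  env:
  - name: WATCH_NAMESPACE                  # placeholder variable name
    valueFrom:
      fieldRef:
        fieldPath: metadata.namespace      # one of the supported pod fields listed above
  - name: DATABASE_PASSWORD                # placeholder variable name
    valueFrom:
      secretKeyRef:
        name: example-db-credentials       # hypothetical Secret in the same namespace
        key: password
        optional: false                    # the container will not start if the key is missing

Because value and valueFrom are mutually exclusive, each entry sets exactly one of them.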
3.1.186. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].env[].valueFrom.configMapKeyRef Description Selects a key of a ConfigMap. Type object Required key Property Type Description key string The key to select. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap or its key must be defined 3.1.187. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].env[].valueFrom.fieldRef Description Selects a field of the pod: supports metadata.name, metadata.namespace, metadata.labels['<KEY>'] , metadata.annotations['<KEY>'] , spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podIPs. Type object Required fieldPath Property Type Description apiVersion string Version of the schema the FieldPath is written in terms of, defaults to "v1". fieldPath string Path of the field to select in the specified API version. 3.1.188. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].env[].valueFrom.resourceFieldRef Description Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, limits.ephemeral-storage, requests.cpu, requests.memory and requests.ephemeral-storage) are currently supported. Type object Required resource Property Type Description containerName string Container name: required for volumes, optional for env vars divisor integer-or-string Specifies the output format of the exposed resources, defaults to "1" resource string Required: resource to select 3.1.189. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].env[].valueFrom.secretKeyRef Description Selects a key of a secret in the pod's namespace Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 3.1.190. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].envFrom Description List of sources to populate environment variables in the container. The keys defined within a source must be a C_IDENTIFIER. All invalid keys will be reported as an event when the container is starting. When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by an Env with a duplicate key will take precedence. Cannot be updated. Type array 3.1.191. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].envFrom[] Description EnvFromSource represents the source of a set of ConfigMaps Type object Property Type Description configMapRef object The ConfigMap to select from prefix string An optional identifier to prepend to each key in the ConfigMap. Must be a C_IDENTIFIER. secretRef object The Secret to select from 3.1.192. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].envFrom[].configMapRef Description The ConfigMap to select from Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap must be defined 3.1.193. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].envFrom[].secretRef Description The Secret to select from Type object Property Type Description name string Name of the referent.
More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret must be defined 3.1.194. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].lifecycle Description Lifecycle is not allowed for ephemeral containers. Type object Property Type Description postStart object PostStart is called immediately after a container is created. If the handler fails, the container is terminated and restarted according to its restart policy. Other management of the container blocks until the hook completes. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks preStop object PreStop is called immediately before a container is terminated due to an API request or management event such as liveness/startup probe failure, preemption, resource contention, etc. The handler is not called if the container crashes or exits. The Pod's termination grace period countdown begins before the PreStop hook is executed. Regardless of the outcome of the handler, the container will eventually terminate within the Pod's termination grace period (unless delayed by finalizers). Other management of the container blocks until the hook completes or until the termination grace period is reached. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks 3.1.195. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].lifecycle.postStart Description PostStart is called immediately after a container is created. If the handler fails, the container is terminated and restarted according to its restart policy. Other management of the container blocks until the hook completes. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks Type object Property Type Description exec object Exec specifies the action to take. httpGet object HTTPGet specifies the http request to perform. tcpSocket object Deprecated. TCPSocket is NOT supported as a LifecycleHandler and kept for the backward compatibility. There are no validation of this field and lifecycle hooks will fail in runtime when tcp handler is specified. 3.1.196. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].lifecycle.postStart.exec Description Exec specifies the action to take. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 3.1.197. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].lifecycle.postStart.httpGet Description HTTPGet specifies the http request to perform. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port integer-or-string Name or number of the port to access on the container. 
Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. 3.1.198. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].lifecycle.postStart.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 3.1.199. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].lifecycle.postStart.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 3.1.200. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].lifecycle.postStart.tcpSocket Description Deprecated. TCPSocket is NOT supported as a LifecycleHandler and kept for the backward compatibility. There are no validation of this field and lifecycle hooks will fail in runtime when tcp handler is specified. Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port integer-or-string Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 3.1.201. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].lifecycle.preStop Description PreStop is called immediately before a container is terminated due to an API request or management event such as liveness/startup probe failure, preemption, resource contention, etc. The handler is not called if the container crashes or exits. The Pod's termination grace period countdown begins before the PreStop hook is executed. Regardless of the outcome of the handler, the container will eventually terminate within the Pod's termination grace period (unless delayed by finalizers). Other management of the container blocks until the hook completes or until the termination grace period is reached. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks Type object Property Type Description exec object Exec specifies the action to take. httpGet object HTTPGet specifies the http request to perform. tcpSocket object Deprecated. TCPSocket is NOT supported as a LifecycleHandler and kept for the backward compatibility. There are no validation of this field and lifecycle hooks will fail in runtime when tcp handler is specified. 3.1.202. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].lifecycle.preStop.exec Description Exec specifies the action to take. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 3.1.203. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].lifecycle.preStop.httpGet Description HTTPGet specifies the http request to perform. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. 
httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port integer-or-string Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. 3.1.204. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].lifecycle.preStop.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 3.1.205. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].lifecycle.preStop.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 3.1.206. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].lifecycle.preStop.tcpSocket Description Deprecated. TCPSocket is NOT supported as a LifecycleHandler and kept for the backward compatibility. There are no validation of this field and lifecycle hooks will fail in runtime when tcp handler is specified. Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port integer-or-string Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 3.1.207. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].livenessProbe Description Probes are not allowed for ephemeral containers. Type object Property Type Description exec object Exec specifies the action to take. failureThreshold integer Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. grpc object GRPC specifies an action involving a GRPC port. httpGet object HTTPGet specifies the http request to perform. initialDelaySeconds integer Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes periodSeconds integer How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1. successThreshold integer Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. tcpSocket object TCPSocket specifies an action involving a TCP port. terminationGracePeriodSeconds integer Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. 
spec.terminationGracePeriodSeconds is used if unset. timeoutSeconds integer Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes 3.1.208. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].livenessProbe.exec Description Exec specifies the action to take. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 3.1.209. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].livenessProbe.grpc Description GRPC specifies an action involving a GRPC port. Type object Required port Property Type Description port integer Port number of the gRPC service. Number must be in the range 1 to 65535. service string Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md ). If this is not specified, the default behavior is defined by gRPC. 3.1.210. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].livenessProbe.httpGet Description HTTPGet specifies the http request to perform. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port integer-or-string Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. 3.1.211. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].livenessProbe.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 3.1.212. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].livenessProbe.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 3.1.213. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].livenessProbe.tcpSocket Description TCPSocket specifies an action involving a TCP port. Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port integer-or-string Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 3.1.214. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].ports Description Ports are not allowed for ephemeral containers. Type array 3.1.215. 
.spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].ports[] Description ContainerPort represents a network port in a single container. Type object Required containerPort Property Type Description containerPort integer Number of port to expose on the pod's IP address. This must be a valid port number, 0 < x < 65536. hostIP string What host IP to bind the external port to. hostPort integer Number of port to expose on the host. If specified, this must be a valid port number, 0 < x < 65536. If HostNetwork is specified, this must match ContainerPort. Most containers do not need this. name string If specified, this must be an IANA_SVC_NAME and unique within the pod. Each named port in a pod must have a unique name. Name for the port that can be referred to by services. protocol string Protocol for port. Must be UDP, TCP, or SCTP. Defaults to "TCP". 3.1.216. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].readinessProbe Description Probes are not allowed for ephemeral containers. Type object Property Type Description exec object Exec specifies the action to take. failureThreshold integer Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. grpc object GRPC specifies an action involving a GRPC port. httpGet object HTTPGet specifies the http request to perform. initialDelaySeconds integer Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes periodSeconds integer How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1. successThreshold integer Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. tcpSocket object TCPSocket specifies an action involving a TCP port. terminationGracePeriodSeconds integer Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. timeoutSeconds integer Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes 3.1.217. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].readinessProbe.exec Description Exec specifies the action to take. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. 
Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 3.1.218. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].readinessProbe.grpc Description GRPC specifies an action involving a GRPC port. Type object Required port Property Type Description port integer Port number of the gRPC service. Number must be in the range 1 to 65535. service string Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md ). If this is not specified, the default behavior is defined by gRPC. 3.1.219. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].readinessProbe.httpGet Description HTTPGet specifies the http request to perform. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port integer-or-string Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. 3.1.220. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].readinessProbe.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 3.1.221. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].readinessProbe.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 3.1.222. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].readinessProbe.tcpSocket Description TCPSocket specifies an action involving a TCP port. Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port integer-or-string Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 3.1.223. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].resizePolicy Description Resources resize policy for the container. Type array 3.1.224. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].resizePolicy[] Description ContainerResizePolicy represents resource resize policy for the container. Type object Required resourceName restartPolicy Property Type Description resourceName string Name of the resource to which this resource resize policy applies. Supported values: cpu, memory. restartPolicy string Restart policy to apply when specified resource is resized. If not specified, it defaults to NotRequired. 3.1.225. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].resources Description Resources are not allowed for ephemeral containers. Ephemeral containers use spare resources already allocated to the pod. Type object Property Type Description claims array Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. 
This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. claims[] object ResourceClaim references one entry in PodSpec.ResourceClaims. limits integer-or-string Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ requests integer-or-string Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. Requests cannot exceed Limits. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ 3.1.226. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].resources.claims Description Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. Type array 3.1.227. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].resources.claims[] Description ResourceClaim references one entry in PodSpec.ResourceClaims. Type object Required name Property Type Description name string Name must match the name of one entry in pod.spec.resourceClaims of the Pod where this field is used. It makes that resource available inside a container. 3.1.228. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].securityContext Description Optional: SecurityContext defines the security options the ephemeral container should be run with. If set, the fields of SecurityContext override the equivalent fields of PodSecurityContext. Type object Property Type Description allowPrivilegeEscalation boolean AllowPrivilegeEscalation controls whether a process can gain more privileges than its parent process. This bool directly controls if the no_new_privs flag will be set on the container process. AllowPrivilegeEscalation is true always when the container is: 1) run as Privileged 2) has CAP_SYS_ADMIN Note that this field cannot be set when spec.os.name is windows. capabilities object The capabilities to add/drop when running containers. Defaults to the default set of capabilities granted by the container runtime. Note that this field cannot be set when spec.os.name is windows. privileged boolean Run container in privileged mode. Processes in privileged containers are essentially equivalent to root on the host. Defaults to false. Note that this field cannot be set when spec.os.name is windows. procMount string procMount denotes the type of proc mount to use for the containers. The default is DefaultProcMount which uses the container runtime defaults for readonly paths and masked paths. This requires the ProcMountType feature flag to be enabled. Note that this field cannot be set when spec.os.name is windows. readOnlyRootFilesystem boolean Whether this container has a read-only root filesystem. Default is false. Note that this field cannot be set when spec.os.name is windows. runAsGroup integer The GID to run the entrypoint of the container process. Uses runtime default if unset. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. 
runAsNonRoot boolean Indicates that the container must run as a non-root user. If true, the Kubelet will validate the image at runtime to ensure that it does not run as UID 0 (root) and fail to start the container if it does. If unset or false, no such validation will be performed. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. runAsUser integer The UID to run the entrypoint of the container process. Defaults to user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. seLinuxOptions object The SELinux context to be applied to the container. If unspecified, the container runtime will allocate a random SELinux context for each container. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. seccompProfile object The seccomp options to use by this container. If seccomp options are provided at both the pod & container level, the container options override the pod options. Note that this field cannot be set when spec.os.name is windows. windowsOptions object The Windows specific settings applied to all containers. If unspecified, the options from the PodSecurityContext will be used. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is linux. 3.1.229. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].securityContext.capabilities Description The capabilities to add/drop when running containers. Defaults to the default set of capabilities granted by the container runtime. Note that this field cannot be set when spec.os.name is windows. Type object Property Type Description add array (string) Added capabilities drop array (string) Removed capabilities 3.1.230. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].securityContext.seLinuxOptions Description The SELinux context to be applied to the container. If unspecified, the container runtime will allocate a random SELinux context for each container. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. Type object Property Type Description level string Level is SELinux level label that applies to the container. role string Role is a SELinux role label that applies to the container. type string Type is a SELinux type label that applies to the container. user string User is a SELinux user label that applies to the container. 3.1.231. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].securityContext.seccompProfile Description The seccomp options to use by this container. If seccomp options are provided at both the pod & container level, the container options override the pod options. Note that this field cannot be set when spec.os.name is windows. Type object Required type Property Type Description localhostProfile string localhostProfile indicates a profile defined in a file on the node should be used. 
The profile must be preconfigured on the node to work. Must be a descending path, relative to the kubelet's configured seccomp profile location. Must only be set if type is "Localhost". type string type indicates which kind of seccomp profile will be applied. Valid options are: Localhost - a profile defined in a file on the node should be used. RuntimeDefault - the container runtime default profile should be used. Unconfined - no profile should be applied. 3.1.232. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].securityContext.windowsOptions Description The Windows specific settings applied to all containers. If unspecified, the options from the PodSecurityContext will be used. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is linux. Type object Property Type Description gmsaCredentialSpec string GMSACredentialSpec is where the GMSA admission webhook ( https://github.com/kubernetes-sigs/windows-gmsa ) inlines the contents of the GMSA credential spec named by the GMSACredentialSpecName field. gmsaCredentialSpecName string GMSACredentialSpecName is the name of the GMSA credential spec to use. hostProcess boolean HostProcess determines if a container should be run as a 'Host Process' container. This field is alpha-level and will only be honored by components that enable the WindowsHostProcessContainers feature flag. Setting this field without the feature flag will result in errors when validating the Pod. All of a Pod's containers must have the same effective HostProcess value (it is not allowed to have a mix of HostProcess containers and non-HostProcess containers). In addition, if HostProcess is true then HostNetwork must also be set to true. runAsUserName string The UserName in Windows to run the entrypoint of the container process. Defaults to the user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. 3.1.233. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].startupProbe Description Probes are not allowed for ephemeral containers. Type object Property Type Description exec object Exec specifies the action to take. failureThreshold integer Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. grpc object GRPC specifies an action involving a GRPC port. httpGet object HTTPGet specifies the http request to perform. initialDelaySeconds integer Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes periodSeconds integer How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1. successThreshold integer Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. tcpSocket object TCPSocket specifies an action involving a TCP port. terminationGracePeriodSeconds integer Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. 
Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. timeoutSeconds integer Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes 3.1.234. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].startupProbe.exec Description Exec specifies the action to take. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 3.1.235. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].startupProbe.grpc Description GRPC specifies an action involving a GRPC port. Type object Required port Property Type Description port integer Port number of the gRPC service. Number must be in the range 1 to 65535. service string Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md ). If this is not specified, the default behavior is defined by gRPC. 3.1.236. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].startupProbe.httpGet Description HTTPGet specifies the http request to perform. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port integer-or-string Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. 3.1.237. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].startupProbe.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 3.1.238. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].startupProbe.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 3.1.239. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].startupProbe.tcpSocket Description TCPSocket specifies an action involving a TCP port. Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. 
port integer-or-string Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 3.1.240. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].volumeDevices Description volumeDevices is the list of block devices to be used by the container. Type array 3.1.241. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].volumeDevices[] Description volumeDevice describes a mapping of a raw block device within a container. Type object Required devicePath name Property Type Description devicePath string devicePath is the path inside of the container that the device will be mapped to. name string name must match the name of a persistentVolumeClaim in the pod 3.1.242. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].volumeMounts Description Pod volumes to mount into the container's filesystem. Subpath mounts are not allowed for ephemeral containers. Cannot be updated. Type array 3.1.243. .spec.install.spec.deployments[].spec.template.spec.ephemeralContainers[].volumeMounts[] Description VolumeMount describes a mounting of a Volume within a container. Type object Required mountPath name Property Type Description mountPath string Path within the container at which the volume should be mounted. Must not contain ':'. mountPropagation string mountPropagation determines how mounts are propagated from the host to container and the other way around. When not set, MountPropagationNone is used. This field is beta in 1.10. name string This must match the Name of a Volume. readOnly boolean Mounted read-only if true, read-write otherwise (false or unspecified). Defaults to false. subPath string Path within the volume from which the container's volume should be mounted. Defaults to "" (volume's root). subPathExpr string Expanded path within the volume from which the container's volume should be mounted. Behaves similarly to SubPath but environment variable references USD(VAR_NAME) are expanded using the container's environment. Defaults to "" (volume's root). SubPathExpr and SubPath are mutually exclusive. 3.1.244. .spec.install.spec.deployments[].spec.template.spec.hostAliases Description HostAliases is an optional list of hosts and IPs that will be injected into the pod's hosts file if specified. This is only valid for non-hostNetwork pods. Type array 3.1.245. .spec.install.spec.deployments[].spec.template.spec.hostAliases[] Description HostAlias holds the mapping between IP and hostnames that will be injected as an entry in the pod's hosts file. Type object Property Type Description hostnames array (string) Hostnames for the above IP address. ip string IP address of the host file entry. 3.1.246. .spec.install.spec.deployments[].spec.template.spec.imagePullSecrets Description ImagePullSecrets is an optional list of references to secrets in the same namespace to use for pulling any of the images used by this PodSpec. If specified, these secrets will be passed to individual puller implementations for them to use. More info: https://kubernetes.io/docs/concepts/containers/images#specifying-imagepullsecrets-on-a-pod Type array 3.1.247. .spec.install.spec.deployments[].spec.template.spec.imagePullSecrets[] Description LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace. Type object Property Type Description name string Name of the referent. 
More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? 3.1.248. .spec.install.spec.deployments[].spec.template.spec.initContainers Description List of initialization containers belonging to the pod. Init containers are executed in order prior to containers being started. If any init container fails, the pod is considered to have failed and is handled according to its restartPolicy. The name for an init container or normal container must be unique among all containers. Init containers may not have Lifecycle actions, Readiness probes, Liveness probes, or Startup probes. The resourceRequirements of an init container are taken into account during scheduling by finding the highest request/limit for each resource type, and then using the max of that value or the sum of the normal containers. Limits are applied to init containers in a similar fashion. Init containers cannot currently be added or removed. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/init-containers/ Type array 3.1.249. .spec.install.spec.deployments[].spec.template.spec.initContainers[] Description A single application container that you want to run within a pod. Type object Required name Property Type Description args array (string) Arguments to the entrypoint. The container image's CMD is used if this is not provided. Variable references USD(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double USDUSD are reduced to a single USD, which allows for escaping the USD(VAR_NAME) syntax: i.e. "USDUSD(VAR_NAME)" will produce the string literal "USD(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell command array (string) Entrypoint array. Not executed within a shell. The container image's ENTRYPOINT is used if this is not provided. Variable references USD(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double USDUSD are reduced to a single USD, which allows for escaping the USD(VAR_NAME) syntax: i.e. "USDUSD(VAR_NAME)" will produce the string literal "USD(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell env array List of environment variables to set in the container. Cannot be updated. env[] object EnvVar represents an environment variable present in a Container. envFrom array List of sources to populate environment variables in the container. The keys defined within a source must be a C_IDENTIFIER. All invalid keys will be reported as an event when the container is starting. When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by an Env with a duplicate key will take precedence. Cannot be updated. envFrom[] object EnvFromSource represents the source of a set of ConfigMaps image string Container image name.
More info: https://kubernetes.io/docs/concepts/containers/images This field is optional to allow higher level config management to default or override container images in workload controllers like Deployments and StatefulSets. imagePullPolicy string Image pull policy. One of Always, Never, IfNotPresent. Defaults to Always if :latest tag is specified, or IfNotPresent otherwise. Cannot be updated. More info: https://kubernetes.io/docs/concepts/containers/images#updating-images lifecycle object Actions that the management system should take in response to container lifecycle events. Cannot be updated. livenessProbe object Periodic probe of container liveness. Container will be restarted if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes name string Name of the container specified as a DNS_LABEL. Each container in a pod must have a unique name (DNS_LABEL). Cannot be updated. ports array List of ports to expose from the container. Not specifying a port here DOES NOT prevent that port from being exposed. Any port which is listening on the default "0.0.0.0" address inside a container will be accessible from the network. Modifying this array with strategic merge patch may corrupt the data. For more information See https://github.com/kubernetes/kubernetes/issues/108255 . Cannot be updated. ports[] object ContainerPort represents a network port in a single container. readinessProbe object Periodic probe of container service readiness. Container will be removed from service endpoints if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes resizePolicy array Resources resize policy for the container. resizePolicy[] object ContainerResizePolicy represents resource resize policy for the container. resources object Compute Resources required by this container. Cannot be updated. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ securityContext object SecurityContext defines the security options the container should be run with. If set, the fields of SecurityContext override the equivalent fields of PodSecurityContext. More info: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/ startupProbe object StartupProbe indicates that the Pod has successfully initialized. If specified, no other probes are executed until this completes successfully. If this probe fails, the Pod will be restarted, just as if the livenessProbe failed. This can be used to provide different probe parameters at the beginning of a Pod's lifecycle, when it might take a long time to load data or warm a cache, than during steady-state operation. This cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes stdin boolean Whether this container should allocate a buffer for stdin in the container runtime. If this is not set, reads from stdin in the container will always result in EOF. Default is false. stdinOnce boolean Whether the container runtime should close the stdin channel after it has been opened by a single attach. When stdin is true the stdin stream will remain open across multiple attach sessions. If stdinOnce is set to true, stdin is opened on container start, is empty until the first client attaches to stdin, and then remains open and accepts data until the client disconnects, at which time stdin is closed and remains closed until the container is restarted. 
If this flag is false, container processes that read from stdin will never receive an EOF. Default is false. terminationMessagePath string Optional: Path at which the file to which the container's termination message will be written is mounted into the container's filesystem. The message written is intended to be a brief final status, such as an assertion failure message. Will be truncated by the node if greater than 4096 bytes. The total message length across all containers will be limited to 12kb. Defaults to /dev/termination-log. Cannot be updated. terminationMessagePolicy string Indicates how the termination message should be populated. File will use the contents of terminationMessagePath to populate the container status message on both success and failure. FallbackToLogsOnError will use the last chunk of container log output if the termination message file is empty and the container exited with an error. The log output is limited to 2048 bytes or 80 lines, whichever is smaller. Defaults to File. Cannot be updated. tty boolean Whether this container should allocate a TTY for itself, also requires 'stdin' to be true. Default is false. volumeDevices array volumeDevices is the list of block devices to be used by the container. volumeDevices[] object volumeDevice describes a mapping of a raw block device within a container. volumeMounts array Pod volumes to mount into the container's filesystem. Cannot be updated. volumeMounts[] object VolumeMount describes a mounting of a Volume within a container. workingDir string Container's working directory. If not specified, the container runtime's default will be used, which might be configured in the container image. Cannot be updated. 3.1.250. .spec.install.spec.deployments[].spec.template.spec.initContainers[].env Description List of environment variables to set in the container. Cannot be updated. Type array 3.1.251. .spec.install.spec.deployments[].spec.template.spec.initContainers[].env[] Description EnvVar represents an environment variable present in a Container. Type object Required name Property Type Description name string Name of the environment variable. Must be a C_IDENTIFIER. value string Variable references USD(VAR_NAME) are expanded using the previously defined environment variables in the container and any service environment variables. If a variable cannot be resolved, the reference in the input string will be unchanged. Double USDUSD are reduced to a single USD, which allows for escaping the USD(VAR_NAME) syntax: i.e. "USDUSD(VAR_NAME)" will produce the string literal "USD(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Defaults to "". valueFrom object Source for the environment variable's value. Cannot be used if value is not empty. 3.1.252. .spec.install.spec.deployments[].spec.template.spec.initContainers[].env[].valueFrom Description Source for the environment variable's value. Cannot be used if value is not empty. Type object Property Type Description configMapKeyRef object Selects a key of a ConfigMap. fieldRef object Selects a field of the pod: supports metadata.name, metadata.namespace, metadata.labels['<KEY>'], metadata.annotations['<KEY>'], spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podIPs. resourceFieldRef object Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, limits.ephemeral-storage, requests.cpu, requests.memory and requests.ephemeral-storage) are currently supported.
secretKeyRef object Selects a key of a secret in the pod's namespace 3.1.253. .spec.install.spec.deployments[].spec.template.spec.initContainers[].env[].valueFrom.configMapKeyRef Description Selects a key of a ConfigMap. Type object Required key Property Type Description key string The key to select. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap or its key must be defined 3.1.254. .spec.install.spec.deployments[].spec.template.spec.initContainers[].env[].valueFrom.fieldRef Description Selects a field of the pod: supports metadata.name, metadata.namespace, metadata.labels['<KEY>'] , metadata.annotations['<KEY>'] , spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podIPs. Type object Required fieldPath Property Type Description apiVersion string Version of the schema the FieldPath is written in terms of, defaults to "v1". fieldPath string Path of the field to select in the specified API version. 3.1.255. .spec.install.spec.deployments[].spec.template.spec.initContainers[].env[].valueFrom.resourceFieldRef Description Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, limits.ephemeral-storage, requests.cpu, requests.memory and requests.ephemeral-storage) are currently supported. Type object Required resource Property Type Description containerName string Container name: required for volumes, optional for env vars divisor integer-or-string Specifies the output format of the exposed resources, defaults to "1" resource string Required: resource to select 3.1.256. .spec.install.spec.deployments[].spec.template.spec.initContainers[].env[].valueFrom.secretKeyRef Description Selects a key of a secret in the pod's namespace Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 3.1.257. .spec.install.spec.deployments[].spec.template.spec.initContainers[].envFrom Description List of sources to populate environment variables in the container. The keys defined within a source must be a C_IDENTIFIER. All invalid keys will be reported as an event when the container is starting. When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by an Env with a duplicate key will take precedence. Cannot be updated. Type array 3.1.258. .spec.install.spec.deployments[].spec.template.spec.initContainers[].envFrom[] Description EnvFromSource represents the source of a set of ConfigMaps Type object Property Type Description configMapRef object The ConfigMap to select from prefix string An optional identifier to prepend to each key in the ConfigMap. Must be a C_IDENTIFIER. secretRef object The Secret to select from 3.1.259. .spec.install.spec.deployments[].spec.template.spec.initContainers[].envFrom[].configMapRef Description The ConfigMap to select from Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? 
optional boolean Specify whether the ConfigMap must be defined 3.1.260. .spec.install.spec.deployments[].spec.template.spec.initContainers[].envFrom[].secretRef Description The Secret to select from Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret must be defined 3.1.261. .spec.install.spec.deployments[].spec.template.spec.initContainers[].lifecycle Description Actions that the management system should take in response to container lifecycle events. Cannot be updated. Type object Property Type Description postStart object PostStart is called immediately after a container is created. If the handler fails, the container is terminated and restarted according to its restart policy. Other management of the container blocks until the hook completes. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks preStop object PreStop is called immediately before a container is terminated due to an API request or management event such as liveness/startup probe failure, preemption, resource contention, etc. The handler is not called if the container crashes or exits. The Pod's termination grace period countdown begins before the PreStop hook is executed. Regardless of the outcome of the handler, the container will eventually terminate within the Pod's termination grace period (unless delayed by finalizers). Other management of the container blocks until the hook completes or until the termination grace period is reached. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks 3.1.262. .spec.install.spec.deployments[].spec.template.spec.initContainers[].lifecycle.postStart Description PostStart is called immediately after a container is created. If the handler fails, the container is terminated and restarted according to its restart policy. Other management of the container blocks until the hook completes. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks Type object Property Type Description exec object Exec specifies the action to take. httpGet object HTTPGet specifies the http request to perform. tcpSocket object Deprecated. TCPSocket is NOT supported as a LifecycleHandler and kept for the backward compatibility. There are no validation of this field and lifecycle hooks will fail in runtime when tcp handler is specified. 3.1.263. .spec.install.spec.deployments[].spec.template.spec.initContainers[].lifecycle.postStart.exec Description Exec specifies the action to take. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 3.1.264. .spec.install.spec.deployments[].spec.template.spec.initContainers[].lifecycle.postStart.httpGet Description HTTPGet specifies the http request to perform. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. 
httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port integer-or-string Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. 3.1.265. .spec.install.spec.deployments[].spec.template.spec.initContainers[].lifecycle.postStart.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 3.1.266. .spec.install.spec.deployments[].spec.template.spec.initContainers[].lifecycle.postStart.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 3.1.267. .spec.install.spec.deployments[].spec.template.spec.initContainers[].lifecycle.postStart.tcpSocket Description Deprecated. TCPSocket is NOT supported as a LifecycleHandler and kept for the backward compatibility. There are no validation of this field and lifecycle hooks will fail in runtime when tcp handler is specified. Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port integer-or-string Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 3.1.268. .spec.install.spec.deployments[].spec.template.spec.initContainers[].lifecycle.preStop Description PreStop is called immediately before a container is terminated due to an API request or management event such as liveness/startup probe failure, preemption, resource contention, etc. The handler is not called if the container crashes or exits. The Pod's termination grace period countdown begins before the PreStop hook is executed. Regardless of the outcome of the handler, the container will eventually terminate within the Pod's termination grace period (unless delayed by finalizers). Other management of the container blocks until the hook completes or until the termination grace period is reached. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks Type object Property Type Description exec object Exec specifies the action to take. httpGet object HTTPGet specifies the http request to perform. tcpSocket object Deprecated. TCPSocket is NOT supported as a LifecycleHandler and kept for the backward compatibility. There are no validation of this field and lifecycle hooks will fail in runtime when tcp handler is specified. 3.1.269. .spec.install.spec.deployments[].spec.template.spec.initContainers[].lifecycle.preStop.exec Description Exec specifies the action to take. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 3.1.270. 
.spec.install.spec.deployments[].spec.template.spec.initContainers[].lifecycle.preStop.httpGet Description HTTPGet specifies the http request to perform. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port integer-or-string Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. 3.1.271. .spec.install.spec.deployments[].spec.template.spec.initContainers[].lifecycle.preStop.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 3.1.272. .spec.install.spec.deployments[].spec.template.spec.initContainers[].lifecycle.preStop.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 3.1.273. .spec.install.spec.deployments[].spec.template.spec.initContainers[].lifecycle.preStop.tcpSocket Description Deprecated. TCPSocket is NOT supported as a LifecycleHandler and kept for the backward compatibility. There are no validation of this field and lifecycle hooks will fail in runtime when tcp handler is specified. Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port integer-or-string Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 3.1.274. .spec.install.spec.deployments[].spec.template.spec.initContainers[].livenessProbe Description Periodic probe of container liveness. Container will be restarted if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes Type object Property Type Description exec object Exec specifies the action to take. failureThreshold integer Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. grpc object GRPC specifies an action involving a GRPC port. httpGet object HTTPGet specifies the http request to perform. initialDelaySeconds integer Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes periodSeconds integer How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1. successThreshold integer Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. tcpSocket object TCPSocket specifies an action involving a TCP port. terminationGracePeriodSeconds integer Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. 
Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. timeoutSeconds integer Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes 3.1.275. .spec.install.spec.deployments[].spec.template.spec.initContainers[].livenessProbe.exec Description Exec specifies the action to take. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 3.1.276. .spec.install.spec.deployments[].spec.template.spec.initContainers[].livenessProbe.grpc Description GRPC specifies an action involving a GRPC port. Type object Required port Property Type Description port integer Port number of the gRPC service. Number must be in the range 1 to 65535. service string Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md ). If this is not specified, the default behavior is defined by gRPC. 3.1.277. .spec.install.spec.deployments[].spec.template.spec.initContainers[].livenessProbe.httpGet Description HTTPGet specifies the http request to perform. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port integer-or-string Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. 3.1.278. .spec.install.spec.deployments[].spec.template.spec.initContainers[].livenessProbe.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 3.1.279. .spec.install.spec.deployments[].spec.template.spec.initContainers[].livenessProbe.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 3.1.280. .spec.install.spec.deployments[].spec.template.spec.initContainers[].livenessProbe.tcpSocket Description TCPSocket specifies an action involving a TCP port. Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. 
port integer-or-string Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 3.1.281. .spec.install.spec.deployments[].spec.template.spec.initContainers[].ports Description List of ports to expose from the container. Not specifying a port here DOES NOT prevent that port from being exposed. Any port which is listening on the default "0.0.0.0" address inside a container will be accessible from the network. Modifying this array with strategic merge patch may corrupt the data. For more information See https://github.com/kubernetes/kubernetes/issues/108255 . Cannot be updated. Type array 3.1.282. .spec.install.spec.deployments[].spec.template.spec.initContainers[].ports[] Description ContainerPort represents a network port in a single container. Type object Required containerPort Property Type Description containerPort integer Number of port to expose on the pod's IP address. This must be a valid port number, 0 < x < 65536. hostIP string What host IP to bind the external port to. hostPort integer Number of port to expose on the host. If specified, this must be a valid port number, 0 < x < 65536. If HostNetwork is specified, this must match ContainerPort. Most containers do not need this. name string If specified, this must be an IANA_SVC_NAME and unique within the pod. Each named port in a pod must have a unique name. Name for the port that can be referred to by services. protocol string Protocol for port. Must be UDP, TCP, or SCTP. Defaults to "TCP". 3.1.283. .spec.install.spec.deployments[].spec.template.spec.initContainers[].readinessProbe Description Periodic probe of container service readiness. Container will be removed from service endpoints if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes Type object Property Type Description exec object Exec specifies the action to take. failureThreshold integer Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. grpc object GRPC specifies an action involving a GRPC port. httpGet object HTTPGet specifies the http request to perform. initialDelaySeconds integer Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes periodSeconds integer How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1. successThreshold integer Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. tcpSocket object TCPSocket specifies an action involving a TCP port. terminationGracePeriodSeconds integer Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. 
Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. timeoutSeconds integer Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes 3.1.284. .spec.install.spec.deployments[].spec.template.spec.initContainers[].readinessProbe.exec Description Exec specifies the action to take. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 3.1.285. .spec.install.spec.deployments[].spec.template.spec.initContainers[].readinessProbe.grpc Description GRPC specifies an action involving a GRPC port. Type object Required port Property Type Description port integer Port number of the gRPC service. Number must be in the range 1 to 65535. service string Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md ). If this is not specified, the default behavior is defined by gRPC. 3.1.286. .spec.install.spec.deployments[].spec.template.spec.initContainers[].readinessProbe.httpGet Description HTTPGet specifies the http request to perform. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port integer-or-string Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. 3.1.287. .spec.install.spec.deployments[].spec.template.spec.initContainers[].readinessProbe.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 3.1.288. .spec.install.spec.deployments[].spec.template.spec.initContainers[].readinessProbe.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 3.1.289. .spec.install.spec.deployments[].spec.template.spec.initContainers[].readinessProbe.tcpSocket Description TCPSocket specifies an action involving a TCP port. Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port integer-or-string Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 3.1.290. .spec.install.spec.deployments[].spec.template.spec.initContainers[].resizePolicy Description Resources resize policy for the container. Type array 3.1.291. 
.spec.install.spec.deployments[].spec.template.spec.initContainers[].resizePolicy[] Description ContainerResizePolicy represents resource resize policy for the container. Type object Required resourceName restartPolicy Property Type Description resourceName string Name of the resource to which this resource resize policy applies. Supported values: cpu, memory. restartPolicy string Restart policy to apply when specified resource is resized. If not specified, it defaults to NotRequired. 3.1.292. .spec.install.spec.deployments[].spec.template.spec.initContainers[].resources Description Compute Resources required by this container. Cannot be updated. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ Type object Property Type Description claims array Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. claims[] object ResourceClaim references one entry in PodSpec.ResourceClaims. limits integer-or-string Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ requests integer-or-string Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. Requests cannot exceed Limits. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ 3.1.293. .spec.install.spec.deployments[].spec.template.spec.initContainers[].resources.claims Description Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. Type array 3.1.294. .spec.install.spec.deployments[].spec.template.spec.initContainers[].resources.claims[] Description ResourceClaim references one entry in PodSpec.ResourceClaims. Type object Required name Property Type Description name string Name must match the name of one entry in pod.spec.resourceClaims of the Pod where this field is used. It makes that resource available inside a container. 3.1.295. .spec.install.spec.deployments[].spec.template.spec.initContainers[].securityContext Description SecurityContext defines the security options the container should be run with. If set, the fields of SecurityContext override the equivalent fields of PodSecurityContext. More info: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/ Type object Property Type Description allowPrivilegeEscalation boolean AllowPrivilegeEscalation controls whether a process can gain more privileges than its parent process. This bool directly controls if the no_new_privs flag will be set on the container process. AllowPrivilegeEscalation is true always when the container is: 1) run as Privileged 2) has CAP_SYS_ADMIN Note that this field cannot be set when spec.os.name is windows. capabilities object The capabilities to add/drop when running containers. Defaults to the default set of capabilities granted by the container runtime. Note that this field cannot be set when spec.os.name is windows. privileged boolean Run container in privileged mode. 
Processes in privileged containers are essentially equivalent to root on the host. Defaults to false. Note that this field cannot be set when spec.os.name is windows. procMount string procMount denotes the type of proc mount to use for the containers. The default is DefaultProcMount which uses the container runtime defaults for readonly paths and masked paths. This requires the ProcMountType feature flag to be enabled. Note that this field cannot be set when spec.os.name is windows. readOnlyRootFilesystem boolean Whether this container has a read-only root filesystem. Default is false. Note that this field cannot be set when spec.os.name is windows. runAsGroup integer The GID to run the entrypoint of the container process. Uses runtime default if unset. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. runAsNonRoot boolean Indicates that the container must run as a non-root user. If true, the Kubelet will validate the image at runtime to ensure that it does not run as UID 0 (root) and fail to start the container if it does. If unset or false, no such validation will be performed. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. runAsUser integer The UID to run the entrypoint of the container process. Defaults to user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. seLinuxOptions object The SELinux context to be applied to the container. If unspecified, the container runtime will allocate a random SELinux context for each container. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. seccompProfile object The seccomp options to use by this container. If seccomp options are provided at both the pod & container level, the container options override the pod options. Note that this field cannot be set when spec.os.name is windows. windowsOptions object The Windows specific settings applied to all containers. If unspecified, the options from the PodSecurityContext will be used. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is linux. 3.1.296. .spec.install.spec.deployments[].spec.template.spec.initContainers[].securityContext.capabilities Description The capabilities to add/drop when running containers. Defaults to the default set of capabilities granted by the container runtime. Note that this field cannot be set when spec.os.name is windows. Type object Property Type Description add array (string) Added capabilities drop array (string) Removed capabilities 3.1.297. .spec.install.spec.deployments[].spec.template.spec.initContainers[].securityContext.seLinuxOptions Description The SELinux context to be applied to the container. If unspecified, the container runtime will allocate a random SELinux context for each container. May also be set in PodSecurityContext. 
If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. Type object Property Type Description level string Level is SELinux level label that applies to the container. role string Role is a SELinux role label that applies to the container. type string Type is a SELinux type label that applies to the container. user string User is a SELinux user label that applies to the container. 3.1.298. .spec.install.spec.deployments[].spec.template.spec.initContainers[].securityContext.seccompProfile Description The seccomp options to use by this container. If seccomp options are provided at both the pod & container level, the container options override the pod options. Note that this field cannot be set when spec.os.name is windows. Type object Required type Property Type Description localhostProfile string localhostProfile indicates a profile defined in a file on the node should be used. The profile must be preconfigured on the node to work. Must be a descending path, relative to the kubelet's configured seccomp profile location. Must only be set if type is "Localhost". type string type indicates which kind of seccomp profile will be applied. Valid options are: Localhost - a profile defined in a file on the node should be used. RuntimeDefault - the container runtime default profile should be used. Unconfined - no profile should be applied. 3.1.299. .spec.install.spec.deployments[].spec.template.spec.initContainers[].securityContext.windowsOptions Description The Windows specific settings applied to all containers. If unspecified, the options from the PodSecurityContext will be used. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is linux. Type object Property Type Description gmsaCredentialSpec string GMSACredentialSpec is where the GMSA admission webhook ( https://github.com/kubernetes-sigs/windows-gmsa ) inlines the contents of the GMSA credential spec named by the GMSACredentialSpecName field. gmsaCredentialSpecName string GMSACredentialSpecName is the name of the GMSA credential spec to use. hostProcess boolean HostProcess determines if a container should be run as a 'Host Process' container. This field is alpha-level and will only be honored by components that enable the WindowsHostProcessContainers feature flag. Setting this field without the feature flag will result in errors when validating the Pod. All of a Pod's containers must have the same effective HostProcess value (it is not allowed to have a mix of HostProcess containers and non-HostProcess containers). In addition, if HostProcess is true then HostNetwork must also be set to true. runAsUserName string The UserName in Windows to run the entrypoint of the container process. Defaults to the user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. 3.1.300. .spec.install.spec.deployments[].spec.template.spec.initContainers[].startupProbe Description StartupProbe indicates that the Pod has successfully initialized. If specified, no other probes are executed until this completes successfully. If this probe fails, the Pod will be restarted, just as if the livenessProbe failed. 
This can be used to provide different probe parameters at the beginning of a Pod's lifecycle, when it might take a long time to load data or warm a cache, than during steady-state operation. This cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes Type object Property Type Description exec object Exec specifies the action to take. failureThreshold integer Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. grpc object GRPC specifies an action involving a GRPC port. httpGet object HTTPGet specifies the http request to perform. initialDelaySeconds integer Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes periodSeconds integer How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1. successThreshold integer Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. tcpSocket object TCPSocket specifies an action involving a TCP port. terminationGracePeriodSeconds integer Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. timeoutSeconds integer Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes 3.1.301. .spec.install.spec.deployments[].spec.template.spec.initContainers[].startupProbe.exec Description Exec specifies the action to take. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 3.1.302. .spec.install.spec.deployments[].spec.template.spec.initContainers[].startupProbe.grpc Description GRPC specifies an action involving a GRPC port. Type object Required port Property Type Description port integer Port number of the gRPC service. Number must be in the range 1 to 65535. service string Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md ). If this is not specified, the default behavior is defined by gRPC. 3.1.303. .spec.install.spec.deployments[].spec.template.spec.initContainers[].startupProbe.httpGet Description HTTPGet specifies the http request to perform. 
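For illustration only, an httpGet action within a probe definition is commonly written as in the following minimal sketch; the path, port, and header shown here are hypothetical values, not defaults taken from this reference:

httpGet:
  path: /healthz            # hypothetical health endpoint
  port: 8080                # may also be the string name of a named container port
  scheme: HTTP
  httpHeaders:
  - name: X-Probe-Token     # hypothetical custom header
    value: "example"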
Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port integer-or-string Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. 3.1.304. .spec.install.spec.deployments[].spec.template.spec.initContainers[].startupProbe.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 3.1.305. .spec.install.spec.deployments[].spec.template.spec.initContainers[].startupProbe.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. value string The header field value 3.1.306. .spec.install.spec.deployments[].spec.template.spec.initContainers[].startupProbe.tcpSocket Description TCPSocket specifies an action involving a TCP port. Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port integer-or-string Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 3.1.307. .spec.install.spec.deployments[].spec.template.spec.initContainers[].volumeDevices Description volumeDevices is the list of block devices to be used by the container. Type array 3.1.308. .spec.install.spec.deployments[].spec.template.spec.initContainers[].volumeDevices[] Description volumeDevice describes a mapping of a raw block device within a container. Type object Required devicePath name Property Type Description devicePath string devicePath is the path inside of the container that the device will be mapped to. name string name must match the name of a persistentVolumeClaim in the pod 3.1.309. .spec.install.spec.deployments[].spec.template.spec.initContainers[].volumeMounts Description Pod volumes to mount into the container's filesystem. Cannot be updated. Type array 3.1.310. .spec.install.spec.deployments[].spec.template.spec.initContainers[].volumeMounts[] Description VolumeMount describes a mounting of a Volume within a container. Type object Required mountPath name Property Type Description mountPath string Path within the container at which the volume should be mounted. Must not contain ':'. mountPropagation string mountPropagation determines how mounts are propagated from the host to container and the other way around. When not set, MountPropagationNone is used. This field is beta in 1.10. name string This must match the Name of a Volume. readOnly boolean Mounted read-only if true, read-write otherwise (false or unspecified). Defaults to false. subPath string Path within the volume from which the container's volume should be mounted. Defaults to "" (volume's root). subPathExpr string Expanded path within the volume from which the container's volume should be mounted. Behaves similarly to SubPath but environment variable references USD(VAR_NAME) are expanded using the container's environment. Defaults to "" (volume's root). 
SubPathExpr and SubPath are mutually exclusive. 3.1.311. .spec.install.spec.deployments[].spec.template.spec.os Description Specifies the OS of the containers in the pod. Some pod and container fields are restricted if this is set. If the OS field is set to linux, the following fields must be unset: - securityContext.windowsOptions If the OS field is set to windows, the following fields must be unset: - spec.hostPID - spec.hostIPC - spec.hostUsers - spec.securityContext.seLinuxOptions - spec.securityContext.seccompProfile - spec.securityContext.fsGroup - spec.securityContext.fsGroupChangePolicy - spec.securityContext.sysctls - spec.shareProcessNamespace - spec.securityContext.runAsUser - spec.securityContext.runAsGroup - spec.securityContext.supplementalGroups - spec.containers[*].securityContext.seLinuxOptions - spec.containers[*].securityContext.seccompProfile - spec.containers[*].securityContext.capabilities - spec.containers[*].securityContext.readOnlyRootFilesystem - spec.containers[*].securityContext.privileged - spec.containers[*].securityContext.allowPrivilegeEscalation - spec.containers[*].securityContext.procMount - spec.containers[*].securityContext.runAsUser - spec.containers[*].securityContext.runAsGroup Type object Required name Property Type Description name string Name is the name of the operating system. The currently supported values are linux and windows. Additional values may be defined in the future and can be one of: https://github.com/opencontainers/runtime-spec/blob/master/config.md#platform-specific-configuration Clients should expect to handle additional values and treat unrecognized values in this field as os: null 3.1.312. .spec.install.spec.deployments[].spec.template.spec.readinessGates Description If specified, all readiness gates will be evaluated for pod readiness. A pod is ready when all its containers are ready AND all conditions specified in the readiness gates have status equal to "True". More info: https://git.k8s.io/enhancements/keps/sig-network/580-pod-readiness-gates Type array 3.1.313. .spec.install.spec.deployments[].spec.template.spec.readinessGates[] Description PodReadinessGate contains the reference to a pod condition Type object Required conditionType Property Type Description conditionType string ConditionType refers to a condition in the pod's condition list with matching type. 3.1.314. .spec.install.spec.deployments[].spec.template.spec.resourceClaims Description ResourceClaims defines which ResourceClaims must be allocated and reserved before the Pod is allowed to start. The resources will be made available to those containers which consume them by name. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. Type array 3.1.315. .spec.install.spec.deployments[].spec.template.spec.resourceClaims[] Description PodResourceClaim references exactly one ResourceClaim through a ClaimSource. It adds a name to it that uniquely identifies the ResourceClaim inside the Pod. Containers that need access to the ResourceClaim reference it with this name. Type object Required name Property Type Description name string Name uniquely identifies this resource claim inside the pod. This must be a DNS_LABEL. source object Source describes where to find the ResourceClaim. 3.1.316. .spec.install.spec.deployments[].spec.template.spec.resourceClaims[].source Description Source describes where to find the ResourceClaim.
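As a minimal sketch (the names are hypothetical, and the DynamicResourceAllocation feature gate must be enabled as noted above), a pod-level resource claim and its source might be declared like this:

resourceClaims:
- name: example-claim                                    # hypothetical claim name; must be a DNS_LABEL
  source:
    resourceClaimTemplateName: example-claim-template    # hypothetical ResourceClaimTemplate in the same namespace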
Type object Property Type Description resourceClaimName string ResourceClaimName is the name of a ResourceClaim object in the same namespace as this pod. resourceClaimTemplateName string ResourceClaimTemplateName is the name of a ResourceClaimTemplate object in the same namespace as this pod. The template will be used to create a new ResourceClaim, which will be bound to this pod. When this pod is deleted, the ResourceClaim will also be deleted. The name of the ResourceClaim will be <pod name>-<resource name>, where <resource name> is the PodResourceClaim.Name. Pod validation will reject the pod if the concatenated name is not valid for a ResourceClaim (e.g. too long). An existing ResourceClaim with that name that is not owned by the pod will not be used for the pod to avoid using an unrelated resource by mistake. Scheduling and pod startup are then blocked until the unrelated ResourceClaim is removed. This field is immutable and no changes will be made to the corresponding ResourceClaim by the control plane after creating the ResourceClaim. 3.1.317. .spec.install.spec.deployments[].spec.template.spec.schedulingGates Description SchedulingGates is an opaque list of values that if specified will block scheduling the pod. If schedulingGates is not empty, the pod will stay in the SchedulingGated state and the scheduler will not attempt to schedule the pod. SchedulingGates can only be set at pod creation time, and be removed only afterwards. This is a beta feature enabled by the PodSchedulingReadiness feature gate. Type array 3.1.318. .spec.install.spec.deployments[].spec.template.spec.schedulingGates[] Description PodSchedulingGate is associated to a Pod to guard its scheduling. Type object Required name Property Type Description name string Name of the scheduling gate. Each scheduling gate must have a unique name field. 3.1.319. .spec.install.spec.deployments[].spec.template.spec.securityContext Description SecurityContext holds pod-level security attributes and common container settings. Optional: Defaults to empty. See type description for default values of each field. Type object Property Type Description fsGroup integer A special supplemental group that applies to all containers in a pod. Some volume types allow the Kubelet to change the ownership of that volume to be owned by the pod: 1. The owning GID will be the FSGroup 2. The setgid bit is set (new files created in the volume will be owned by FSGroup) 3. The permission bits are OR'd with rw-rw---- If unset, the Kubelet will not modify the ownership and permissions of any volume. Note that this field cannot be set when spec.os.name is windows. fsGroupChangePolicy string fsGroupChangePolicy defines behavior of changing ownership and permission of the volume before being exposed inside Pod. This field will only apply to volume types which support fsGroup based ownership(and permissions). It will have no effect on ephemeral volume types such as: secret, configmaps and emptydir. Valid values are "OnRootMismatch" and "Always". If not specified, "Always" is used. Note that this field cannot be set when spec.os.name is windows. runAsGroup integer The GID to run the entrypoint of the container process. Uses runtime default if unset. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence for that container. Note that this field cannot be set when spec.os.name is windows. runAsNonRoot boolean Indicates that the container must run as a non-root user. 
If true, the Kubelet will validate the image at runtime to ensure that it does not run as UID 0 (root) and fail to start the container if it does. If unset or false, no such validation will be performed. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. runAsUser integer The UID to run the entrypoint of the container process. Defaults to user specified in image metadata if unspecified. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence for that container. Note that this field cannot be set when spec.os.name is windows. seLinuxOptions object The SELinux context to be applied to all containers. If unspecified, the container runtime will allocate a random SELinux context for each container. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence for that container. Note that this field cannot be set when spec.os.name is windows. seccompProfile object The seccomp options to use by the containers in this pod. Note that this field cannot be set when spec.os.name is windows. supplementalGroups array (integer) A list of groups applied to the first process run in each container, in addition to the container's primary GID, the fsGroup (if specified), and group memberships defined in the container image for the uid of the container process. If unspecified, no additional groups are added to any container. Note that group memberships defined in the container image for the uid of the container process are still effective, even if they are not included in this list. Note that this field cannot be set when spec.os.name is windows. sysctls array Sysctls hold a list of namespaced sysctls used for the pod. Pods with unsupported sysctls (by the container runtime) might fail to launch. Note that this field cannot be set when spec.os.name is windows. sysctls[] object Sysctl defines a kernel parameter to be set windowsOptions object The Windows specific settings applied to all containers. If unspecified, the options within a container's SecurityContext will be used. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is linux. 3.1.320. .spec.install.spec.deployments[].spec.template.spec.securityContext.seLinuxOptions Description The SELinux context to be applied to all containers. If unspecified, the container runtime will allocate a random SELinux context for each container. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence for that container. Note that this field cannot be set when spec.os.name is windows. Type object Property Type Description level string Level is SELinux level label that applies to the container. role string Role is a SELinux role label that applies to the container. type string Type is a SELinux type label that applies to the container. user string User is a SELinux user label that applies to the container. 3.1.321. .spec.install.spec.deployments[].spec.template.spec.securityContext.seccompProfile Description The seccomp options to use by the containers in this pod. Note that this field cannot be set when spec.os.name is windows. 
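A restrictive pod-level securityContext built from the fields described above might be sketched as follows. The numeric IDs and the SELinux level are arbitrary examples chosen for illustration, not required values.

```yaml
# Hypothetical fragment of .spec.install.spec.deployments[].spec.template.spec
securityContext:
  runAsNonRoot: true
  runAsUser: 1001                    # arbitrary non-root UID for illustration
  runAsGroup: 1001
  fsGroup: 2000                      # volumes that support it are chown'd to this GID
  fsGroupChangePolicy: OnRootMismatch
  seLinuxOptions:
    level: "s0:c123,c456"            # example MCS level only
  seccompProfile:
    type: RuntimeDefault             # use the container runtime's default seccomp profile
```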
Type object Required type Property Type Description localhostProfile string localhostProfile indicates a profile defined in a file on the node should be used. The profile must be preconfigured on the node to work. Must be a descending path, relative to the kubelet's configured seccomp profile location. Must only be set if type is "Localhost". type string type indicates which kind of seccomp profile will be applied. Valid options are: Localhost - a profile defined in a file on the node should be used. RuntimeDefault - the container runtime default profile should be used. Unconfined - no profile should be applied. 3.1.322. .spec.install.spec.deployments[].spec.template.spec.securityContext.sysctls Description Sysctls hold a list of namespaced sysctls used for the pod. Pods with unsupported sysctls (by the container runtime) might fail to launch. Note that this field cannot be set when spec.os.name is windows. Type array 3.1.323. .spec.install.spec.deployments[].spec.template.spec.securityContext.sysctls[] Description Sysctl defines a kernel parameter to be set Type object Required name value Property Type Description name string Name of a property to set value string Value of a property to set 3.1.324. .spec.install.spec.deployments[].spec.template.spec.securityContext.windowsOptions Description The Windows specific settings applied to all containers. If unspecified, the options within a container's SecurityContext will be used. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is linux. Type object Property Type Description gmsaCredentialSpec string GMSACredentialSpec is where the GMSA admission webhook ( https://github.com/kubernetes-sigs/windows-gmsa ) inlines the contents of the GMSA credential spec named by the GMSACredentialSpecName field. gmsaCredentialSpecName string GMSACredentialSpecName is the name of the GMSA credential spec to use. hostProcess boolean HostProcess determines if a container should be run as a 'Host Process' container. This field is alpha-level and will only be honored by components that enable the WindowsHostProcessContainers feature flag. Setting this field without the feature flag will result in errors when validating the Pod. All of a Pod's containers must have the same effective HostProcess value (it is not allowed to have a mix of HostProcess containers and non-HostProcess containers). In addition, if HostProcess is true then HostNetwork must also be set to true. runAsUserName string The UserName in Windows to run the entrypoint of the container process. Defaults to the user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. 3.1.325. .spec.install.spec.deployments[].spec.template.spec.tolerations Description If specified, the pod's tolerations. Type array 3.1.326. .spec.install.spec.deployments[].spec.template.spec.tolerations[] Description The pod this Toleration is attached to tolerates any taint that matches the triple <key,value,effect> using the matching operator <operator>. Type object Property Type Description effect string Effect indicates the taint effect to match. Empty means match all taint effects. When specified, allowed values are NoSchedule, PreferNoSchedule and NoExecute. key string Key is the taint key that the toleration applies to. Empty means match all taint keys. 
If the key is empty, operator must be Exists; this combination means to match all values and all keys. operator string Operator represents a key's relationship to the value. Valid operators are Exists and Equal. Defaults to Equal. Exists is equivalent to wildcard for value, so that a pod can tolerate all taints of a particular category. tolerationSeconds integer TolerationSeconds represents the period of time the toleration (which must be of effect NoExecute, otherwise this field is ignored) tolerates the taint. By default, it is not set, which means tolerate the taint forever (do not evict). Zero and negative values will be treated as 0 (evict immediately) by the system. value string Value is the taint value the toleration matches to. If the operator is Exists, the value should be empty, otherwise just a regular string. 3.1.327. .spec.install.spec.deployments[].spec.template.spec.topologySpreadConstraints Description TopologySpreadConstraints describes how a group of pods ought to spread across topology domains. Scheduler will schedule pods in a way which abides by the constraints. All topologySpreadConstraints are ANDed. Type array 3.1.328. .spec.install.spec.deployments[].spec.template.spec.topologySpreadConstraints[] Description TopologySpreadConstraint specifies how to spread matching pods among the given topology. Type object Required maxSkew topologyKey whenUnsatisfiable Property Type Description labelSelector object LabelSelector is used to find matching pods. Pods that match this label selector are counted to determine the number of pods in their corresponding topology domain. matchLabelKeys array (string) MatchLabelKeys is a set of pod label keys to select the pods over which spreading will be calculated. The keys are used to lookup values from the incoming pod labels, those key-value labels are ANDed with labelSelector to select the group of existing pods over which spreading will be calculated for the incoming pod. The same key is forbidden to exist in both MatchLabelKeys and LabelSelector. MatchLabelKeys cannot be set when LabelSelector isn't set. Keys that don't exist in the incoming pod labels will be ignored. A null or empty list means only match against labelSelector. This is a beta field and requires the MatchLabelKeysInPodTopologySpread feature gate to be enabled (enabled by default). maxSkew integer MaxSkew describes the degree to which pods may be unevenly distributed. When whenUnsatisfiable=DoNotSchedule , it is the maximum permitted difference between the number of matching pods in the target topology and the global minimum. The global minimum is the minimum number of matching pods in an eligible domain or zero if the number of eligible domains is less than MinDomains. For example, in a 3-zone cluster, MaxSkew is set to 1, and pods with the same labelSelector spread as 2/2/1: In this case, the global minimum is 1. | zone1 | zone2 | zone3 | | P P | P P | P | - if MaxSkew is 1, incoming pod can only be scheduled to zone3 to become 2/2/2; scheduling it onto zone1(zone2) would make the ActualSkew(3-1) on zone1(zone2) violate MaxSkew(1). - if MaxSkew is 2, incoming pod can be scheduled onto any zone. When whenUnsatisfiable=ScheduleAnyway , it is used to give higher precedence to topologies that satisfy it. It's a required field. Default value is 1 and 0 is not allowed. minDomains integer MinDomains indicates a minimum number of eligible domains. 
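The toleration fields above combine as in this sketch; the taint keys and values are placeholders.

```yaml
# Hypothetical fragment of .spec.install.spec.deployments[].spec.template.spec
tolerations:
  - key: node-role.kubernetes.io/infra    # example taint key
    operator: Exists                       # tolerate the taint regardless of its value
    effect: NoSchedule
  - key: example.com/maintenance           # placeholder taint key
    operator: Equal
    value: "true"
    effect: NoExecute
    tolerationSeconds: 300                 # evict 5 minutes after the taint appears
```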
When the number of eligible domains with matching topology keys is less than minDomains, Pod Topology Spread treats "global minimum" as 0, and then the calculation of Skew is performed. And when the number of eligible domains with matching topology keys equals or greater than minDomains, this value has no effect on scheduling. As a result, when the number of eligible domains is less than minDomains, scheduler won't schedule more than maxSkew Pods to those domains. If value is nil, the constraint behaves as if MinDomains is equal to 1. Valid values are integers greater than 0. When value is not nil, WhenUnsatisfiable must be DoNotSchedule. For example, in a 3-zone cluster, MaxSkew is set to 2, MinDomains is set to 5 and pods with the same labelSelector spread as 2/2/2: | zone1 | zone2 | zone3 | | P P | P P | P P | The number of domains is less than 5(MinDomains), so "global minimum" is treated as 0. In this situation, new pod with the same labelSelector cannot be scheduled, because computed skew will be 3(3 - 0) if new Pod is scheduled to any of the three zones, it will violate MaxSkew. This is a beta field and requires the MinDomainsInPodTopologySpread feature gate to be enabled (enabled by default). nodeAffinityPolicy string NodeAffinityPolicy indicates how we will treat Pod's nodeAffinity/nodeSelector when calculating pod topology spread skew. Options are: - Honor: only nodes matching nodeAffinity/nodeSelector are included in the calculations. - Ignore: nodeAffinity/nodeSelector are ignored. All nodes are included in the calculations. If this value is nil, the behavior is equivalent to the Honor policy. This is a beta-level feature default enabled by the NodeInclusionPolicyInPodTopologySpread feature flag. nodeTaintsPolicy string NodeTaintsPolicy indicates how we will treat node taints when calculating pod topology spread skew. Options are: - Honor: nodes without taints, along with tainted nodes for which the incoming pod has a toleration, are included. - Ignore: node taints are ignored. All nodes are included. If this value is nil, the behavior is equivalent to the Ignore policy. This is a beta-level feature default enabled by the NodeInclusionPolicyInPodTopologySpread feature flag. topologyKey string TopologyKey is the key of node labels. Nodes that have a label with this key and identical values are considered to be in the same topology. We consider each <key, value> as a "bucket", and try to put balanced number of pods into each bucket. We define a domain as a particular instance of a topology. Also, we define an eligible domain as a domain whose nodes meet the requirements of nodeAffinityPolicy and nodeTaintsPolicy. e.g. If TopologyKey is "kubernetes.io/hostname", each Node is a domain of that topology. And, if TopologyKey is "topology.kubernetes.io/zone", each zone is a domain of that topology. It's a required field. whenUnsatisfiable string WhenUnsatisfiable indicates how to deal with a pod if it doesn't satisfy the spread constraint. - DoNotSchedule (default) tells the scheduler not to schedule it. - ScheduleAnyway tells the scheduler to schedule the pod in any location, but giving higher precedence to topologies that would help reduce the skew. A constraint is considered "Unsatisfiable" for an incoming pod if and only if every possible node assignment for that pod would violate "MaxSkew" on some topology. 
For example, in a 3-zone cluster, MaxSkew is set to 1, and pods with the same labelSelector spread as 3/1/1: | zone1 | zone2 | zone3 | | P P P | P | P | If WhenUnsatisfiable is set to DoNotSchedule, incoming pod can only be scheduled to zone2(zone3) to become 3/2/1(3/1/2) as ActualSkew(2-1) on zone2(zone3) satisfies MaxSkew(1). In other words, the cluster can still be imbalanced, but scheduler won't make it more imbalanced. It's a required field. 3.1.329. .spec.install.spec.deployments[].spec.template.spec.topologySpreadConstraints[].labelSelector Description LabelSelector is used to find matching pods. Pods that match this label selector are counted to determine the number of pods in their corresponding topology domain. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 3.1.330. .spec.install.spec.deployments[].spec.template.spec.topologySpreadConstraints[].labelSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 3.1.331. .spec.install.spec.deployments[].spec.template.spec.topologySpreadConstraints[].labelSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 3.1.332. .spec.install.spec.deployments[].spec.template.spec.volumes Description List of volumes that can be mounted by containers belonging to the pod. More info: https://kubernetes.io/docs/concepts/storage/volumes Type array 3.1.333. .spec.install.spec.deployments[].spec.template.spec.volumes[] Description Volume represents a named volume in a pod that may be accessed by any container in the pod. Type object Required name Property Type Description awsElasticBlockStore object awsElasticBlockStore represents an AWS Disk resource that is attached to a kubelet's host machine and then exposed to the pod. More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore azureDisk object azureDisk represents an Azure Data Disk mount on the host and bind mount to the pod. azureFile object azureFile represents an Azure File Service mount on the host and bind mount to the pod. cephfs object cephFS represents a Ceph FS mount on the host that shares a pod's lifetime cinder object cinder represents a cinder volume attached and mounted on kubelets host machine. 
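Putting the constraint fields together, a zone-spreading rule for the operator pods might look like the sketch below; the pod label used in the selector is a placeholder.

```yaml
# Hypothetical fragment of .spec.install.spec.deployments[].spec.template.spec
topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: topology.kubernetes.io/zone   # each zone is one domain
    whenUnsatisfiable: ScheduleAnyway          # prefer, but do not require, even spread
    labelSelector:
      matchLabels:
        app: example-operator                  # placeholder pod label
    nodeAffinityPolicy: Honor
    nodeTaintsPolicy: Ignore
```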
More info: https://examples.k8s.io/mysql-cinder-pd/README.md configMap object configMap represents a configMap that should populate this volume csi object csi (Container Storage Interface) represents ephemeral storage that is handled by certain external CSI drivers (Beta feature). downwardAPI object downwardAPI represents downward API about the pod that should populate this volume emptyDir object emptyDir represents a temporary directory that shares a pod's lifetime. More info: https://kubernetes.io/docs/concepts/storage/volumes#emptydir ephemeral object ephemeral represents a volume that is handled by a cluster storage driver. The volume's lifecycle is tied to the pod that defines it - it will be created before the pod starts, and deleted when the pod is removed. Use this if: a) the volume is only needed while the pod runs, b) features of normal volumes like restoring from snapshot or capacity tracking are needed, c) the storage driver is specified through a storage class, and d) the storage driver supports dynamic volume provisioning through a PersistentVolumeClaim (see EphemeralVolumeSource for more information on the connection between this volume type and PersistentVolumeClaim). Use PersistentVolumeClaim or one of the vendor-specific APIs for volumes that persist for longer than the lifecycle of an individual pod. Use CSI for light-weight local ephemeral volumes if the CSI driver is meant to be used that way - see the documentation of the driver for more information. A pod can use both types of ephemeral volumes and persistent volumes at the same time. fc object fc represents a Fibre Channel resource that is attached to a kubelet's host machine and then exposed to the pod. flexVolume object flexVolume represents a generic volume resource that is provisioned/attached using an exec based plugin. flocker object flocker represents a Flocker volume attached to a kubelet's host machine. This depends on the Flocker control service being running gcePersistentDisk object gcePersistentDisk represents a GCE Disk resource that is attached to a kubelet's host machine and then exposed to the pod. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk gitRepo object gitRepo represents a git repository at a particular revision. DEPRECATED: GitRepo is deprecated. To provision a container with a git repo, mount an EmptyDir into an InitContainer that clones the repo using git, then mount the EmptyDir into the Pod's container. glusterfs object glusterfs represents a Glusterfs mount on the host that shares a pod's lifetime. More info: https://examples.k8s.io/volumes/glusterfs/README.md hostPath object hostPath represents a pre-existing file or directory on the host machine that is directly exposed to the container. This is generally used for system agents or other privileged things that are allowed to see the host machine. Most containers will NOT need this. More info: https://kubernetes.io/docs/concepts/storage/volumes#hostpath --- TODO(jonesdl) We need to restrict who can use host directory mounts and who can/can not mount host directories as read/write. iscsi object iscsi represents an ISCSI Disk resource that is attached to a kubelet's host machine and then exposed to the pod. More info: https://examples.k8s.io/volumes/iscsi/README.md name string name of the volume. Must be a DNS_LABEL and unique within the pod. 
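The volume sources listed in this table are mutually exclusive alternatives within a single entry of the volumes array; each entry picks exactly one source and is referenced by name from a container's volumeMounts. A minimal sketch, with all names and paths as placeholders, follows.

```yaml
# Hypothetical fragment of .spec.install.spec.deployments[].spec.template.spec
volumes:
  - name: scratch                   # placeholder name; must be a DNS_LABEL
    emptyDir:
      sizeLimit: 1Gi                # optional cap on local scratch space
  - name: plugin-socket             # placeholder name
    hostPath:
      path: /var/run/example        # placeholder host directory
      type: DirectoryOrCreate
containers:
  - name: example-operator          # placeholder container
    image: registry.example.com/example/operator:latest
    volumeMounts:
      - name: scratch
        mountPath: /tmp/work
      - name: plugin-socket
        mountPath: /var/run/example
```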
More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names nfs object nfs represents an NFS mount on the host that shares a pod's lifetime More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs persistentVolumeClaim object persistentVolumeClaimVolumeSource represents a reference to a PersistentVolumeClaim in the same namespace. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims photonPersistentDisk object photonPersistentDisk represents a PhotonController persistent disk attached and mounted on kubelets host machine portworxVolume object portworxVolume represents a portworx volume attached and mounted on kubelets host machine projected object projected items for all in one resources secrets, configmaps, and downward API quobyte object quobyte represents a Quobyte mount on the host that shares a pod's lifetime rbd object rbd represents a Rados Block Device mount on the host that shares a pod's lifetime. More info: https://examples.k8s.io/volumes/rbd/README.md scaleIO object scaleIO represents a ScaleIO persistent volume attached and mounted on Kubernetes nodes. secret object secret represents a secret that should populate this volume. More info: https://kubernetes.io/docs/concepts/storage/volumes#secret storageos object storageOS represents a StorageOS volume attached and mounted on Kubernetes nodes. vsphereVolume object vsphereVolume represents a vSphere volume attached and mounted on kubelets host machine 3.1.334. .spec.install.spec.deployments[].spec.template.spec.volumes[].awsElasticBlockStore Description awsElasticBlockStore represents an AWS Disk resource that is attached to a kubelet's host machine and then exposed to the pod. More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore Type object Required volumeID Property Type Description fsType string fsType is the filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore TODO: how do we prevent errors in the filesystem from compromising the machine partition integer partition is the partition in the volume that you want to mount. If omitted, the default is to mount by volume name. Examples: For volume /dev/sda1, you specify the partition as "1". Similarly, the volume partition for /dev/sda is "0" (or you can leave the property empty). readOnly boolean readOnly value true will force the readOnly setting in VolumeMounts. More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore volumeID string volumeID is unique ID of the persistent disk resource in AWS (Amazon EBS volume). More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore 3.1.335. .spec.install.spec.deployments[].spec.template.spec.volumes[].azureDisk Description azureDisk represents an Azure Data Disk mount on the host and bind mount to the pod. Type object Required diskName diskURI Property Type Description cachingMode string cachingMode is the Host Caching mode: None, Read Only, Read Write. diskName string diskName is the Name of the data disk in the blob storage diskURI string diskURI is the URI of data disk in the blob storage fsType string fsType is Filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". 
Implicitly inferred to be "ext4" if unspecified. kind string kind expected values are Shared: multiple blob disks per storage account Dedicated: single blob disk per storage account Managed: azure managed data disk (only in managed availability set). defaults to shared readOnly boolean readOnly Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. 3.1.336. .spec.install.spec.deployments[].spec.template.spec.volumes[].azureFile Description azureFile represents an Azure File Service mount on the host and bind mount to the pod. Type object Required secretName shareName Property Type Description readOnly boolean readOnly defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. secretName string secretName is the name of secret that contains Azure Storage Account Name and Key shareName string shareName is the azure share Name 3.1.337. .spec.install.spec.deployments[].spec.template.spec.volumes[].cephfs Description cephFS represents a Ceph FS mount on the host that shares a pod's lifetime Type object Required monitors Property Type Description monitors array (string) monitors is Required: Monitors is a collection of Ceph monitors More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it path string path is Optional: Used as the mounted root, rather than the full Ceph tree, default is / readOnly boolean readOnly is Optional: Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it secretFile string secretFile is Optional: SecretFile is the path to key ring for User, default is /etc/ceph/user.secret More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it secretRef object secretRef is Optional: SecretRef is reference to the authentication secret for User, default is empty. More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it user string user is optional: User is the rados user name, default is admin More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it 3.1.338. .spec.install.spec.deployments[].spec.template.spec.volumes[].cephfs.secretRef Description secretRef is Optional: SecretRef is reference to the authentication secret for User, default is empty. More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? 3.1.339. .spec.install.spec.deployments[].spec.template.spec.volumes[].cinder Description cinder represents a cinder volume attached and mounted on kubelets host machine. More info: https://examples.k8s.io/mysql-cinder-pd/README.md Type object Required volumeID Property Type Description fsType string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. More info: https://examples.k8s.io/mysql-cinder-pd/README.md readOnly boolean readOnly defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. More info: https://examples.k8s.io/mysql-cinder-pd/README.md secretRef object secretRef is optional: points to a secret object containing parameters used to connect to OpenStack. volumeID string volumeID used to identify the volume in cinder. 
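As one illustration of the network-backed sources above, a cephfs volume entry with a referenced authentication Secret might be sketched as follows; the monitor addresses, path, and Secret name are placeholders.

```yaml
# Hypothetical volume entry under .spec.install.spec.deployments[].spec.template.spec.volumes
- name: shared-data                  # placeholder volume name
  cephfs:
    monitors:                        # placeholder monitor addresses
      - 192.168.0.10:6789
      - 192.168.0.11:6789
    path: /exports/example           # mounted root inside the Ceph tree
    user: admin
    secretRef:
      name: ceph-secret              # placeholder Secret holding the keyring
    readOnly: true
```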
More info: https://examples.k8s.io/mysql-cinder-pd/README.md 3.1.340. .spec.install.spec.deployments[].spec.template.spec.volumes[].cinder.secretRef Description secretRef is optional: points to a secret object containing parameters used to connect to OpenStack. Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? 3.1.341. .spec.install.spec.deployments[].spec.template.spec.volumes[].configMap Description configMap represents a configMap that should populate this volume Type object Property Type Description defaultMode integer defaultMode is optional: mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Defaults to 0644. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. items array items if unspecified, each key-value pair in the Data field of the referenced ConfigMap will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the ConfigMap, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. items[] object Maps a string key to a path within a volume. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean optional specify whether the ConfigMap or its keys must be defined 3.1.342. .spec.install.spec.deployments[].spec.template.spec.volumes[].configMap.items Description items if unspecified, each key-value pair in the Data field of the referenced ConfigMap will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the ConfigMap, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. Type array 3.1.343. .spec.install.spec.deployments[].spec.template.spec.volumes[].configMap.items[] Description Maps a string key to a path within a volume. Type object Required key path Property Type Description key string key is the key to project. mode integer mode is Optional: mode bits used to set permissions on this file. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. path string path is the relative path of the file to map the key to. May not be an absolute path. May not contain the path element '..'. May not start with the string '..'. 3.1.344. 
.spec.install.spec.deployments[].spec.template.spec.volumes[].csi Description csi (Container Storage Interface) represents ephemeral storage that is handled by certain external CSI drivers (Beta feature). Type object Required driver Property Type Description driver string driver is the name of the CSI driver that handles this volume. Consult with your admin for the correct name as registered in the cluster. fsType string fsType to mount. Ex. "ext4", "xfs", "ntfs". If not provided, the empty value is passed to the associated CSI driver which will determine the default filesystem to apply. nodePublishSecretRef object nodePublishSecretRef is a reference to the secret object containing sensitive information to pass to the CSI driver to complete the CSI NodePublishVolume and NodeUnpublishVolume calls. This field is optional, and may be empty if no secret is required. If the secret object contains more than one secret, all secret references are passed. readOnly boolean readOnly specifies a read-only configuration for the volume. Defaults to false (read/write). volumeAttributes object (string) volumeAttributes stores driver-specific properties that are passed to the CSI driver. Consult your driver's documentation for supported values. 3.1.345. .spec.install.spec.deployments[].spec.template.spec.volumes[].csi.nodePublishSecretRef Description nodePublishSecretRef is a reference to the secret object containing sensitive information to pass to the CSI driver to complete the CSI NodePublishVolume and NodeUnpublishVolume calls. This field is optional, and may be empty if no secret is required. If the secret object contains more than one secret, all secret references are passed. Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? 3.1.346. .spec.install.spec.deployments[].spec.template.spec.volumes[].downwardAPI Description downwardAPI represents downward API about the pod that should populate this volume Type object Property Type Description defaultMode integer Optional: mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Defaults to 0644. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. items array Items is a list of downward API volume file items[] object DownwardAPIVolumeFile represents information to create the file containing the pod field 3.1.347. .spec.install.spec.deployments[].spec.template.spec.volumes[].downwardAPI.items Description Items is a list of downward API volume file Type array 3.1.348. .spec.install.spec.deployments[].spec.template.spec.volumes[].downwardAPI.items[] Description DownwardAPIVolumeFile represents information to create the file containing the pod field Type object Required path Property Type Description fieldRef object Required: Selects a field of the pod: only annotations, labels, name and namespace are supported. mode integer Optional: mode bits used to set permissions on this file, must be an octal value between 0000 and 0777 or a decimal value between 0 and 511.
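A sketch combining a configMap volume (with projected items and an octal defaultMode) and a downwardAPI volume follows; all names are placeholders.

```yaml
# Hypothetical volume entries under .spec.install.spec.deployments[].spec.template.spec.volumes
- name: operator-config                # placeholder names throughout
  configMap:
    name: example-operator-config
    defaultMode: 0440                  # octal in YAML; a JSON manifest must use the decimal 288
    items:
      - key: config.yaml
        path: config/config.yaml
    optional: true                     # do not fail volume setup if the ConfigMap is absent
- name: pod-info
  downwardAPI:
    items:
      - path: labels
        fieldRef:
          fieldPath: metadata.labels
      - path: cpu-limit
        resourceFieldRef:
          containerName: example-operator
          resource: limits.cpu
          divisor: 1m
```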
YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. path string Required: Path is the relative path name of the file to be created. Must not be absolute or contain the '..' path. Must be utf-8 encoded. The first item of the relative path must not start with '..' resourceFieldRef object Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, requests.cpu and requests.memory) are currently supported. 3.1.349. .spec.install.spec.deployments[].spec.template.spec.volumes[].downwardAPI.items[].fieldRef Description Required: Selects a field of the pod: only annotations, labels, name and namespace are supported. Type object Required fieldPath Property Type Description apiVersion string Version of the schema the FieldPath is written in terms of, defaults to "v1". fieldPath string Path of the field to select in the specified API version. 3.1.350. .spec.install.spec.deployments[].spec.template.spec.volumes[].downwardAPI.items[].resourceFieldRef Description Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, requests.cpu and requests.memory) are currently supported. Type object Required resource Property Type Description containerName string Container name: required for volumes, optional for env vars divisor integer-or-string Specifies the output format of the exposed resources, defaults to "1" resource string Required: resource to select 3.1.351. .spec.install.spec.deployments[].spec.template.spec.volumes[].emptyDir Description emptyDir represents a temporary directory that shares a pod's lifetime. More info: https://kubernetes.io/docs/concepts/storage/volumes#emptydir Type object Property Type Description medium string medium represents what type of storage medium should back this directory. The default is "" which means to use the node's default medium. Must be an empty string (default) or Memory. More info: https://kubernetes.io/docs/concepts/storage/volumes#emptydir sizeLimit integer-or-string sizeLimit is the total amount of local storage required for this EmptyDir volume. The size limit is also applicable for memory medium. The maximum usage on memory medium EmptyDir would be the minimum value between the SizeLimit specified here and the sum of memory limits of all containers in a pod. The default is nil which means that the limit is undefined. More info: https://kubernetes.io/docs/concepts/storage/volumes#emptydir 3.1.352. .spec.install.spec.deployments[].spec.template.spec.volumes[].ephemeral Description ephemeral represents a volume that is handled by a cluster storage driver. The volume's lifecycle is tied to the pod that defines it - it will be created before the pod starts, and deleted when the pod is removed. Use this if: a) the volume is only needed while the pod runs, b) features of normal volumes like restoring from snapshot or capacity tracking are needed, c) the storage driver is specified through a storage class, and d) the storage driver supports dynamic volume provisioning through a PersistentVolumeClaim (see EphemeralVolumeSource for more information on the connection between this volume type and PersistentVolumeClaim). Use PersistentVolumeClaim or one of the vendor-specific APIs for volumes that persist for longer than the lifecycle of an individual pod. 
Use CSI for light-weight local ephemeral volumes if the CSI driver is meant to be used that way - see the documentation of the driver for more information. A pod can use both types of ephemeral volumes and persistent volumes at the same time. Type object Property Type Description volumeClaimTemplate object Will be used to create a stand-alone PVC to provision the volume. The pod in which this EphemeralVolumeSource is embedded will be the owner of the PVC, i.e. the PVC will be deleted together with the pod. The name of the PVC will be <pod name>-<volume name> where <volume name> is the name from the PodSpec.Volumes array entry. Pod validation will reject the pod if the concatenated name is not valid for a PVC (for example, too long). An existing PVC with that name that is not owned by the pod will not be used for the pod to avoid using an unrelated volume by mistake. Starting the pod is then blocked until the unrelated PVC is removed. If such a pre-created PVC is meant to be used by the pod, the PVC has to be updated with an owner reference to the pod once the pod exists. Normally this should not be necessary, but it may be useful when manually reconstructing a broken cluster. This field is read-only and no changes will be made by Kubernetes to the PVC after it has been created. Required, must not be nil. 3.1.353. .spec.install.spec.deployments[].spec.template.spec.volumes[].ephemeral.volumeClaimTemplate Description Will be used to create a stand-alone PVC to provision the volume. The pod in which this EphemeralVolumeSource is embedded will be the owner of the PVC, i.e. the PVC will be deleted together with the pod. The name of the PVC will be <pod name>-<volume name> where <volume name> is the name from the PodSpec.Volumes array entry. Pod validation will reject the pod if the concatenated name is not valid for a PVC (for example, too long). An existing PVC with that name that is not owned by the pod will not be used for the pod to avoid using an unrelated volume by mistake. Starting the pod is then blocked until the unrelated PVC is removed. If such a pre-created PVC is meant to be used by the pod, the PVC has to be updated with an owner reference to the pod once the pod exists. Normally this should not be necessary, but it may be useful when manually reconstructing a broken cluster. This field is read-only and no changes will be made by Kubernetes to the PVC after it has been created. Required, must not be nil. Type object Required spec Property Type Description metadata object May contain labels and annotations that will be copied into the PVC when creating it. No other fields are allowed and will be rejected during validation. spec object The specification for the PersistentVolumeClaim. The entire content is copied unchanged into the PVC that gets created from this template. The same fields as in a PersistentVolumeClaim are also valid here. 3.1.354. .spec.install.spec.deployments[].spec.template.spec.volumes[].ephemeral.volumeClaimTemplate.metadata Description May contain labels and annotations that will be copied into the PVC when creating it. No other fields are allowed and will be rejected during validation. Type object 3.1.355. .spec.install.spec.deployments[].spec.template.spec.volumes[].ephemeral.volumeClaimTemplate.spec Description The specification for the PersistentVolumeClaim. The entire content is copied unchanged into the PVC that gets created from this template. The same fields as in a PersistentVolumeClaim are also valid here.
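A minimal ephemeral volume sketch, assuming a StorageClass that supports dynamic provisioning, is shown below; the labels, StorageClass name, and size are placeholders, and the claim spec fields used here are described in the sections that follow.

```yaml
# Hypothetical volume entry under .spec.install.spec.deployments[].spec.template.spec.volumes
- name: cache                                   # placeholder volume name
  ephemeral:
    volumeClaimTemplate:
      metadata:
        labels:
          app: example-operator                 # copied into the generated PVC
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: example-fast          # placeholder StorageClass
        resources:
          requests:
            storage: 2Gi
```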
Type object Property Type Description accessModes array (string) accessModes contains the desired access modes the volume should have. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1 dataSource object dataSource field can be used to specify either: * An existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot) * An existing PVC (PersistentVolumeClaim) If the provisioner or an external controller can support the specified data source, it will create a new volume based on the contents of the specified data source. When the AnyVolumeDataSource feature gate is enabled, dataSource contents will be copied to dataSourceRef, and dataSourceRef contents will be copied to dataSource when dataSourceRef.namespace is not specified. If the namespace is specified, then dataSourceRef will not be copied to dataSource. dataSourceRef object dataSourceRef specifies the object from which to populate the volume with data, if a non-empty volume is desired. This may be any object from a non-empty API group (non core object) or a PersistentVolumeClaim object. When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. This field will replace the functionality of the dataSource field and as such if both fields are non-empty, they must have the same value. For backwards compatibility, when namespace isn't specified in dataSourceRef, both fields (dataSource and dataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. When namespace is specified in dataSourceRef, dataSource isn't set to the same value and must be empty. There are three important differences between dataSource and dataSourceRef: * While dataSource only allows two specific types of objects, dataSourceRef allows any non-core object, as well as PersistentVolumeClaim objects. * While dataSource ignores disallowed values (dropping them), dataSourceRef preserves all values, and generates an error if a disallowed value is specified. * While dataSource only allows local objects, dataSourceRef allows objects in any namespaces. (Beta) Using this field requires the AnyVolumeDataSource feature gate to be enabled. (Alpha) Using the namespace field of dataSourceRef requires the CrossNamespaceVolumeDataSource feature gate to be enabled. resources object resources represents the minimum resources the volume should have. If RecoverVolumeExpansionFailure feature is enabled users are allowed to specify resource requirements that are lower than value but must still be higher than capacity recorded in the status field of the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources selector object selector is a label query over volumes to consider for binding. storageClassName string storageClassName is the name of the StorageClass required by the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1 volumeMode string volumeMode defines what type of volume is required by the claim. Value of Filesystem is implied when not included in claim spec. volumeName string volumeName is the binding reference to the PersistentVolume backing this claim. 3.1.356. 
.spec.install.spec.deployments[].spec.template.spec.volumes[].ephemeral.volumeClaimTemplate.spec.dataSource Description dataSource field can be used to specify either: * An existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot) * An existing PVC (PersistentVolumeClaim) If the provisioner or an external controller can support the specified data source, it will create a new volume based on the contents of the specified data source. When the AnyVolumeDataSource feature gate is enabled, dataSource contents will be copied to dataSourceRef, and dataSourceRef contents will be copied to dataSource when dataSourceRef.namespace is not specified. If the namespace is specified, then dataSourceRef will not be copied to dataSource. Type object Required kind name Property Type Description apiGroup string APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required. kind string Kind is the type of resource being referenced name string Name is the name of resource being referenced 3.1.357. .spec.install.spec.deployments[].spec.template.spec.volumes[].ephemeral.volumeClaimTemplate.spec.dataSourceRef Description dataSourceRef specifies the object from which to populate the volume with data, if a non-empty volume is desired. This may be any object from a non-empty API group (non core object) or a PersistentVolumeClaim object. When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. This field will replace the functionality of the dataSource field and as such if both fields are non-empty, they must have the same value. For backwards compatibility, when namespace isn't specified in dataSourceRef, both fields (dataSource and dataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. When namespace is specified in dataSourceRef, dataSource isn't set to the same value and must be empty. There are three important differences between dataSource and dataSourceRef: * While dataSource only allows two specific types of objects, dataSourceRef allows any non-core object, as well as PersistentVolumeClaim objects. * While dataSource ignores disallowed values (dropping them), dataSourceRef preserves all values, and generates an error if a disallowed value is specified. * While dataSource only allows local objects, dataSourceRef allows objects in any namespaces. (Beta) Using this field requires the AnyVolumeDataSource feature gate to be enabled. (Alpha) Using the namespace field of dataSourceRef requires the CrossNamespaceVolumeDataSource feature gate to be enabled. Type object Required kind name Property Type Description apiGroup string APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required. kind string Kind is the type of resource being referenced name string Name is the name of resource being referenced namespace string Namespace is the namespace of resource being referenced Note that when a namespace is specified, a gateway.networking.k8s.io/ReferenceGrant object is required in the referent namespace to allow that namespace's owner to accept the reference. See the ReferenceGrant documentation for details. 
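As a sketch of pre-populating a claim through dataSourceRef, assuming a CSI driver with snapshot support and the relevant feature gates enabled, a claim spec might look like this; the StorageClass and snapshot names are placeholders.

```yaml
# Hypothetical volumeClaimTemplate.spec pre-populated from a VolumeSnapshot
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: example-csi                # placeholder StorageClass backed by a CSI driver
  resources:
    requests:
      storage: 10Gi
  dataSourceRef:
    apiGroup: snapshot.storage.k8s.io
    kind: VolumeSnapshot
    name: example-snapshot                     # placeholder snapshot in the same namespace
```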
(Alpha) This field requires the CrossNamespaceVolumeDataSource feature gate to be enabled. 3.1.358. .spec.install.spec.deployments[].spec.template.spec.volumes[].ephemeral.volumeClaimTemplate.spec.resources Description resources represents the minimum resources the volume should have. If RecoverVolumeExpansionFailure feature is enabled users are allowed to specify resource requirements that are lower than value but must still be higher than capacity recorded in the status field of the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources Type object Property Type Description claims array Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. claims[] object ResourceClaim references one entry in PodSpec.ResourceClaims. limits integer-or-string Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ requests integer-or-string Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. Requests cannot exceed Limits. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ 3.1.359. .spec.install.spec.deployments[].spec.template.spec.volumes[].ephemeral.volumeClaimTemplate.spec.resources.claims Description Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. Type array 3.1.360. .spec.install.spec.deployments[].spec.template.spec.volumes[].ephemeral.volumeClaimTemplate.spec.resources.claims[] Description ResourceClaim references one entry in PodSpec.ResourceClaims. Type object Required name Property Type Description name string Name must match the name of one entry in pod.spec.resourceClaims of the Pod where this field is used. It makes that resource available inside a container. 3.1.361. .spec.install.spec.deployments[].spec.template.spec.volumes[].ephemeral.volumeClaimTemplate.spec.selector Description selector is a label query over volumes to consider for binding. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 3.1.362. .spec.install.spec.deployments[].spec.template.spec.volumes[].ephemeral.volumeClaimTemplate.spec.selector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 3.1.363. 
.spec.install.spec.deployments[].spec.template.spec.volumes[].ephemeral.volumeClaimTemplate.spec.selector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 3.1.364. .spec.install.spec.deployments[].spec.template.spec.volumes[].fc Description fc represents a Fibre Channel resource that is attached to a kubelet's host machine and then exposed to the pod. Type object Property Type Description fsType string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. TODO: how do we prevent errors in the filesystem from compromising the machine lun integer lun is Optional: FC target lun number readOnly boolean readOnly is Optional: Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. targetWWNs array (string) targetWWNs is Optional: FC target worldwide names (WWNs) wwids array (string) wwids Optional: FC volume world wide identifiers (wwids) Either wwids or combination of targetWWNs and lun must be set, but not both simultaneously. 3.1.365. .spec.install.spec.deployments[].spec.template.spec.volumes[].flexVolume Description flexVolume represents a generic volume resource that is provisioned/attached using an exec based plugin. Type object Required driver Property Type Description driver string driver is the name of the driver to use for this volume. fsType string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". The default filesystem depends on FlexVolume script. options object (string) options is Optional: this field holds extra command options if any. readOnly boolean readOnly is Optional: defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. secretRef object secretRef is Optional: secretRef is reference to the secret object containing sensitive information to pass to the plugin scripts. This may be empty if no secret object is specified. If the secret object contains more than one secret, all secrets are passed to the plugin scripts. 3.1.366. .spec.install.spec.deployments[].spec.template.spec.volumes[].flexVolume.secretRef Description secretRef is Optional: secretRef is reference to the secret object containing sensitive information to pass to the plugin scripts. This may be empty if no secret object is specified. If the secret object contains more than one secret, all secrets are passed to the plugin scripts. Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? 3.1.367. .spec.install.spec.deployments[].spec.template.spec.volumes[].flocker Description flocker represents a Flocker volume attached to a kubelet's host machine. 
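The selector and matchExpressions fields described above combine as in this sketch, which restricts the volumes a claim may bind to; the label keys and values are placeholders.

```yaml
# Hypothetical volumeClaimTemplate.spec.selector restricting which volumes may satisfy the claim
selector:
  matchLabels:
    tier: storage                        # placeholder label
  matchExpressions:
    - key: topology.kubernetes.io/zone
      operator: In
      values:
        - us-east-1a                     # placeholder zone names
        - us-east-1b
    - key: example.com/retired
      operator: DoesNotExist
```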
This depends on the Flocker control service being running Type object Property Type Description datasetName string datasetName is Name of the dataset stored as metadata name on the dataset for Flocker should be considered as deprecated datasetUUID string datasetUUID is the UUID of the dataset. This is unique identifier of a Flocker dataset 3.1.368. .spec.install.spec.deployments[].spec.template.spec.volumes[].gcePersistentDisk Description gcePersistentDisk represents a GCE Disk resource that is attached to a kubelet's host machine and then exposed to the pod. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk Type object Required pdName Property Type Description fsType string fsType is filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk TODO: how do we prevent errors in the filesystem from compromising the machine partition integer partition is the partition in the volume that you want to mount. If omitted, the default is to mount by volume name. Examples: For volume /dev/sda1, you specify the partition as "1". Similarly, the volume partition for /dev/sda is "0" (or you can leave the property empty). More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk pdName string pdName is unique name of the PD resource in GCE. Used to identify the disk in GCE. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk readOnly boolean readOnly here will force the ReadOnly setting in VolumeMounts. Defaults to false. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk 3.1.369. .spec.install.spec.deployments[].spec.template.spec.volumes[].gitRepo Description gitRepo represents a git repository at a particular revision. DEPRECATED: GitRepo is deprecated. To provision a container with a git repo, mount an EmptyDir into an InitContainer that clones the repo using git, then mount the EmptyDir into the Pod's container. Type object Required repository Property Type Description directory string directory is the target directory name. Must not contain or start with '..'. If '.' is supplied, the volume directory will be the git repository. Otherwise, if specified, the volume will contain the git repository in the subdirectory with the given name. repository string repository is the URL revision string revision is the commit hash for the specified revision. 3.1.370. .spec.install.spec.deployments[].spec.template.spec.volumes[].glusterfs Description glusterfs represents a Glusterfs mount on the host that shares a pod's lifetime. More info: https://examples.k8s.io/volumes/glusterfs/README.md Type object Required endpoints path Property Type Description endpoints string endpoints is the endpoint name that details Glusterfs topology. More info: https://examples.k8s.io/volumes/glusterfs/README.md#create-a-pod path string path is the Glusterfs volume path. More info: https://examples.k8s.io/volumes/glusterfs/README.md#create-a-pod readOnly boolean readOnly here will force the Glusterfs volume to be mounted with read-only permissions. Defaults to false. More info: https://examples.k8s.io/volumes/glusterfs/README.md#create-a-pod 3.1.371. 
.spec.install.spec.deployments[].spec.template.spec.volumes[].hostPath Description hostPath represents a pre-existing file or directory on the host machine that is directly exposed to the container. This is generally used for system agents or other privileged things that are allowed to see the host machine. Most containers will NOT need this. More info: https://kubernetes.io/docs/concepts/storage/volumes#hostpath --- TODO(jonesdl) We need to restrict who can use host directory mounts and who can/can not mount host directories as read/write. Type object Required path Property Type Description path string path of the directory on the host. If the path is a symlink, it will follow the link to the real path. More info: https://kubernetes.io/docs/concepts/storage/volumes#hostpath type string type for HostPath Volume Defaults to "" More info: https://kubernetes.io/docs/concepts/storage/volumes#hostpath 3.1.372. .spec.install.spec.deployments[].spec.template.spec.volumes[].iscsi Description iscsi represents an ISCSI Disk resource that is attached to a kubelet's host machine and then exposed to the pod. More info: https://examples.k8s.io/volumes/iscsi/README.md Type object Required iqn lun targetPortal Property Type Description chapAuthDiscovery boolean chapAuthDiscovery defines whether support iSCSI Discovery CHAP authentication chapAuthSession boolean chapAuthSession defines whether support iSCSI Session CHAP authentication fsType string fsType is the filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#iscsi TODO: how do we prevent errors in the filesystem from compromising the machine initiatorName string initiatorName is the custom iSCSI Initiator Name. If initiatorName is specified with iscsiInterface simultaneously, new iSCSI interface <target portal>:<volume name> will be created for the connection. iqn string iqn is the target iSCSI Qualified Name. iscsiInterface string iscsiInterface is the interface Name that uses an iSCSI transport. Defaults to 'default' (tcp). lun integer lun represents iSCSI Target Lun number. portals array (string) portals is the iSCSI Target Portal List. The portal is either an IP or ip_addr:port if the port is other than default (typically TCP ports 860 and 3260). readOnly boolean readOnly here will force the ReadOnly setting in VolumeMounts. Defaults to false. secretRef object secretRef is the CHAP Secret for iSCSI target and initiator authentication targetPortal string targetPortal is iSCSI Target Portal. The Portal is either an IP or ip_addr:port if the port is other than default (typically TCP ports 860 and 3260). 3.1.373. .spec.install.spec.deployments[].spec.template.spec.volumes[].iscsi.secretRef Description secretRef is the CHAP Secret for iSCSI target and initiator authentication Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? 3.1.374. .spec.install.spec.deployments[].spec.template.spec.volumes[].nfs Description nfs represents an NFS mount on the host that shares a pod's lifetime More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs Type object Required path server Property Type Description path string path that is exported by the NFS server. 
More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs readOnly boolean readOnly here will force the NFS export to be mounted with read-only permissions. Defaults to false. More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs server string server is the hostname or IP address of the NFS server. More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs 3.1.375. .spec.install.spec.deployments[].spec.template.spec.volumes[].persistentVolumeClaim Description persistentVolumeClaimVolumeSource represents a reference to a PersistentVolumeClaim in the same namespace. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims Type object Required claimName Property Type Description claimName string claimName is the name of a PersistentVolumeClaim in the same namespace as the pod using this volume. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims readOnly boolean readOnly Will force the ReadOnly setting in VolumeMounts. Default false. 3.1.376. .spec.install.spec.deployments[].spec.template.spec.volumes[].photonPersistentDisk Description photonPersistentDisk represents a PhotonController persistent disk attached and mounted on kubelets host machine Type object Required pdID Property Type Description fsType string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. pdID string pdID is the ID that identifies Photon Controller persistent disk 3.1.377. .spec.install.spec.deployments[].spec.template.spec.volumes[].portworxVolume Description portworxVolume represents a portworx volume attached and mounted on kubelets host machine Type object Required volumeID Property Type Description fsType string fSType represents the filesystem type to mount Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs". Implicitly inferred to be "ext4" if unspecified. readOnly boolean readOnly defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. volumeID string volumeID uniquely identifies a Portworx volume 3.1.378. .spec.install.spec.deployments[].spec.template.spec.volumes[].projected Description projected items for all in one resources secrets, configmaps, and downward API Type object Property Type Description defaultMode integer defaultMode are the mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. sources array sources is the list of volume projections sources[] object Projection that may be projected along with other supported volume types 3.1.379. .spec.install.spec.deployments[].spec.template.spec.volumes[].projected.sources Description sources is the list of volume projections Type array 3.1.380. 
.spec.install.spec.deployments[].spec.template.spec.volumes[].projected.sources[] Description Projection that may be projected along with other supported volume types Type object Property Type Description configMap object configMap information about the configMap data to project downwardAPI object downwardAPI information about the downwardAPI data to project secret object secret information about the secret data to project serviceAccountToken object serviceAccountToken is information about the serviceAccountToken data to project 3.1.381. .spec.install.spec.deployments[].spec.template.spec.volumes[].projected.sources[].configMap Description configMap information about the configMap data to project Type object Property Type Description items array items if unspecified, each key-value pair in the Data field of the referenced ConfigMap will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the ConfigMap, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. items[] object Maps a string key to a path within a volume. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean optional specify whether the ConfigMap or its keys must be defined 3.1.382. .spec.install.spec.deployments[].spec.template.spec.volumes[].projected.sources[].configMap.items Description items if unspecified, each key-value pair in the Data field of the referenced ConfigMap will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the ConfigMap, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. Type array 3.1.383. .spec.install.spec.deployments[].spec.template.spec.volumes[].projected.sources[].configMap.items[] Description Maps a string key to a path within a volume. Type object Required key path Property Type Description key string key is the key to project. mode integer mode is Optional: mode bits used to set permissions on this file. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. path string path is the relative path of the file to map the key to. May not be an absolute path. May not contain the path element '..'. May not start with the string '..'. 3.1.384. .spec.install.spec.deployments[].spec.template.spec.volumes[].projected.sources[].downwardAPI Description downwardAPI information about the downwardAPI data to project Type object Property Type Description items array Items is a list of DownwardAPIVolume file items[] object DownwardAPIVolumeFile represents information to create the file containing the pod field 3.1.385. 
.spec.install.spec.deployments[].spec.template.spec.volumes[].projected.sources[].downwardAPI.items Description Items is a list of DownwardAPIVolume file Type array 3.1.386. .spec.install.spec.deployments[].spec.template.spec.volumes[].projected.sources[].downwardAPI.items[] Description DownwardAPIVolumeFile represents information to create the file containing the pod field Type object Required path Property Type Description fieldRef object Required: Selects a field of the pod: only annotations, labels, name and namespace are supported. mode integer Optional: mode bits used to set permissions on this file, must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. path string Required: Path is the relative path name of the file to be created. Must not be absolute or contain the '..' path. Must be utf-8 encoded. The first item of the relative path must not start with '..' resourceFieldRef object Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, requests.cpu and requests.memory) are currently supported. 3.1.387. .spec.install.spec.deployments[].spec.template.spec.volumes[].projected.sources[].downwardAPI.items[].fieldRef Description Required: Selects a field of the pod: only annotations, labels, name and namespace are supported. Type object Required fieldPath Property Type Description apiVersion string Version of the schema the FieldPath is written in terms of, defaults to "v1". fieldPath string Path of the field to select in the specified API version. 3.1.388. .spec.install.spec.deployments[].spec.template.spec.volumes[].projected.sources[].downwardAPI.items[].resourceFieldRef Description Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, requests.cpu and requests.memory) are currently supported. Type object Required resource Property Type Description containerName string Container name: required for volumes, optional for env vars divisor integer-or-string Specifies the output format of the exposed resources, defaults to "1" resource string Required: resource to select 3.1.389. .spec.install.spec.deployments[].spec.template.spec.volumes[].projected.sources[].secret Description secret information about the secret data to project Type object Property Type Description items array items if unspecified, each key-value pair in the Data field of the referenced Secret will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the Secret, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. items[] object Maps a string key to a path within a volume. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean optional field specify whether the Secret or its key must be defined 3.1.390. 
.spec.install.spec.deployments[].spec.template.spec.volumes[].projected.sources[].secret.items Description items if unspecified, each key-value pair in the Data field of the referenced Secret will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the Secret, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. Type array 3.1.391. .spec.install.spec.deployments[].spec.template.spec.volumes[].projected.sources[].secret.items[] Description Maps a string key to a path within a volume. Type object Required key path Property Type Description key string key is the key to project. mode integer mode is Optional: mode bits used to set permissions on this file. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. path string path is the relative path of the file to map the key to. May not be an absolute path. May not contain the path element '..'. May not start with the string '..'. 3.1.392. .spec.install.spec.deployments[].spec.template.spec.volumes[].projected.sources[].serviceAccountToken Description serviceAccountToken is information about the serviceAccountToken data to project Type object Required path Property Type Description audience string audience is the intended audience of the token. A recipient of a token must identify itself with an identifier specified in the audience of the token, and otherwise should reject the token. The audience defaults to the identifier of the apiserver. expirationSeconds integer expirationSeconds is the requested duration of validity of the service account token. As the token approaches expiration, the kubelet volume plugin will proactively rotate the service account token. The kubelet will start trying to rotate the token if the token is older than 80 percent of its time to live or if the token is older than 24 hours. Defaults to 1 hour and must be at least 10 minutes. path string path is the path relative to the mount point of the file to project the token into. 3.1.393. .spec.install.spec.deployments[].spec.template.spec.volumes[].quobyte Description quobyte represents a Quobyte mount on the host that shares a pod's lifetime Type object Required registry volume Property Type Description group string group to map volume access to Default is no group readOnly boolean readOnly here will force the Quobyte volume to be mounted with read-only permissions. Defaults to false. registry string registry represents a single or multiple Quobyte Registry services specified as a string as host:port pair (multiple entries are separated with commas) which acts as the central registry for volumes tenant string tenant owning the given Quobyte volume in the Backend Used with dynamically provisioned Quobyte volumes, value is set by the plugin user string user to map volume access to Defaults to serviceaccount user volume string volume is a string that references an already created Quobyte volume by name.
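To make the projected volume sources described in sections 3.1.378 through 3.1.392 easier to follow, the sketch below shows how they might be combined inside a single volume entry under spec.install.spec.deployments[].spec.template.spec.volumes. It is an illustrative fragment only; the volume, ConfigMap, Secret, and audience names are hypothetical.

```yaml
# Illustrative fragment only; all names are hypothetical.
volumes:
  - name: bundled-config
    projected:
      defaultMode: 0440                    # octal mode bits for created files
      sources:
        - configMap:
            name: example-operator-config  # hypothetical ConfigMap
            optional: true
            items:
              - key: settings.yaml
                path: config/settings.yaml
        - secret:
            name: example-operator-tls     # hypothetical Secret
            items:
              - key: tls.crt
                path: certs/tls.crt
                mode: 0400
        - downwardAPI:
            items:
              - path: labels
                fieldRef:
                  fieldPath: metadata.labels
        - serviceAccountToken:
            audience: example-audience     # hypothetical audience
            expirationSeconds: 3600        # must be at least 600 (10 minutes)
            path: token
```

Each source maps onto the corresponding object described above; keys listed under items are projected to the given relative paths, and unlisted keys are omitted.

3.1.394.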
.spec.install.spec.deployments[].spec.template.spec.volumes[].rbd Description rbd represents a Rados Block Device mount on the host that shares a pod's lifetime. More info: https://examples.k8s.io/volumes/rbd/README.md Type object Required image monitors Property Type Description fsType string fsType is the filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#rbd TODO: how do we prevent errors in the filesystem from compromising the machine image string image is the rados image name. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it keyring string keyring is the path to key ring for RBDUser. Default is /etc/ceph/keyring. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it monitors array (string) monitors is a collection of Ceph monitors. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it pool string pool is the rados pool name. Default is rbd. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it readOnly boolean readOnly here will force the ReadOnly setting in VolumeMounts. Defaults to false. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it secretRef object secretRef is name of the authentication secret for RBDUser. If provided overrides keyring. Default is nil. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it user string user is the rados user name. Default is admin. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it 3.1.395. .spec.install.spec.deployments[].spec.template.spec.volumes[].rbd.secretRef Description secretRef is name of the authentication secret for RBDUser. If provided overrides keyring. Default is nil. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? 3.1.396. .spec.install.spec.deployments[].spec.template.spec.volumes[].scaleIO Description scaleIO represents a ScaleIO persistent volume attached and mounted on Kubernetes nodes. Type object Required gateway secretRef system Property Type Description fsType string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Default is "xfs". gateway string gateway is the host address of the ScaleIO API Gateway. protectionDomain string protectionDomain is the name of the ScaleIO Protection Domain for the configured storage. readOnly boolean readOnly Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. secretRef object secretRef references to the secret for ScaleIO user and other sensitive information. If this is not provided, Login operation will fail. sslEnabled boolean sslEnabled Flag enable/disable SSL communication with Gateway, default false storageMode string storageMode indicates whether the storage for a volume should be ThickProvisioned or ThinProvisioned. Default is ThinProvisioned. storagePool string storagePool is the ScaleIO Storage Pool associated with the protection domain. system string system is the name of the storage system as configured in ScaleIO. 
volumeName string volumeName is the name of a volume already created in the ScaleIO system that is associated with this volume source. 3.1.397. .spec.install.spec.deployments[].spec.template.spec.volumes[].scaleIO.secretRef Description secretRef references to the secret for ScaleIO user and other sensitive information. If this is not provided, Login operation will fail. Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? 3.1.398. .spec.install.spec.deployments[].spec.template.spec.volumes[].secret Description secret represents a secret that should populate this volume. More info: https://kubernetes.io/docs/concepts/storage/volumes#secret Type object Property Type Description defaultMode integer defaultMode is Optional: mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Defaults to 0644. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. items array items If unspecified, each key-value pair in the Data field of the referenced Secret will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the Secret, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. items[] object Maps a string key to a path within a volume. optional boolean optional field specify whether the Secret or its keys must be defined secretName string secretName is the name of the secret in the pod's namespace to use. More info: https://kubernetes.io/docs/concepts/storage/volumes#secret 3.1.399. .spec.install.spec.deployments[].spec.template.spec.volumes[].secret.items Description items If unspecified, each key-value pair in the Data field of the referenced Secret will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the Secret, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. Type array 3.1.400. .spec.install.spec.deployments[].spec.template.spec.volumes[].secret.items[] Description Maps a string key to a path within a volume. Type object Required key path Property Type Description key string key is the key to project. mode integer mode is Optional: mode bits used to set permissions on this file. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. path string path is the relative path of the file to map the key to. May not be an absolute path. May not contain the path element '..'. 
May not start with the string '..'. 3.1.401. .spec.install.spec.deployments[].spec.template.spec.volumes[].storageos Description storageOS represents a StorageOS volume attached and mounted on Kubernetes nodes. Type object Property Type Description fsType string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. readOnly boolean readOnly defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. secretRef object secretRef specifies the secret to use for obtaining the StorageOS API credentials. If not specified, default values will be attempted. volumeName string volumeName is the human-readable name of the StorageOS volume. Volume names are only unique within a namespace. volumeNamespace string volumeNamespace specifies the scope of the volume within StorageOS. If no namespace is specified then the Pod's namespace will be used. This allows the Kubernetes name scoping to be mirrored within StorageOS for tighter integration. Set VolumeName to any name to override the default behaviour. Set to "default" if you are not using namespaces within StorageOS. Namespaces that do not pre-exist within StorageOS will be created. 3.1.402. .spec.install.spec.deployments[].spec.template.spec.volumes[].storageos.secretRef Description secretRef specifies the secret to use for obtaining the StorageOS API credentials. If not specified, default values will be attempted. Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? 3.1.403. .spec.install.spec.deployments[].spec.template.spec.volumes[].vsphereVolume Description vsphereVolume represents a vSphere volume attached and mounted on kubelets host machine Type object Required volumePath Property Type Description fsType string fsType is filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. storagePolicyID string storagePolicyID is the storage Policy Based Management (SPBM) profile ID associated with the StoragePolicyName. storagePolicyName string storagePolicyName is the storage Policy Based Management (SPBM) profile name. volumePath string volumePath is the path that identifies vSphere volume vmdk 3.1.404. .spec.install.spec.permissions Description Type array 3.1.405. .spec.install.spec.permissions[] Description StrategyDeploymentPermissions describe the rbac rules and service account needed by the install strategy Type object Required rules serviceAccountName Property Type Description rules array rules[] object PolicyRule holds information that describes a policy rule, but does not contain information about who the rule applies to or which namespace the rule applies to. serviceAccountName string 3.1.406. .spec.install.spec.permissions[].rules Description Type array 3.1.407. .spec.install.spec.permissions[].rules[] Description PolicyRule holds information that describes a policy rule, but does not contain information about who the rule applies to or which namespace the rule applies to. Type object Required verbs Property Type Description apiGroups array (string) APIGroups is the name of the APIGroup that contains the resources. 
If multiple API groups are specified, any action requested against one of the enumerated resources in any API group will be allowed. "" represents the core API group and "*" represents all API groups. nonResourceURLs array (string) NonResourceURLs is a set of partial urls that a user should have access to. *s are allowed, but only as the full, final step in the path. Since non-resource URLs are not namespaced, this field is only applicable for ClusterRoles referenced from a ClusterRoleBinding. Rules can either apply to API resources (such as "pods" or "secrets") or non-resource URL paths (such as "/api"), but not both. resourceNames array (string) ResourceNames is an optional white list of names that the rule applies to. An empty set means that everything is allowed. resources array (string) Resources is a list of resources this rule applies to. '*' represents all resources. verbs array (string) Verbs is a list of Verbs that apply to ALL the ResourceKinds contained in this rule. '*' represents all verbs. 3.1.408. .spec.installModes Description InstallModes specify supported installation types Type array 3.1.409. .spec.installModes[] Description InstallMode associates an InstallModeType with a flag representing if the CSV supports it Type object Required supported type Property Type Description supported boolean type string InstallModeType is a supported type of install mode for CSV installation 3.1.410. .spec.links Description A list of links related to the operator. Type array 3.1.411. .spec.links[] Description Type object Property Type Description name string url string 3.1.412. .spec.maintainers Description A list of organizational entities maintaining the operator. Type array 3.1.413. .spec.maintainers[] Description Type object Property Type Description email string name string 3.1.414. .spec.nativeAPIs Description Type array 3.1.415. .spec.nativeAPIs[] Description GroupVersionKind unambiguously identifies a kind. It doesn't anonymously include GroupVersion to avoid automatic coercion. It doesn't use a GroupVersion to avoid custom marshalling Type object Required group kind version Property Type Description group string kind string version string 3.1.416. .spec.provider Description The publishing entity behind the operator. Type object Property Type Description name string url string 3.1.417. .spec.relatedImages Description List any related images, or other container images that your Operator might require to perform their functions. This list should also include operand images. All image references should be specified by digest (SHA) and not by tag. This field is only used during catalog creation and plays no part in cluster runtime. Type array 3.1.418. .spec.relatedImages[] Description Type object Required image name Property Type Description image string name string 3.1.419. .spec.selector Description Label selector for related resources. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
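The fragment below pulls together several of the spec fields covered in sections 3.1.404 through 3.1.419 (install permissions, installModes, relatedImages, and selector). It is an illustrative sketch only; the service account name, images, and labels are hypothetical, and the rule contents are examples rather than recommendations.

```yaml
# Illustrative spec fragment; names, images, and labels are hypothetical.
spec:
  install:
    spec:
      permissions:
        - serviceAccountName: example-operator-sa
          rules:
            - apiGroups: [""]                  # "" is the core API group
              resources: ["configmaps", "secrets"]
              verbs: ["get", "list", "watch"]
            - apiGroups: ["apps"]
              resources: ["deployments"]
              verbs: ["*"]                     # '*' represents all verbs
  installModes:
    - type: OwnNamespace
      supported: true
    - type: AllNamespaces
      supported: false
  relatedImages:
    - name: example-operand                    # referenced by digest, not tag
      image: quay.io/example/operand@sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef
  selector:
    matchLabels:
      app: example-operator
```

3.1.420.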
.spec.selector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 3.1.421. .spec.selector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 3.1.422. .spec.webhookdefinitions Description Type array 3.1.423. .spec.webhookdefinitions[] Description WebhookDescription provides details to OLM about required webhooks Type object Required admissionReviewVersions generateName sideEffects type Property Type Description admissionReviewVersions array (string) containerPort integer conversionCRDs array (string) deploymentName string failurePolicy string FailurePolicyType specifies a failure policy that defines how unrecognized errors from the admission endpoint are handled. generateName string matchPolicy string MatchPolicyType specifies the type of match policy. objectSelector object A label selector is a label query over a set of resources. The result of matchLabels and matchExpressions are ANDed. An empty label selector matches all objects. A null label selector matches no objects. reinvocationPolicy string ReinvocationPolicyType specifies what type of policy the admission hook uses. rules array rules[] object RuleWithOperations is a tuple of Operations and Resources. It is recommended to make sure that all the tuple expansions are valid. sideEffects string SideEffectClass specifies the types of side effects a webhook may have. targetPort integer-or-string timeoutSeconds integer type string WebhookAdmissionType is the type of admission webhooks supported by OLM webhookPath string 3.1.424. .spec.webhookdefinitions[].objectSelector Description A label selector is a label query over a set of resources. The result of matchLabels and matchExpressions are ANDed. An empty label selector matches all objects. A null label selector matches no objects. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 3.1.425. .spec.webhookdefinitions[].objectSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 3.1.426. .spec.webhookdefinitions[].objectSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. 
operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 3.1.427. .spec.webhookdefinitions[].rules Description Type array 3.1.428. .spec.webhookdefinitions[].rules[] Description RuleWithOperations is a tuple of Operations and Resources. It is recommended to make sure that all the tuple expansions are valid. Type object Property Type Description apiGroups array (string) APIGroups is the API groups the resources belong to. '*' is all groups. If '*' is present, the length of the slice must be one. Required. apiVersions array (string) APIVersions is the API versions the resources belong to. '*' is all versions. If '*' is present, the length of the slice must be one. Required. operations array (string) Operations is the operations the admission hook cares about - CREATE, UPDATE, DELETE, CONNECT or * for all of those operations and any future admission operations that are added. If '*' is present, the length of the slice must be one. Required. resources array (string) Resources is a list of resources this rule applies to. For example: 'pods' means pods. 'pods/log' means the log subresource of pods. '*' means all resources, but not subresources. 'pods/*' means all subresources of pods. '*/scale' means all scale subresources. '*/*' means all resources and their subresources. If wildcard is present, the validation rule will ensure resources do not overlap with each other. Depending on the enclosing object, subresources might not be allowed. Required. scope string scope specifies the scope of this rule. Valid values are "Cluster", "Namespaced", and "*". "Cluster" means that only cluster-scoped resources will match this rule. Namespace API objects are cluster-scoped. "Namespaced" means that only namespaced resources will match this rule. "*" means that there are no scope restrictions. Subresources match the scope of their parent resource. Default is "*". 3.1.429. .status Description ClusterServiceVersionStatus represents information about the status of a CSV. Status may trail the actual state of a system. Type object Property Type Description certsLastUpdated string Last time the owned APIService certs were updated certsRotateAt string Time the owned APIService certs will rotate cleanup object CleanupStatus represents information about the status of cleanup while a CSV is pending deletion conditions array List of conditions, a history of state transitions conditions[] object Conditions appear in the status as a record of state transitions on the ClusterServiceVersion lastTransitionTime string Last time the status transitioned from one status to another. lastUpdateTime string Last time we updated the status message string A human readable message indicating details about why the ClusterServiceVersion is in this condition. phase string Current condition of the ClusterServiceVersion reason string A brief CamelCase message indicating details about why the ClusterServiceVersion is in this state. e.g. 'RequirementsNotMet' requirementStatus array The status of each requirement for this CSV requirementStatus[] object
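As a rough illustration of how the status schema in section 3.1.429 and its sub-objects fit together, a healthy CSV might report a status stanza along the following lines. The values are representative only and are not taken from a real cluster.

```yaml
# Representative status stanza; values are illustrative only.
status:
  phase: Succeeded
  reason: InstallSucceeded
  message: install strategy completed with no errors
  lastUpdateTime: "2024-01-01T00:00:00Z"
  lastTransitionTime: "2024-01-01T00:00:00Z"
  conditions:
    - phase: Pending
      reason: RequirementsUnknown
      message: requirements not yet checked
      lastTransitionTime: "2024-01-01T00:00:00Z"
      lastUpdateTime: "2024-01-01T00:00:00Z"
    - phase: Succeeded
      reason: InstallSucceeded
      message: install strategy completed with no errors
      lastTransitionTime: "2024-01-01T00:00:00Z"
      lastUpdateTime: "2024-01-01T00:00:00Z"
  requirementStatus:
    - group: apiextensions.k8s.io
      version: v1
      kind: CustomResourceDefinition
      name: examples.example.com           # hypothetical required CRD
      status: Present
      message: CRD is present and Established condition is true
```

3.1.430.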
.status.cleanup Description CleanupStatus represents information about the status of cleanup while a CSV is pending deletion Type object Property Type Description pendingDeletion array PendingDeletion is the list of custom resource objects that are pending deletion and blocked on finalizers. This indicates the progress of cleanup that is blocking CSV deletion or operator uninstall. pendingDeletion[] object ResourceList represents a list of resources which are of the same Group/Kind 3.1.431. .status.cleanup.pendingDeletion Description PendingDeletion is the list of custom resource objects that are pending deletion and blocked on finalizers. This indicates the progress of cleanup that is blocking CSV deletion or operator uninstall. Type array 3.1.432. .status.cleanup.pendingDeletion[] Description ResourceList represents a list of resources which are of the same Group/Kind Type object Required group instances kind Property Type Description group string instances array instances[] object kind string 3.1.433. .status.cleanup.pendingDeletion[].instances Description Type array 3.1.434. .status.cleanup.pendingDeletion[].instances[] Description Type object Required name Property Type Description name string namespace string Namespace can be empty for cluster-scoped resources 3.1.435. .status.conditions Description List of conditions, a history of state transitions Type array 3.1.436. .status.conditions[] Description Conditions appear in the status as a record of state transitions on the ClusterServiceVersion Type object Property Type Description lastTransitionTime string Last time the status transitioned from one status to another. lastUpdateTime string Last time we updated the status message string A human readable message indicating details about why the ClusterServiceVersion is in this condition. phase string Condition of the ClusterServiceVersion reason string A brief CamelCase message indicating details about why the ClusterServiceVersion is in this state. e.g. 'RequirementsNotMet' 3.1.437. .status.requirementStatus Description The status of each requirement for this CSV Type array 3.1.438. .status.requirementStatus[] Description Type object Required group kind message name status version Property Type Description dependents array dependents[] object DependentStatus is the status for a dependent requirement (to prevent infinite nesting) group string kind string message string name string status string StatusReason is a camelcased reason for the status of a RequirementStatus or DependentStatus uuid string version string 3.1.439. .status.requirementStatus[].dependents Description Type array 3.1.440. .status.requirementStatus[].dependents[] Description DependentStatus is the status for a dependent requirement (to prevent infinite nesting) Type object Required group kind status version Property Type Description group string kind string message string status string StatusReason is a camelcased reason for the status of a RequirementStatus or DependentStatus uuid string version string
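Before moving on to the API endpoints, it can help to see how the pieces of the schema fit into one manifest. The following is a deliberately minimal, illustrative ClusterServiceVersion skeleton; all names and images are hypothetical, and a real CSV typically carries many more fields (owned CRDs, descriptions, icons, and so on). The permissions block shown in the earlier fragment would sit alongside deployments under spec.install.spec.

```yaml
# Minimal illustrative ClusterServiceVersion skeleton; not a complete,
# installable example. All names and images are hypothetical.
apiVersion: operators.coreos.com/v1alpha1
kind: ClusterServiceVersion
metadata:
  name: example-operator.v0.1.0
  namespace: operators
spec:
  displayName: Example Operator
  version: 0.1.0
  provider:
    name: Example, Inc.
  maintainers:
    - name: Example Maintainer
      email: maintainer@example.com
  installModes:
    - type: AllNamespaces
      supported: true
  install:
    strategy: deployment
    spec:
      deployments:
        - name: example-operator
          spec:
            replicas: 1
            selector:
              matchLabels:
                app: example-operator
            template:
              metadata:
                labels:
                  app: example-operator
              spec:
                serviceAccountName: example-operator-sa
                containers:
                  - name: manager
                    image: quay.io/example/operator@sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef
```

3.2.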
API endpoints The following API endpoints are available: /apis/operators.coreos.com/v1alpha1/clusterserviceversions GET : list objects of kind ClusterServiceVersion /apis/operators.coreos.com/v1alpha1/namespaces/{namespace}/clusterserviceversions DELETE : delete collection of ClusterServiceVersion GET : list objects of kind ClusterServiceVersion POST : create a ClusterServiceVersion /apis/operators.coreos.com/v1alpha1/namespaces/{namespace}/clusterserviceversions/{name} DELETE : delete a ClusterServiceVersion GET : read the specified ClusterServiceVersion PATCH : partially update the specified ClusterServiceVersion PUT : replace the specified ClusterServiceVersion /apis/operators.coreos.com/v1alpha1/namespaces/{namespace}/clusterserviceversions/{name}/status GET : read status of the specified ClusterServiceVersion PATCH : partially update status of the specified ClusterServiceVersion PUT : replace status of the specified ClusterServiceVersion 3.2.1. /apis/operators.coreos.com/v1alpha1/clusterserviceversions Table 3.1. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. 
If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description list objects of kind ClusterServiceVersion Table 3.2. HTTP responses HTTP code Reponse body 200 - OK ClusterServiceVersionList schema 401 - Unauthorized Empty 3.2.2. /apis/operators.coreos.com/v1alpha1/namespaces/{namespace}/clusterserviceversions Table 3.3. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 3.4. 
Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of ClusterServiceVersion Table 3.5. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. 
resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 3.6. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind ClusterServiceVersion Table 3.7. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. 
Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. 
The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 3.8. HTTP responses HTTP code Reponse body 200 - OK ClusterServiceVersionList schema 401 - Unauthorized Empty HTTP method POST Description create a ClusterServiceVersion Table 3.9. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.10. Body parameters Parameter Type Description body ClusterServiceVersion schema Table 3.11. HTTP responses HTTP code Reponse body 200 - OK ClusterServiceVersion schema 201 - Created ClusterServiceVersion schema 202 - Accepted ClusterServiceVersion schema 401 - Unauthorized Empty 3.2.3. /apis/operators.coreos.com/v1alpha1/namespaces/{namespace}/clusterserviceversions/{name} Table 3.12. Global path parameters Parameter Type Description name string name of the ClusterServiceVersion namespace string object name and auth scope, such as for teams and projects Table 3.13. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a ClusterServiceVersion Table 3.14. 
Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 3.15. Body parameters Parameter Type Description body DeleteOptions schema Table 3.16. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified ClusterServiceVersion Table 3.17. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 3.18. HTTP responses HTTP code Reponse body 200 - OK ClusterServiceVersion schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified ClusterServiceVersion Table 3.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. 
The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 3.20. Body parameters Parameter Type Description body Patch schema Table 3.21. HTTP responses HTTP code Reponse body 200 - OK ClusterServiceVersion schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified ClusterServiceVersion Table 3.22. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.23. Body parameters Parameter Type Description body ClusterServiceVersion schema Table 3.24. HTTP responses HTTP code Reponse body 200 - OK ClusterServiceVersion schema 201 - Created ClusterServiceVersion schema 401 - Unauthorized Empty 3.2.4. /apis/operators.coreos.com/v1alpha1/namespaces/{namespace}/clusterserviceversions/{name}/status Table 3.25. Global path parameters Parameter Type Description name string name of the ClusterServiceVersion namespace string object name and auth scope, such as for teams and projects Table 3.26. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read status of the specified ClusterServiceVersion Table 3.27. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 3.28. 
HTTP responses HTTP code Reponse body 200 - OK ClusterServiceVersion schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified ClusterServiceVersion Table 3.29. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 3.30. Body parameters Parameter Type Description body Patch schema Table 3.31. HTTP responses HTTP code Reponse body 200 - OK ClusterServiceVersion schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified ClusterServiceVersion Table 3.32. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. 
The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.33. Body parameters Parameter Type Description body ClusterServiceVersion schema Table 3.34. HTTP responses HTTP code Response body 200 - OK ClusterServiceVersion schema 201 - Created ClusterServiceVersion schema 401 - Unauthorized Empty
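As a rough illustration of how these endpoints are typically exercised, the same list, read, and patch operations can be driven through the oc client, which calls the URIs above on your behalf. The namespace and CSV name below are placeholders, not values taken from this reference:
# List ClusterServiceVersions in a namespace (GET on the collection endpoint)
oc get clusterserviceversions -n openshift-operators
# Read a single ClusterServiceVersion (GET on the {name} endpoint)
oc get csv example-operator.v1.2.3 -n openshift-operators -o yaml
# Partially update a ClusterServiceVersion with a merge patch (PATCH endpoint)
oc patch csv example-operator.v1.2.3 -n openshift-operators --type merge -p '{"metadata":{"labels":{"example":"true"}}}'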
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/operatorhub_apis/clusterserviceversion-operators-coreos-com-v1alpha1
Chapter 3. Installation and update
Chapter 3. Installation and update 3.1. About OpenShift Container Platform installation The OpenShift Container Platform installation program offers four methods for deploying a cluster which are detailed in the following list: Interactive : You can deploy a cluster with the web-based Assisted Installer . This is an ideal approach for clusters with networks connected to the internet. The Assisted Installer is the easiest way to install OpenShift Container Platform, it provides smart defaults, and it performs pre-flight validations before installing the cluster. It also provides a RESTful API for automation and advanced configuration scenarios. Local Agent-based : You can deploy a cluster locally with the Agent-based Installer for disconnected environments or restricted networks. It provides many of the benefits of the Assisted Installer, but you must download and configure the Agent-based Installer first. Configuration is done with a command-line interface. This approach is ideal for disconnected environments. Automated : You can deploy a cluster on installer-provisioned infrastructure. The installation program uses each cluster host's baseboard management controller (BMC) for provisioning. You can deploy clusters in connected or disconnected environments. Full control : You can deploy a cluster on infrastructure that you prepare and maintain, which provides maximum customizability. You can deploy clusters in connected or disconnected environments. Each method deploys a cluster with the following characteristics: Highly available infrastructure with no single points of failure, which is available by default. Administrators can control what updates are applied and when. 3.1.1. About the installation program You can use the installation program to deploy each type of cluster. The installation program generates the main assets, such as Ignition config files for the bootstrap, control plane, and compute machines. You can start an OpenShift Container Platform cluster with these three machine configurations, provided you correctly configured the infrastructure. The OpenShift Container Platform installation program uses a set of targets and dependencies to manage cluster installations. The installation program has a set of targets that it must achieve, and each target has a set of dependencies. Because each target is only concerned with its own dependencies, the installation program can act to achieve multiple targets in parallel with the ultimate target being a running cluster. The installation program recognizes and uses existing components instead of running commands to create them again because the program meets the dependencies. Figure 3.1. OpenShift Container Platform installation targets and dependencies 3.1.2. About Red Hat Enterprise Linux CoreOS (RHCOS) Post-installation, each cluster machine uses Red Hat Enterprise Linux CoreOS (RHCOS) as the operating system. RHCOS is the immutable container host version of Red Hat Enterprise Linux (RHEL) and features a RHEL kernel with SELinux enabled by default. RHCOS includes the kubelet , which is the Kubernetes node agent, and the CRI-O container runtime, which is optimized for Kubernetes. Every control plane machine in an OpenShift Container Platform 4.14 cluster must use RHCOS, which includes a critical first-boot provisioning tool called Ignition. This tool enables the cluster to configure the machines. 
Operating system updates are delivered as a bootable container image, using OSTree as a backend, that is deployed across the cluster by the Machine Config Operator. Actual operating system changes are made in-place on each machine as an atomic operation by using rpm-ostree . Together, these technologies enable OpenShift Container Platform to manage the operating system like it manages any other application on the cluster, by in-place upgrades that keep the entire platform up to date. These in-place updates can reduce the burden on operations teams. If you use RHCOS as the operating system for all cluster machines, the cluster manages all aspects of its components and machines, including the operating system. Because of this, only the installation program and the Machine Config Operator can change machines. The installation program uses Ignition config files to set the exact state of each machine, and the Machine Config Operator completes more changes to the machines, such as the application of new certificates or keys, after installation. 3.1.3. Supported platforms for OpenShift Container Platform clusters In OpenShift Container Platform 4.14, you can install a cluster that uses installer-provisioned infrastructure on the following platforms: Alibaba Cloud Amazon Web Services (AWS) Bare metal Google Cloud Platform (GCP) IBM Cloud(R) Microsoft Azure Microsoft Azure Stack Hub Nutanix Red Hat OpenStack Platform (RHOSP) The latest OpenShift Container Platform release supports both the latest RHOSP long-life release and intermediate release. For complete RHOSP release compatibility, see the OpenShift Container Platform on RHOSP support matrix . VMware vSphere For these clusters, all machines, including the computer that you run the installation process on, must have direct internet access to pull images for platform containers and provide telemetry data to Red Hat. Important After installation, the following changes are not supported: Mixing cloud provider platforms. Mixing cloud provider components. For example, using a persistent storage framework from another platform on the platform where you installed the cluster. In OpenShift Container Platform 4.14, you can install a cluster that uses user-provisioned infrastructure on the following platforms: AWS Azure Azure Stack Hub Bare metal GCP IBM Power(R) IBM Z(R) or IBM(R) LinuxONE RHOSP The latest OpenShift Container Platform release supports both the latest RHOSP long-life release and intermediate release. For complete RHOSP release compatibility, see the OpenShift Container Platform on RHOSP support matrix . VMware Cloud on AWS VMware vSphere Depending on the supported cases for the platform, you can perform installations on user-provisioned infrastructure, so that you can run machines with full internet access, place your cluster behind a proxy, or perform a disconnected installation. In a disconnected installation, you can download the images that are required to install a cluster, place them in a mirror registry, and use that data to install your cluster. While you require internet access to pull images for platform containers, with a disconnected installation on vSphere or bare metal infrastructure, your cluster machines do not require direct internet access. The OpenShift Container Platform 4.x Tested Integrations page contains details about integration testing for different platforms. 3.1.4.
Installation process Except for the Assisted Installer, when you install an OpenShift Container Platform cluster, you must download the installation program from the appropriate Cluster Type page on the OpenShift Cluster Manager Hybrid Cloud Console. This console manages: REST API for accounts. Registry tokens, which are the pull secrets that you use to obtain the required components. Cluster registration, which associates the cluster identity to your Red Hat account to facilitate the gathering of usage metrics. In OpenShift Container Platform 4.14, the installation program is a Go binary file that performs a series of file transformations on a set of assets. The way you interact with the installation program differs depending on your installation type. Consider the following installation use cases: To deploy a cluster with the Assisted Installer, you must configure the cluster settings by using the Assisted Installer . There is no installation program to download and configure. After you finish setting the cluster configuration, you download a discovery ISO and then boot cluster machines with that image. You can install clusters with the Assisted Installer on Nutanix, vSphere, and bare metal with full integration, and other platforms without integration. If you install on bare metal, you must provide all of the cluster infrastructure and resources, including the networking, load balancing, storage, and individual cluster machines. To deploy clusters with the Agent-based Installer, you can download the Agent-based Installer first. You can then configure the cluster and generate a discovery image. You boot cluster machines with the discovery image, which installs an agent that communicates with the installation program and handles the provisioning for you instead of you interacting with the installation program or setting up a provisioner machine yourself. You must provide all of the cluster infrastructure and resources, including the networking, load balancing, storage, and individual cluster machines. This approach is ideal for disconnected environments. For clusters with installer-provisioned infrastructure, you delegate the infrastructure bootstrapping and provisioning to the installation program instead of doing it yourself. The installation program creates all of the networking, machines, and operating systems that are required to support the cluster, except if you install on bare metal. If you install on bare metal, you must provide all of the cluster infrastructure and resources, including the bootstrap machine, networking, load balancing, storage, and individual cluster machines. If you provision and manage the infrastructure for your cluster, you must provide all of the cluster infrastructure and resources, including the bootstrap machine, networking, load balancing, storage, and individual cluster machines. For the installation program, the program uses three sets of files during installation: an installation configuration file that is named install-config.yaml , Kubernetes manifests, and Ignition config files for your machine types. Important You can modify Kubernetes and the Ignition config files that control the underlying RHCOS operating system during installation. However, no validation is available to confirm the suitability of any modifications that you make to these objects. If you modify these objects, you might render your cluster non-functional. 
Because of this risk, modifying Kubernetes and Ignition config files is not supported unless you are following documented procedures or are instructed to do so by Red Hat support. The installation configuration file is transformed into Kubernetes manifests, and then the manifests are wrapped into Ignition config files. The installation program uses these Ignition config files to create the cluster. The installation configuration files are all pruned when you run the installation program, so be sure to back up all the configuration files that you want to use again. Important You cannot modify the parameters that you set during installation, but you can modify many cluster attributes after installation. The installation process with the Assisted Installer Installation with the Assisted Installer involves creating a cluster configuration interactively by using the web-based user interface or the RESTful API. The Assisted Installer user interface prompts you for required values and provides reasonable default values for the remaining parameters, unless you change them in the user interface or with the API. The Assisted Installer generates a discovery image, which you download and use to boot the cluster machines. The image installs RHCOS and an agent, and the agent handles the provisioning for you. You can install OpenShift Container Platform with the Assisted Installer and full integration on Nutanix, vSphere, and bare metal. Additionally, you can install OpenShift Container Platform with the Assisted Installer on other platforms without integration. OpenShift Container Platform manages all aspects of the cluster, including the operating system itself. Each machine boots with a configuration that references resources hosted in the cluster that it joins. This configuration allows the cluster to manage itself as updates are applied. If possible, use the Assisted Installer feature to avoid having to download and configure the Agent-based Installer. The installation process with Agent-based infrastructure Agent-based installation is similar to using the Assisted Installer, except that you must initially download and install the Agent-based Installer . An Agent-based installation is useful when you want the convenience of the Assisted Installer, but you need to install a cluster in a disconnected environment. If possible, use the Agent-based installation feature to avoid having to create a provisioner machine with a bootstrap VM, and then provision and maintain the cluster infrastructure. The installation process with installer-provisioned infrastructure The default installation type uses installer-provisioned infrastructure. By default, the installation program acts as an installation wizard, prompting you for values that it cannot determine on its own and providing reasonable default values for the remaining parameters. You can also customize the installation process to support advanced infrastructure scenarios. The installation program provisions the underlying infrastructure for the cluster. You can install either a standard cluster or a customized cluster. With a standard cluster, you provide minimum details that are required to install the cluster. With a customized cluster, you can specify more details about the platform, such as the number of machines that the control plane uses, the type of virtual machine that the cluster deploys, or the CIDR range for the Kubernetes service network. If possible, use this feature to avoid having to provision and maintain the cluster infrastructure. 
In all other environments, you use the installation program to generate the assets that you require to provision your cluster infrastructure. With installer-provisioned infrastructure clusters, OpenShift Container Platform manages all aspects of the cluster, including the operating system itself. Each machine boots with a configuration that references resources hosted in the cluster that it joins. This configuration allows the cluster to manage itself as updates are applied. The installation process with user-provisioned infrastructure You can also install OpenShift Container Platform on infrastructure that you provide. You use the installation program to generate the assets that you require to provision the cluster infrastructure, create the cluster infrastructure, and then deploy the cluster to the infrastructure that you provided. If you do not use infrastructure that the installation program provisioned, you must manage and maintain the cluster resources yourself. The following list details some of these self-managed resources: The underlying infrastructure for the control plane and compute machines that make up the cluster Load balancers Cluster networking, including the DNS records and required subnets Storage for the cluster infrastructure and applications If your cluster uses user-provisioned infrastructure, you have the option of adding RHEL compute machines to your cluster. Installation process details When a cluster is provisioned, each machine in the cluster requires information about the cluster. OpenShift Container Platform uses a temporary bootstrap machine during initial configuration to provide the required information to the permanent control plane. The temporary bootstrap machine boots by using an Ignition config file that describes how to create the cluster. The bootstrap machine creates the control plane machines that make up the control plane. The control plane machines then create the compute machines, which are also known as worker machines. The following figure illustrates this process: Figure 3.2. Creating the bootstrap, control plane, and compute machines After the cluster machines initialize, the bootstrap machine is destroyed. All clusters use the bootstrap process to initialize the cluster, but if you provision the infrastructure for your cluster, you must complete many of the steps manually. Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. Consider using Ignition config files within 12 hours after they are generated, because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Bootstrapping a cluster involves the following steps: The bootstrap machine boots and starts hosting the remote resources required for the control plane machines to boot. If you provision the infrastructure, this step requires manual intervention. 
The bootstrap machine starts a single-node etcd cluster and a temporary Kubernetes control plane. The control plane machines fetch the remote resources from the bootstrap machine and finish booting. If you provision the infrastructure, this step requires manual intervention. The temporary control plane schedules the production control plane to the production control plane machines. The Cluster Version Operator (CVO) comes online and installs the etcd Operator. The etcd Operator scales up etcd on all control plane nodes. The temporary control plane shuts down and passes control to the production control plane. The bootstrap machine injects OpenShift Container Platform components into the production control plane. The installation program shuts down the bootstrap machine. If you provision the infrastructure, this step requires manual intervention. The control plane sets up the compute nodes. The control plane installs additional services in the form of a set of Operators. The result of this bootstrapping process is a running OpenShift Container Platform cluster. The cluster then downloads and configures remaining components needed for the day-to-day operations, including the creation of compute machines in supported environments. Installation scope The scope of the OpenShift Container Platform installation program is intentionally narrow. It is designed for simplicity and ensured success. You can complete many more configuration tasks after installation completes. Additional resources See Available cluster customizations for details about OpenShift Container Platform configuration resources. 3.2. About the OpenShift Update Service The OpenShift Update Service (OSUS) provides update recommendations to OpenShift Container Platform, including Red Hat Enterprise Linux CoreOS (RHCOS). It provides a graph, or diagram, that contains the vertices of component Operators and the edges that connect them. The edges in the graph show which versions you can safely update to. The vertices are update payloads that specify the intended state of the managed cluster components. The Cluster Version Operator (CVO) in your cluster checks with the OpenShift Update Service to see the valid updates and update paths based on current component versions and information in the graph. When you request an update, the CVO uses the corresponding release image to update your cluster. The release artifacts are hosted in Quay as container images. To allow the OpenShift Update Service to provide only compatible updates, a release verification pipeline drives automation. Each release artifact is verified for compatibility with supported cloud platforms and system architectures, as well as other component packages. After the pipeline confirms the suitability of a release, the OpenShift Update Service notifies you that it is available. Important The OpenShift Update Service displays all recommended updates for your current cluster. If an update path is not recommended by the OpenShift Update Service, it might be because of a known issue with the update or the target release. Two controllers run during continuous update mode. The first controller continuously updates the payload manifests, applies the manifests to the cluster, and outputs the controlled rollout status of the Operators to indicate whether they are available, upgrading, or failed. The second controller polls the OpenShift Update Service to determine if updates are available. Important Only updating to a newer version is supported. 
Reverting or rolling back your cluster to a previous version is not supported. If your update fails, contact Red Hat support. During the update process, the Machine Config Operator (MCO) applies the new configuration to your cluster machines. The MCO cordons the number of nodes specified by the maxUnavailable field on the machine configuration pool and marks them unavailable. By default, this value is set to 1 . The MCO updates the affected nodes alphabetically by zone, based on the topology.kubernetes.io/zone label. If a zone has more than one node, the oldest nodes are updated first. For nodes that do not use zones, such as in bare metal deployments, the nodes are updated by age, with the oldest nodes updated first. The MCO updates the number of nodes as specified by the maxUnavailable field on the machine configuration pool at a time. The MCO then applies the new configuration and reboots the machine. Warning The default setting for maxUnavailable is 1 for all the machine config pools in OpenShift Container Platform. It is recommended to not change this value and update one control plane node at a time. Do not change this value to 3 for the control plane pool. If you use Red Hat Enterprise Linux (RHEL) machines as workers, the MCO does not update the kubelet because you must update the OpenShift API on the machines first. With the specification for the new version applied to the old kubelet, the RHEL machine cannot return to the Ready state. You cannot complete the update until the machines are available. However, the maximum number of unavailable nodes is set to ensure that normal cluster operations can continue with that number of machines out of service. The OpenShift Update Service is composed of an Operator and one or more application instances. 3.3. Support policy for unmanaged Operators The management state of an Operator determines whether an Operator is actively managing the resources for its related component in the cluster as designed. If an Operator is set to an unmanaged state, it does not respond to changes in configuration nor does it receive updates. While this can be helpful in non-production clusters or during debugging, Operators in an unmanaged state are unsupported and the cluster administrator assumes full control of the individual component configurations and upgrades. An Operator can be set to an unmanaged state using the following methods: Individual Operator configuration Individual Operators have a managementState parameter in their configuration. This can be accessed in different ways, depending on the Operator. For example, the Red Hat OpenShift Logging Operator accomplishes this by modifying a custom resource (CR) that it manages, while the Cluster Samples Operator uses a cluster-wide configuration resource. Changing the managementState parameter to Unmanaged means that the Operator is not actively managing its resources and will take no action related to the related component. Some Operators might not support this management state as it might damage the cluster and require manual recovery. Warning Changing individual Operators to the Unmanaged state renders that particular component and functionality unsupported. Reported issues must be reproduced in Managed state for support to proceed. Cluster Version Operator (CVO) overrides The spec.overrides parameter can be added to the CVO's configuration to allow administrators to provide a list of overrides to the CVO's behavior for a component.
Setting the spec.overrides[].unmanaged parameter to true for a component blocks cluster upgrades and alerts the administrator after a CVO override has been set: Disabling ownership via cluster version overrides prevents upgrades. Please remove overrides before continuing. Warning Setting a CVO override puts the entire cluster in an unsupported state. Reported issues must be reproduced after removing any overrides for support to proceed. 3.4. Next steps Selecting a cluster installation method and preparing it for users
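To make the override mechanisms in section 3.3 concrete, the following is a minimal sketch using the oc client. The Cluster Samples Operator and the network-operator deployment are only illustrative targets; any such change puts the affected component, or the whole cluster, into an unsupported state and should be reverted before engaging support:
# Individual Operator configuration: set the Cluster Samples Operator to Unmanaged
oc patch configs.samples.operator.openshift.io cluster --type merge -p '{"spec":{"managementState":"Unmanaged"}}'
# CVO override: stop the Cluster Version Operator from managing one component
oc patch clusterversion version --type merge -p '{"spec":{"overrides":[{"kind":"Deployment","group":"apps","namespace":"openshift-network-operator","name":"network-operator","unmanaged":true}]}}'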
[ "Disabling ownership via cluster version overrides prevents upgrades. Please remove overrides before continuing." ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/architecture/architecture-installation
Providing feedback on Red Hat build of OpenJDK documentation
Providing feedback on Red Hat build of OpenJDK documentation To report an error or to improve our documentation, log in to your Red Hat Jira account and submit an issue. If you do not have a Red Hat Jira account, then you will be prompted to create an account. Procedure Click the following link to create a ticket . Enter a brief description of the issue in the Summary . Provide a detailed description of the issue or enhancement in the Description . Include a URL to where the issue occurs in the documentation. Clicking Submit creates and routes the issue to the appropriate documentation team.
null
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/11/html/release_notes_for_red_hat_build_of_openjdk_11.0.20/proc-providing-feedback-on-redhat-documentation
Chapter 1. Overview
Chapter 1. Overview Installer and image creation In RHEL 8.2, you can register your system, attach RHEL subscriptions, and install from the Red Hat Content Delivery Network (CDN) before package installation. You can also register your system to Red Hat Insights during installation. Interactive GUI installations, as well as automated Kickstart installations, support these new features. For more information, see Section 5.1.1, "Installer and image creation" . Infrastructure services The Tuned system tuning tool has been rebased to version 2.13, which adds support for architecture-dependent tuning and multiple include directives. For more information, see Section 5.1.4, "Infrastructure services" . Security System-wide cryptographic policies now support customization . The administrator can now define a complete policy or modify only certain values. RHEL 8.2 includes the setools-gui and setools-console-analyses packages that provide tools for SELinux-policy analysis and data-flow inspections. SCAP Security Guide now provides a profile compliant with the Australian Cyber Security Centre (ACSC) Essential Eight Maturity Model. See Section 5.1.5, "Security" for more information. Dynamic programming languages, web and database servers Later versions of the following components are now available as new module streams: Python 3.8 Maven 3.6 See Section 5.1.10, "Dynamic programming languages, web and database servers" for details. Compiler toolsets The following compiler toolsets have been updated in RHEL 8.2: GCC Toolset 9 Clang and LLVM Toolset 9.0.1 Rust Toolset 1.41 Go Toolset 1.13 See Section 5.1.11, "Compilers and development tools" for more information. Identity Management Identity Management introduces a new command-line tool: Healthcheck . Healthcheck helps users find problems that might impact the fitness of their IdM environments. Identity Management now supports Ansible roles and modules for installation and management. This update makes installation and configuration of IdM-based solutions easier. See Section 5.1.12, "Identity Management" for more information. The web console The web console has been redesigned to use the PatternFly 4 user interface system design. A session timeout has been added to the web console to improve security. See Section 5.1.15, "The web console" for more information. Desktop Workspace switcher in the GNOME Classic environment has been modified. The switcher is now located in the right part of the bottom bar, and it is designed as a horizontal strip of thumbnails. Switching between workspaces is possible by clicking on the required thumbnail. The Direct Rendering Manager (DRM) kernel graphics subsystem has been rebased to upstream Linux kernel version 5.3. This version provides a number of enhancements over the previous version, including support for new GPUs and APUs, and various driver updates. In-place upgrade In-place upgrade from RHEL 7 to RHEL 8 The supported in-place upgrade path is: From RHEL 7.9 to RHEL 8.2 on the 64-bit Intel, IBM POWER 8 (little endian), and IBM Z architectures From RHEL 7.6 to RHEL 8.2 on architectures that require kernel version 4.14: 64-bit ARM, IBM POWER 9 (little endian), and IBM Z (Structure A). Note that these architectures remain fully supported in RHEL 7 but no longer receive minor release updates since RHEL 7.7. For more information, see Supported in-place upgrade paths for Red Hat Enterprise Linux . For instructions on performing an in-place upgrade, see Upgrading from RHEL 7 to RHEL 8 .
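In practice, the RHEL 7 to RHEL 8 in-place upgrade is driven by the Leapp utility. The following is only a sketch of the typical flow on a registered RHEL 7.9 system; exact package names and required repositories vary by release, so treat the upgrade guide as the authoritative source:
# Install the upgrade tooling (the exact package set depends on the release)
yum install -y leapp
# Run the pre-upgrade assessment and review /var/log/leapp/leapp-report.txt
leapp preupgrade
# After resolving reported inhibitors, perform the upgrade and reboot
leapp upgrade
reboot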
Notable enhancements include: You can now use additional custom repositories for an in-place upgrade from RHEL 7 to RHEL 8. It is also possible to upgrade without Red Hat Subscription Manager. You can create your own actors to migrate your custom or third-party applications using the Leapp utility. For details, see Customizing your Red Hat Enterprise Linux in-place upgrade . If you are using CentOS Linux 7 or Oracle Linux 7, you can convert your operating system to RHEL 7 using the supported convert2rhel utility prior to upgrading to RHEL 8. For instructions, see Converting from an RPM-based Linux distribution to RHEL . In-place upgrade from RHEL 6 to RHEL 8 To upgrade from RHEL 6.10 to RHEL 8.2, follow instructions in Upgrading from RHEL 6 to RHEL 8 . If you are using CentOS Linux 6 or Oracle Linux 6, you can convert your operating system to RHEL 6 using the unsupported convert2rhel utility prior to upgrading to RHEL 8. For instructions, see How to convert from CentOS Linux or Oracle Linux to RHEL. Additional resources Capabilities and limits of Red Hat Enterprise Linux 8 as compared to other versions of the system are available in the Knowledgebase article Red Hat Enterprise Linux technology capabilities and limits . Information regarding the Red Hat Enterprise Linux life cycle is provided in the Red Hat Enterprise Linux Life Cycle document. The Package manifest document provides a package listing for RHEL 8. Major differences between RHEL 7 and RHEL 8 are documented in Considerations in adopting RHEL 8 . Instructions on how to perform an in-place upgrade from RHEL 7 to RHEL 8 are provided by the document Upgrading from RHEL 7 to RHEL 8 . The Red Hat Insights service, which enables you to proactively identify, examine, and resolve known technical issues, is now available with all RHEL subscriptions. For instructions on how to install the Red Hat Insights client and register your system to the service, see the Red Hat Insights Get Started page. Red Hat Customer Portal Labs Red Hat Customer Portal Labs is a set of tools in a section of the Customer Portal available at https://access.redhat.com/labs/ . The applications in Red Hat Customer Portal Labs can help you improve performance, quickly troubleshoot issues, identify security problems, and quickly deploy and configure complex applications. Some of the most popular applications are: Registration Assistant Product Life Cycle Checker Kickstart Generator Kickstart Converter Red Hat Satellite Upgrade Helper Red Hat Code Browser JVM Options Configuration Tool Red Hat CVE Checker Red Hat Product Certificates Load Balancer Configuration Tool Yum Repository Configuration Helper Red Hat Out of Memory Analyzer
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/8.2_release_notes/overview
5.4. GFS2 File System Does Not Mount on Newly-Added Cluster Node
5.4. GFS2 File System Does Not Mount on Newly-Added Cluster Node If you add a new node to a cluster and you find that you cannot mount your GFS2 file system on that node, you may have fewer journals on the GFS2 file system than you have nodes attempting to access the GFS2 file system. You must have one journal per GFS2 host you intend to mount the file system on (with the exception of GFS2 file systems mounted with the spectator mount option set, since these do not require a journal). You can add journals to a GFS2 file system with the gfs2_jadd command, as described in Section 4.7, "Adding Journals to a File System" .
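For example, assuming the file system is mounted at /mygfs2 on an existing cluster node (a hypothetical mount point), you could check the current journal count and then add a journal for the new node as follows:
# Report the journals that currently exist on the mounted file system
gfs2_tool journals /mygfs2
# Add one more journal so the newly-added node can mount the file system
gfs2_jadd -j 1 /mygfs2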
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/global_file_system_2/s1-nogfs2mount
Chapter 13. Introduction
Chapter 13. Introduction Abstract This chapter provides an overview of all the expression languages supported by Apache Camel. 13.1. Overview of the Languages Table of expression and predicate languages Table 13.1, "Expression and Predicate Languages" gives an overview of the different syntaxes for invoking expression and predicate languages. Table 13.1. Expression and Predicate Languages Language Static Method Fluent DSL Method XML Element Annotation Artifact See Bean Integration in the Apache Camel Development Guide on the customer portal. bean() EIP ().method() method @Bean Camel core Chapter 14, Constant constant() EIP ().constant() constant @Constant Camel core Chapter 15, EL el() EIP ().el() el @EL camel-juel Chapter 17, Groovy groovy() EIP ().groovy() groovy @Groovy camel-groovy Chapter 18, Header header() EIP ().header() header @Header Camel core Chapter 19, JavaScript javaScript() EIP ().javaScript() javaScript @JavaScript camel-script Chapter 20, JoSQL sql() EIP ().sql() sql @SQL camel-josql Chapter 21, JsonPath None EIP ().jsonpath() jsonpath @JsonPath camel-jsonpath Chapter 22, JXPath None EIP ().jxpath() jxpath @JXPath camel-jxpath Chapter 23, MVEL mvel() EIP ().mvel() mvel @MVEL camel-mvel Chapter 24, The Object-Graph Navigation Language(OGNL) ognl() EIP ().ognl() ognl @OGNL camel-ognl Chapter 25, PHP (DEPRECATED) php() EIP ().php() php @PHP camel-script Chapter 26, Exchange Property property() EIP ().property() property @Property Camel core Chapter 27, Python (DEPRECATED) python() EIP ().python() python @Python camel-script Chapter 28, Ref ref() EIP ().ref() ref N/A Camel core Chapter 29, Ruby (DEPRECATED) ruby() EIP ().ruby() ruby @Ruby camel-script Chapter 30, The Simple Language / Chapter 16, The File Language simple() EIP ().simple() simple @Simple Camel core Chapter 31, SpEL spel() EIP ().spel() spel @SpEL camel-spring Chapter 32, The XPath Language xpath() EIP ().xpath() xpath @XPath Camel core Chapter 33, XQuery xquery() EIP ().xquery() xquery @XQuery camel-saxon 13.2. How to Invoke an Expression Language Prerequisites Before you can use a particular expression language, you must ensure that the required JAR files are available on the classpath. If the language you want to use is not included in the Apache Camel core, you must add the relevant JARs to your classpath. If you are using the Maven build system, you can modify the build-time classpath simply by adding the relevant dependency to your POM file. For example, if you want to use the Ruby language, add the following dependency to your POM file: If you are going to deploy your application in a Red Hat Fuse OSGi container, you also need to ensure that the relevant language features are installed (features are named after the corresponding Maven artifact). For example, to use the Groovy language in the OSGi container, you must first install the camel-groovy feature by entering the following OSGi console command: Note If you are using an expression or predicate in the routes, refer the value as an external resource by using resource:classpath:path or resource:file:path . For example, resource:classpath:com/foo/myscript.groovy . Camel on EAP deployment The camel-groovy component is supported by the Camel on EAP (Wildfly Camel) framework, which offers a simplified deployment model on the Red Hat JBoss Enterprise Application Platform (JBoss EAP) container. 
Approaches to invoking As shown in Table 13.1, "Expression and Predicate Languages" , there are several different syntaxes for invoking an expression language, depending on the context in which it is used. You can invoke an expression language: As a static method As a fluent DSL method As an XML element As an annotation As a static method Most of the languages define a static method that can be used in any context where an org.apache.camel.Expression type or an org.apache.camel.Predicate type is expected. The static method takes a string expression (or predicate) as its argument and returns an Expression object (which is usually also a Predicate object). For example, to implement a content-based router that processes messages in XML format, you could route messages based on the value of the /order/address/countryCode element, as follows: As a fluent DSL method The Java fluent DSL supports another style of invoking expression languages. Instead of providing the expression as an argument to an Enterprise Integration Pattern (EIP), you can provide the expression as a sub-clause of the DSL command. For example, instead of invoking an XPath expression as filter(xpath(" Expression ")) , you can invoke the expression as, filter().xpath(" Expression ") . For example, the preceding content-based router can be re-implemented in this style of invocation, as follows: As an XML element You can also invoke an expression language in XML, by putting the expression string inside the relevant XML element. For example, the XML element for invoking XPath in XML is xpath (which belongs to the standard Apache Camel namespace). You can use XPath expressions in an XML DSL content-based router, as follows: Alternatively, you can specify a language expression using the language element, where you specify the name of the language in the language attribute. For example, you can define an XPath expression using the language element as follows: As an annotation Language annotations are used in the context of bean integration . The annotations provide a convenient way of extracting information from a message or header and then injecting the extracted data into a bean's method parameters. For example, consider the bean, myBeanProc , which is invoked as a predicate of the filter() EIP. If the bean's checkCredentials method returns true , the message is allowed to proceed; but if the method returns false , the message is blocked by the filter. The filter pattern is implemented as follows: The implementation of the MyBeanProcessor class exploits the @XPath annotation to extract the username and password from the underlying XML message, as follows: The @XPath annotation is placed just before the parameter into which it gets injected. Notice how the XPath expression explicitly selects the text node, by appending /text() to the path, which ensures that just the content of the element is selected, not the enclosing tags. As a Camel endpoint URI Using the Camel Language component, you can invoke a supported language in an endpoint URI. There are two alternative syntaxes. To invoke a language script stored in a file (or other resource type defined by Scheme ), use the following URI syntax: Where the scheme can be file: , classpath: , or http: .
For example, the following route executes the mysimplescript.txt from the classpath: To invoke an embedded language script, use the following URI syntax: For example, to run the Simple language script stored in the script string: For more details about the Language component, see Language in the Apache Camel Component Reference Guide .
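To round out the Java examples in this chapter with a language other than XPath, the following sketch applies the Simple language in both invocation styles described earlier. It is only an illustration: the header name and endpoint URIs are placeholder values chosen for this sketch, not values taken from this guide, and both routes are assumed to be written inside a RouteBuilder.configure() method like the other Java examples.

// Java -- illustrative sketch; "customerTier", "direct:orders", and "jms:queue:priorityOrders" are assumed placeholders
// Static-method style: simple() returns a Predicate
from("direct:orders")
    .filter(simple("${header.customerTier} == 'gold'"))
    .to("jms:queue:priorityOrders");

// Fluent DSL style: the language is selected as a sub-clause of filter()
from("direct:orders")
    .filter().simple("${header.customerTier} == 'gold'")
    .to("jms:queue:priorityOrders");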
[ "<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-groovy</artifactId> <!-- Use the same version as your Camel core version --> <version>USD{camel.version}</version> </dependency>", "karaf@root> features:install camel-groovy", "from(\" SourceURL \") .choice .when( xpath(\"/order/address/countryCode = 'us'\") ) .to(\"file://countries/us/\") .when( xpath(\"/order/address/countryCode = 'uk'\") ) .to(\"file://countries/uk/\") .otherwise() .to(\"file://countries/other/\") .to(\" TargetURL \");", "from(\" SourceURL \") .choice .when(). xpath(\"/order/address/countryCode = 'us'\") .to(\"file://countries/us/\") .when(). xpath(\"/order/address/countryCode = 'uk'\") .to(\"file://countries/uk/\") .otherwise() .to(\"file://countries/other/\") .to(\" TargetURL \");", "<from uri=\"file://input/orders\"/> <choice> <when> <xpath>/order/address/countryCode = 'us'</xpath> <to uri=\"file://countries/us/\"/> </when> <when> <xpath>/order/address/countryCode = 'uk'</xpath> <to uri=\"file://countries/uk/\"/> </when> <otherwise> <to uri=\"file://countries/other/\"/> </otherwise> </choice>", "<language language=\"xpath\">/order/address/countryCode = 'us'</language>", "// Java MyBeanProcessor myBeanProc = new MyBeanProcessor(); from(\" SourceURL \") .filter().method(myBeanProc, \"checkCredentials\") .to(\" TargetURL \");", "// Java import org.apache.camel.language.XPath; public class MyBeanProcessor { boolean void checkCredentials( @XPath(\"/credentials/username/text()\") String user, @XPath(\"/credentials/password/text()\") String pass ) { // Check the user/pass credentials } }", "language:// LanguageName :resource: Scheme : Location [? Options ]", "from(\"direct:start\") .to(\"language:simple:classpath:org/apache/camel/component/language/mysimplescript.txt\") .to(\"mock:result\");", "language:// LanguageName [: Script ][? Options ]", "String script = URLEncoder.encode(\"Hello USD{body}\", \"UTF-8\"); from(\"direct:start\") .to(\"language:simple:\" + script) .to(\"mock:result\");" ]
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_development_guide/Intro
Chapter 4. MutatingWebhookConfiguration [admissionregistration.k8s.io/v1]
Chapter 4. MutatingWebhookConfiguration [admissionregistration.k8s.io/v1] Description MutatingWebhookConfiguration describes the configuration of and admission webhook that accept or reject and may change the object. Type object 4.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object metadata; More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata . webhooks array Webhooks is a list of webhooks and the affected resources and operations. webhooks[] object MutatingWebhook describes an admission webhook and the resources and operations it applies to. 4.1.1. .webhooks Description Webhooks is a list of webhooks and the affected resources and operations. Type array 4.1.2. .webhooks[] Description MutatingWebhook describes an admission webhook and the resources and operations it applies to. Type object Required name clientConfig sideEffects admissionReviewVersions Property Type Description admissionReviewVersions array (string) AdmissionReviewVersions is an ordered list of preferred AdmissionReview versions the Webhook expects. API server will try to use first version in the list which it supports. If none of the versions specified in this list supported by API server, validation will fail for this object. If a persisted webhook configuration specifies allowed versions and does not include any versions known to the API Server, calls to the webhook will fail and be subject to the failure policy. clientConfig object WebhookClientConfig contains the information to make a TLS connection with the webhook failurePolicy string FailurePolicy defines how unrecognized errors from the admission endpoint are handled - allowed values are Ignore or Fail. Defaults to Fail. Possible enum values: - "Fail" means that an error calling the webhook causes the admission to fail. - "Ignore" means that an error calling the webhook is ignored. matchConditions array MatchConditions is a list of conditions that must be met for a request to be sent to this webhook. Match conditions filter requests that have already been matched by the rules, namespaceSelector, and objectSelector. An empty list of matchConditions matches all requests. There are a maximum of 64 match conditions allowed. The exact matching logic is (in order): 1. If ANY matchCondition evaluates to FALSE, the webhook is skipped. 2. If ALL matchConditions evaluate to TRUE, the webhook is called. 3. If any matchCondition evaluates to an error (but none are FALSE): - If failurePolicy=Fail, reject the request - If failurePolicy=Ignore, the error is ignored and the webhook is skipped This is a beta feature and managed by the AdmissionWebhookMatchConditions feature gate. matchConditions[] object MatchCondition represents a condition which must by fulfilled for a request to be sent to a webhook. matchPolicy string matchPolicy defines how the "rules" list is used to match incoming requests. 
Allowed values are "Exact" or "Equivalent". - Exact: match a request only if it exactly matches a specified rule. For example, if deployments can be modified via apps/v1, apps/v1beta1, and extensions/v1beta1, but "rules" only included apiGroups:["apps"], apiVersions:["v1"], resources: ["deployments"] , a request to apps/v1beta1 or extensions/v1beta1 would not be sent to the webhook. - Equivalent: match a request if modifies a resource listed in rules, even via another API group or version. For example, if deployments can be modified via apps/v1, apps/v1beta1, and extensions/v1beta1, and "rules" only included apiGroups:["apps"], apiVersions:["v1"], resources: ["deployments"] , a request to apps/v1beta1 or extensions/v1beta1 would be converted to apps/v1 and sent to the webhook. Defaults to "Equivalent" Possible enum values: - "Equivalent" means requests should be sent to the webhook if they modify a resource listed in rules via another API group or version. - "Exact" means requests should only be sent to the webhook if they exactly match a given rule. name string The name of the admission webhook. Name should be fully qualified, e.g., imagepolicy.kubernetes.io, where "imagepolicy" is the name of the webhook, and kubernetes.io is the name of the organization. Required. namespaceSelector LabelSelector NamespaceSelector decides whether to run the webhook on an object based on whether the namespace for that object matches the selector. If the object itself is a namespace, the matching is performed on object.metadata.labels. If the object is another cluster scoped resource, it never skips the webhook. For example, to run the webhook on any objects whose namespace is not associated with "runlevel" of "0" or "1"; you will set the selector as follows: "namespaceSelector": { "matchExpressions": [ { "key": "runlevel", "operator": "NotIn", "values": [ "0", "1" ] } ] } If instead you want to only run the webhook on any objects whose namespace is associated with the "environment" of "prod" or "staging"; you will set the selector as follows: "namespaceSelector": { "matchExpressions": [ { "key": "environment", "operator": "In", "values": [ "prod", "staging" ] } ] } See https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/ for more examples of label selectors. Default to the empty LabelSelector, which matches everything. objectSelector LabelSelector ObjectSelector decides whether to run the webhook based on if the object has matching labels. objectSelector is evaluated against both the oldObject and newObject that would be sent to the webhook, and is considered to match if either object matches the selector. A null object (oldObject in the case of create, or newObject in the case of delete) or an object that cannot have labels (like a DeploymentRollback or a PodProxyOptions object) is not considered to match. Use the object selector only if the webhook is opt-in, because end users may skip the admission webhook by setting the labels. Default to the empty LabelSelector, which matches everything. reinvocationPolicy string reinvocationPolicy indicates whether this webhook should be called multiple times as part of a single admission evaluation. Allowed values are "Never" and "IfNeeded". Never: the webhook will not be called more than once in a single admission evaluation. IfNeeded: the webhook will be called at least one additional time as part of the admission evaluation if the object being admitted is modified by other admission plugins after the initial webhook call. 
Webhooks that specify this option must be idempotent, able to process objects they previously admitted. Note: * the number of additional invocations is not guaranteed to be exactly one. * if additional invocations result in further modifications to the object, webhooks are not guaranteed to be invoked again. * webhooks that use this option may be reordered to minimize the number of additional invocations. * to validate an object after all mutations are guaranteed complete, use a validating admission webhook instead. Defaults to "Never". Possible enum values: - "IfNeeded" indicates that the webhook may be called at least one additional time as part of the admission evaluation if the object being admitted is modified by other admission plugins after the initial webhook call. - "Never" indicates that the webhook must not be called more than once in a single admission evaluation. rules array Rules describes what operations on what resources/subresources the webhook cares about. The webhook cares about an operation if it matches any Rule. However, in order to prevent ValidatingAdmissionWebhooks and MutatingAdmissionWebhooks from putting the cluster in a state which cannot be recovered from without completely disabling the plugin, ValidatingAdmissionWebhooks and MutatingAdmissionWebhooks are never called on admission requests for ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects. rules[] object RuleWithOperations is a tuple of Operations and Resources. It is recommended to make sure that all the tuple expansions are valid. sideEffects string SideEffects states whether this webhook has side effects. Acceptable values are: None, NoneOnDryRun (webhooks created via v1beta1 may also specify Some or Unknown). Webhooks with side effects MUST implement a reconciliation system, since a request may be rejected by a future step in the admission chain and the side effects therefore need to be undone. Requests with the dryRun attribute will be auto-rejected if they match a webhook with sideEffects == Unknown or Some. Possible enum values: - "None" means that calling the webhook will have no side effects. - "NoneOnDryRun" means that calling the webhook will possibly have side effects, but if the request being reviewed has the dry-run attribute, the side effects will be suppressed. - "Some" means that calling the webhook will possibly have side effects. If a request with the dry-run attribute would trigger a call to this webhook, the request will instead fail. - "Unknown" means that no information is known about the side effects of calling the webhook. If a request with the dry-run attribute would trigger a call to this webhook, the request will instead fail. timeoutSeconds integer TimeoutSeconds specifies the timeout for this webhook. After the timeout passes, the webhook call will be ignored or the API call will fail based on the failure policy. The timeout value must be between 1 and 30 seconds. Default to 10 seconds. 4.1.3. .webhooks[].clientConfig Description WebhookClientConfig contains the information to make a TLS connection with the webhook Type object Property Type Description caBundle string caBundle is a PEM encoded CA bundle which will be used to validate the webhook's server certificate. If unspecified, system trust roots on the apiserver are used. service object ServiceReference holds a reference to Service.legacy.k8s.io url string url gives the location of the webhook, in standard URL form ( scheme://host:port/path ). Exactly one of url or service must be specified. 
The host should not refer to a service running in the cluster; use the service field instead. The host might be resolved via external DNS in some apiservers (e.g., kube-apiserver cannot resolve in-cluster DNS as that would be a layering violation). host may also be an IP address. Please note that using localhost or 127.0.0.1 as a host is risky unless you take great care to run this webhook on all hosts which run an apiserver which might need to make calls to this webhook. Such installs are likely to be non-portable, i.e., not easy to turn up in a new cluster. The scheme must be "https"; the URL must begin with "https://". A path is optional, and if present may be any string permissible in a URL. You may use the path to pass an arbitrary string to the webhook, for example, a cluster identifier. Attempting to use a user or basic auth e.g. "user:password@" is not allowed. Fragments ("#... ") and query parameters ("?... ") are not allowed, either. 4.1.4. .webhooks[].clientConfig.service Description ServiceReference holds a reference to Service.legacy.k8s.io Type object Required namespace name Property Type Description name string name is the name of the service. Required namespace string namespace is the namespace of the service. Required path string path is an optional URL path which will be sent in any request to this service. port integer If specified, the port on the service that hosting webhook. Default to 443 for backward compatibility. port should be a valid port number (1-65535, inclusive). 4.1.5. .webhooks[].matchConditions Description MatchConditions is a list of conditions that must be met for a request to be sent to this webhook. Match conditions filter requests that have already been matched by the rules, namespaceSelector, and objectSelector. An empty list of matchConditions matches all requests. There are a maximum of 64 match conditions allowed. The exact matching logic is (in order): 1. If ANY matchCondition evaluates to FALSE, the webhook is skipped. 2. If ALL matchConditions evaluate to TRUE, the webhook is called. 3. If any matchCondition evaluates to an error (but none are FALSE): - If failurePolicy=Fail, reject the request - If failurePolicy=Ignore, the error is ignored and the webhook is skipped This is a beta feature and managed by the AdmissionWebhookMatchConditions feature gate. Type array 4.1.6. .webhooks[].matchConditions[] Description MatchCondition represents a condition which must by fulfilled for a request to be sent to a webhook. Type object Required name expression Property Type Description expression string Expression represents the expression which will be evaluated by CEL. Must evaluate to bool. CEL expressions have access to the contents of the AdmissionRequest and Authorizer, organized into CEL variables: 'object' - The object from the incoming request. The value is null for DELETE requests. 'oldObject' - The existing object. The value is null for CREATE requests. 'request' - Attributes of the admission request(/pkg/apis/admission/types.go#AdmissionRequest). 'authorizer' - A CEL Authorizer. May be used to perform authorization checks for the principal (user or service account) of the request. See https://pkg.go.dev/k8s.io/apiserver/pkg/cel/library#Authz 'authorizer.requestResource' - A CEL ResourceCheck constructed from the 'authorizer' and configured with the request resource. Documentation on CEL: https://kubernetes.io/docs/reference/using-api/cel/ Required. 
name string Name is an identifier for this match condition, used for strategic merging of MatchConditions, as well as providing an identifier for logging purposes. A good name should be descriptive of the associated expression. Name must be a qualified name consisting of alphanumeric characters, '-', ' ' or '.', and must start and end with an alphanumeric character (e.g. 'MyName', or 'my.name', or '123-abc', regex used for validation is '([A-Za-z0-9][-A-Za-z0-9 .]*)?[A-Za-z0-9]') with an optional DNS subdomain prefix and '/' (e.g. 'example.com/MyName') Required. 4.1.7. .webhooks[].rules Description Rules describes what operations on what resources/subresources the webhook cares about. The webhook cares about an operation if it matches any Rule. However, in order to prevent ValidatingAdmissionWebhooks and MutatingAdmissionWebhooks from putting the cluster in a state which cannot be recovered from without completely disabling the plugin, ValidatingAdmissionWebhooks and MutatingAdmissionWebhooks are never called on admission requests for ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects. Type array 4.1.8. .webhooks[].rules[] Description RuleWithOperations is a tuple of Operations and Resources. It is recommended to make sure that all the tuple expansions are valid. Type object Property Type Description apiGroups array (string) APIGroups is the API groups the resources belong to. ' ' is all groups. If ' ' is present, the length of the slice must be one. Required. apiVersions array (string) APIVersions is the API versions the resources belong to. ' ' is all versions. If ' ' is present, the length of the slice must be one. Required. operations array (string) Operations is the operations the admission hook cares about - CREATE, UPDATE, DELETE, CONNECT or * for all of those operations and any future admission operations that are added. If '*' is present, the length of the slice must be one. Required. resources array (string) Resources is a list of resources this rule applies to. For example: 'pods' means pods. 'pods/log' means the log subresource of pods. ' ' means all resources, but not subresources. 'pods/ ' means all subresources of pods. ' /scale' means all scale subresources. ' /*' means all resources and their subresources. If wildcard is present, the validation rule will ensure resources do not overlap with each other. Depending on the enclosing object, subresources might not be allowed. Required. scope string scope specifies the scope of this rule. Valid values are "Cluster", "Namespaced", and " " "Cluster" means that only cluster-scoped resources will match this rule. Namespace API objects are cluster-scoped. "Namespaced" means that only namespaced resources will match this rule. " " means that there are no scope restrictions. Subresources match the scope of their parent resource. Default is "*". 4.2. API endpoints The following API endpoints are available: /apis/admissionregistration.k8s.io/v1/mutatingwebhookconfigurations DELETE : delete collection of MutatingWebhookConfiguration GET : list or watch objects of kind MutatingWebhookConfiguration POST : create a MutatingWebhookConfiguration /apis/admissionregistration.k8s.io/v1/watch/mutatingwebhookconfigurations GET : watch individual changes to a list of MutatingWebhookConfiguration. deprecated: use the 'watch' parameter with a list operation instead. 
/apis/admissionregistration.k8s.io/v1/mutatingwebhookconfigurations/{name} DELETE : delete a MutatingWebhookConfiguration GET : read the specified MutatingWebhookConfiguration PATCH : partially update the specified MutatingWebhookConfiguration PUT : replace the specified MutatingWebhookConfiguration /apis/admissionregistration.k8s.io/v1/watch/mutatingwebhookconfigurations/{name} GET : watch changes to an object of kind MutatingWebhookConfiguration. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. 4.2.1. /apis/admissionregistration.k8s.io/v1/mutatingwebhookconfigurations HTTP method DELETE Description delete collection of MutatingWebhookConfiguration Table 4.1. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 4.2. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind MutatingWebhookConfiguration Table 4.3. HTTP responses HTTP code Reponse body 200 - OK MutatingWebhookConfigurationList schema 401 - Unauthorized Empty HTTP method POST Description create a MutatingWebhookConfiguration Table 4.4. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.5. Body parameters Parameter Type Description body MutatingWebhookConfiguration schema Table 4.6. HTTP responses HTTP code Reponse body 200 - OK MutatingWebhookConfiguration schema 201 - Created MutatingWebhookConfiguration schema 202 - Accepted MutatingWebhookConfiguration schema 401 - Unauthorized Empty 4.2.2. /apis/admissionregistration.k8s.io/v1/watch/mutatingwebhookconfigurations HTTP method GET Description watch individual changes to a list of MutatingWebhookConfiguration. deprecated: use the 'watch' parameter with a list operation instead. Table 4.7. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 4.2.3. /apis/admissionregistration.k8s.io/v1/mutatingwebhookconfigurations/{name} Table 4.8. 
Global path parameters Parameter Type Description name string name of the MutatingWebhookConfiguration HTTP method DELETE Description delete a MutatingWebhookConfiguration Table 4.9. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 4.10. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified MutatingWebhookConfiguration Table 4.11. HTTP responses HTTP code Reponse body 200 - OK MutatingWebhookConfiguration schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified MutatingWebhookConfiguration Table 4.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.13. HTTP responses HTTP code Reponse body 200 - OK MutatingWebhookConfiguration schema 201 - Created MutatingWebhookConfiguration schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified MutatingWebhookConfiguration Table 4.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. 
This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.15. Body parameters Parameter Type Description body MutatingWebhookConfiguration schema Table 4.16. HTTP responses HTTP code Reponse body 200 - OK MutatingWebhookConfiguration schema 201 - Created MutatingWebhookConfiguration schema 401 - Unauthorized Empty 4.2.4. /apis/admissionregistration.k8s.io/v1/watch/mutatingwebhookconfigurations/{name} Table 4.17. Global path parameters Parameter Type Description name string name of the MutatingWebhookConfiguration HTTP method GET Description watch changes to an object of kind MutatingWebhookConfiguration. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 4.18. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty
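To tie the fields described above together, the following is a minimal illustrative manifest. It is only a sketch: the metadata name, webhook name, namespace, service name, path, and rule values are assumptions chosen for the example, not values defined by this API reference, and caBundle is omitted so the apiserver's system trust roots are used.

apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: example-mutating-webhook       # assumed name
webhooks:
- name: pod-defaulter.example.com      # assumed; must be fully qualified
  admissionReviewVersions: ["v1"]
  sideEffects: None
  failurePolicy: Fail
  reinvocationPolicy: Never
  timeoutSeconds: 10
  clientConfig:
    service:
      namespace: example-namespace     # assumed namespace
      name: example-webhook-service    # assumed Service name
      path: /mutate
      port: 443
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["pods"]
    scope: "Namespaced"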
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/extension_apis/mutatingwebhookconfiguration-admissionregistration-k8s-io-v1
Chapter 2. Creating a realm and a user
Chapter 2. Creating a realm and a user The first use of the Red Hat Single Sign-On admin console is to create a realm and create a user in that realm. You use that user to log in to your new realm and visit the built-in account console, to which all users have access. 2.1. Realms and users When you log in to the admin console, you work in a realm, which is a space where you manage objects. Two types of realms exist: Master realm - This realm was created for you when you first started Red Hat Single Sign-On. It contains the admin account you created at the first login. You use this realm only to create other realms. Other realms - These realms are created by the admin in the master realm. In these realms, administrators create users and applications. The applications are owned by the users. 2.2. Creating a realm As the admin in the master realm, you create the realms where administrators create users and applications. Prerequisites Red Hat Single Sign-On is installed. You have the initial admin account for the admin console. Procedure Go to http://localhost:8080/auth/admin/ and log in to the Red Hat Single Sign-On admin console using the admin account. From the Master menu, click Add Realm . When you are logged in to the master realm, this menu lists all other realms. Type demo in the Name field. A new realm Note The realm name is case-sensitive, so make note of the case that you use. Click Create . The main admin console page opens with realm set to demo . Demo realm Switch between managing the master realm and the realm you just created by clicking entries in the Select realm drop-down list. 2.3. Creating a user In the demo realm, you create a new user and a temporary password for that new user. Procedure From the menu, click Users to open the user list page. On the right side of the empty user list, click Add User to open the Add user page. Enter a name in the Username field. This is the only required field. Add user page Flip the Email Verified switch to On and click Save . The management page for the new user opens. Click the Credentials tab to set a temporary password for the new user. Type a new password and confirm it. Click Set Password to set the user password to the new one you specified. Manage Credentials page Note This password is temporary and the user will be required to change it at the first login. If you prefer to create a password that is persistent, flip the Temporary switch to Off and click Set Password . 2.4. Logging into the Account Console Every user in a realm has access to the account console. You use this console to update your profile information and change your credentials. You can now test logging in with that user in the realm that you created. Procedure Log out of the admin console by opening the user menu and selecting Sign Out . Go to http://localhost:8080/auth/realms/demo/account and log in to your demo realm as the user that you just created. When you are asked to supply a new password, enter a password that you can remember. Update password The account console opens for this user. Account console Complete the required fields with any values to test using this page. steps You are now ready for the final procedure, which is to secure a sample application that runs on JBoss EAP. See Securing a sample application .
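If you prefer to script these steps rather than use the admin console, the Admin CLI ( kcadm.sh ) that ships with the server distribution can create the same realm and user. The following is only a sketch; it assumes the server runs at http://localhost:8080/auth, the admin account is named admin, and the new user is named myuser — adjust these values for your installation.

# Authenticate against the master realm as the admin account (you are prompted for the password)
bin/kcadm.sh config credentials --server http://localhost:8080/auth --realm master --user admin

# Create the demo realm
bin/kcadm.sh create realms -s realm=demo -s enabled=true

# Create a user in the demo realm with the email marked as verified
bin/kcadm.sh create users -r demo -s username=myuser -s enabled=true -s emailVerified=true

# Set a temporary password that the user must change at the first login
bin/kcadm.sh set-password -r demo --username myuser --new-password changeme --temporary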
null
https://docs.redhat.com/en/documentation/red_hat_single_sign-on/7.6/html/getting_started_guide/creating-first-realm_
Data Security and Hardening Guide
Data Security and Hardening Guide Red Hat Ceph Storage 4 Red Hat Ceph Storage Data Security and Hardening Guide Red Hat Ceph Storage Documentation Team
null
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/4/html/data_security_and_hardening_guide/index
D.11. Toolbar Menu
D.11. Toolbar Menu Click the upside-down triangle icon to open the View Menu, which provides various options including sorting, filtering, column display, and more.
null
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/user_guide_volume_1_teiid_designer/toolbar_menu
Chapter 10. Thread synchronization mechanisms in RHEL for Real Time
Chapter 10. Thread synchronization mechanisms in RHEL for Real Time In real-time, when two or more threads need access to a shared resource at the same time, the threads coordinate using a thread synchronization mechanism. Thread synchronization ensures that only one thread uses the shared resource at a time. The three thread synchronization mechanisms used on Linux are mutexes, barriers, and condition variables ( condvars ). 10.1. Mutexes The term mutex derives from mutual exclusion. The mutual exclusion object synchronizes access to a resource. It is a mechanism that ensures only one thread can acquire a mutex at a time. The mutex algorithm creates serial access to each section of code, so that only one thread executes the code at any one time. Mutexes are created using an attribute object known as the mutex attribute object. It is an abstract object, which contains several attributes that depend on the POSIX options you choose to implement. The attribute object is defined with a pthread_mutexattr_t variable, and the mutex itself with a pthread_mutex_t variable. The attribute object stores the attributes defined for the mutex. The pthread_mutex_init(&my_mutex, &my_mutex_attr) , pthread_mutexattr_setrobust() , and pthread_mutexattr_getrobust() functions return 0 when successful. On error, they return the error number. In real-time, you can either retain the attribute object to initialize more mutexes of the same type, or you can clean up (destroy) the attribute object. The mutex is not affected in either case. Mutexes come in standard and advanced types. Standard mutexes The real-time standard mutexes are private, non-recursive, non-robust, and non-priority inheritance capable mutexes. Initializing a pthread_mutex_t using pthread_mutex_init(&my_mutex, &my_mutex_attr) creates a standard mutex. When using the standard mutex type, your application may not benefit from the advantages provided by the pthreads API and the RHEL for Real Time kernel. Advanced mutexes Mutexes defined with additional capabilities are called advanced mutexes. Advanced capabilities include priority inheritance, robust behavior of a mutex, and shared and private mutexes. For example, for robust mutexes, calling the pthread_mutexattr_setrobust() function sets the robust attribute. Similarly, using the attribute PTHREAD_PROCESS_SHARED allows any thread to operate on the mutex, provided the thread has access to its allocated memory. The attribute PTHREAD_PROCESS_PRIVATE sets a private mutex. A non-robust mutex does not release automatically and stays locked until you manually release it. Additional resources futex(7) man page on your system pthread_mutex_destroy(P) man page on your system 10.2. Barriers Barriers operate in a very different way when compared to other thread synchronization methods. Barriers define a point in the code where all active threads stop until all threads and processes reach this barrier. Barriers are used in situations when a running application needs to ensure that all threads have completed specific tasks before execution can continue. A barrier in real-time takes the following two variables: The first variable records the stop and pass state of the barrier. The second variable records the total number of threads that enter the barrier. The barrier sets the state to pass only when the specified number of threads reach the defined barrier. When the barrier state is set to pass , the threads and processes proceed further. 
The pthread_barrier_init() function allocates the required resources to use the defined barrier and initializes it with the attributes referenced by the attr attribute object. The pthread_barrier_init() and pthread_barrier_destroy() functions return zero when successful. On error, they return an error number. 10.3. Condition variables In real-time, a condition variable ( condvar ) is a POSIX thread construct that waits for a particular condition to be achieved before proceeding. In general, the signaled condition relates to the state of data that the thread shares with another thread. For example, a condvar can be used to signal a data entry into a processing queue and a thread waiting to process that data from the queue. Using the pthread_cond_init() function, you can initialize a condition variable. The pthread_cond_init() , pthread_cond_wait() , and pthread_cond_signal() functions return zero when successful. On error, they return the error number. 10.4. Mutex classes The following mutex options provide guidance on the mutex classes to consider when writing or porting an application. Table 10.1. Mutex options Advanced mutexes Description Shared mutexes Defines shared access for multiple threads to acquire a mutex at a given time. Shared mutexes can create latency. The attribute is PTHREAD_PROCESS_SHARED . Private mutexes Ensures that only the threads created within the same process can access the mutex. The attribute is PTHREAD_PROCESS_PRIVATE . Real-time priority inheritance Temporarily raises the priority of the lower priority task that owns the mutex above the priority of a waiting higher priority task. When the owning task releases the mutex, it drops back to its original priority, permitting the higher priority task to run. The attribute is PTHREAD_PRIO_INHERIT . Robust mutexes Sets robust mutexes to be released automatically when the owning thread stops. The value substring NP in the string PTHREAD_MUTEX_ROBUST_NP indicates that robust mutexes are non-POSIX or not portable. Additional resources futex(7) man page on your system 10.5. Thread synchronization functions The following list of functions and descriptions provides information on the functions to use for the thread synchronization mechanisms on the real-time kernel. Table 10.2. Functions Function Description pthread_mutexattr_init(&my_mutex_attr) Initializes the mutex attribute object specified by attr with the default values for all attributes. pthread_mutexattr_destroy(&my_mutex_attr) Destroys the specified mutex attribute object. You can re-initialize it with pthread_mutexattr_init() . pthread_mutexattr_setrobust() Specifies the PTHREAD_MUTEX_ROBUST attribute of a mutex. The PTHREAD_MUTEX_ROBUST attribute allows a thread to stop without unlocking the mutex. A future call to own this mutex succeeds automatically and returns the value EOWNERDEAD to indicate that the previous mutex owner no longer exists. pthread_mutexattr_getrobust() Queries the PTHREAD_MUTEX_ROBUST attribute of a mutex. pthread_barrier_init() Allocates the required resources to use the barrier and initializes it with the attribute object attr . If attr is NULL, it applies the default values. pthread_cond_init() Initializes a condition variable. The cond argument defines the object to initialize with the attributes in the condition variable attribute object attr . If attr is NULL, it applies the default values. pthread_cond_wait() Blocks a thread execution until it receives a signal from another thread. 
In addition, a call to this function releases the associated mutex lock before blocking. The cond argument defines the pthread_cond_t object for a thread to block on. The mutex argument specifies the mutex to release while waiting. pthread_cond_signal() Unblocks at least one of the threads that are blocked on the specified condition variable. The cond argument specifies the pthread_cond_t object on which blocked threads are signaled.
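As a worked illustration of the attribute-object workflow described in this chapter, the following minimal C sketch initializes an advanced mutex that combines the priority inheritance and robust capabilities, and shows the EOWNERDEAD recovery path. All calls are standard pthread functions, but the structure, function names, and reduced error handling are illustrative rather than taken from this guide.

#include <errno.h>
#include <pthread.h>

static pthread_mutex_t my_mutex;

/* Initialize an advanced mutex: priority inheritance plus robust behavior. */
static int init_advanced_mutex(void)
{
    pthread_mutexattr_t my_mutex_attr;
    int rc;

    rc = pthread_mutexattr_init(&my_mutex_attr);          /* attribute object with default values */
    if (rc != 0)
        return rc;
    pthread_mutexattr_setprotocol(&my_mutex_attr, PTHREAD_PRIO_INHERIT);
    pthread_mutexattr_setrobust(&my_mutex_attr, PTHREAD_MUTEX_ROBUST);

    rc = pthread_mutex_init(&my_mutex, &my_mutex_attr);   /* create the mutex with these attributes */
    pthread_mutexattr_destroy(&my_mutex_attr);            /* the mutex is not affected */
    return rc;
}

/* Lock the mutex, recovering shared state if the previous owner died. */
static void enter_critical_section(void)
{
    int rc = pthread_mutex_lock(&my_mutex);
    if (rc == EOWNERDEAD) {
        /* Recover the protected data here, then mark the mutex consistent. */
        pthread_mutex_consistent(&my_mutex);
    }
    /* ... critical section ... */
    pthread_mutex_unlock(&my_mutex);
}

A barrier or condition variable follows the same general pattern: initialize it with pthread_barrier_init() or pthread_cond_init() , use it from the cooperating threads, and destroy it when it is no longer needed.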
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_for_real_time/9/html/understanding_rhel_for_real_time/assembly_thread-synchronization-mechanisms-in-rhel-for-real-time_understanding-rhel-for-real-time-core-concepts
Chapter 4. Adopting Red Hat OpenStack Platform control plane services
Chapter 4. Adopting Red Hat OpenStack Platform control plane services Adopt your Red Hat OpenStack Platform 17.1 control plane services to deploy them in the Red Hat OpenStack Services on OpenShift (RHOSO) 18.0 control plane. 4.1. Adopting the Identity service To adopt the Identity service (keystone), you patch an existing OpenStackControlPlane custom resource (CR) where the Identity service is disabled. The patch starts the service with the configuration parameters that are provided by the Red Hat OpenStack Platform (RHOSP) environment. Prerequisites Create the keystone secret that includes the Fernet keys that were copied from the RHOSP environment: Procedure Patch the OpenStackControlPlane CR to deploy the Identity service: Create an alias to use the openstack command in the Red Hat OpenStack Services on OpenShift (RHOSO) deployment: Remove services and endpoints that still point to the RHOSP control plane, excluding the Identity service and its endpoints: Verification Verify that you can access the OpenStackClient pod. For more information, see Accessing the OpenStackClient pod in Maintaining the Red Hat OpenStack Services on OpenShift deployment . Confirm that the Identity service endpoints are defined and are pointing to the control plane FQDNs: 4.2. Adopting the Key Manager service To adopt the Key Manager service (barbican), you patch an existing OpenStackControlPlane custom resource (CR) where Key Manager service is disabled. The patch starts the service with the configuration parameters that are provided by the Red Hat OpenStack Platform (RHOSP) environment. The Key Manager service adoption is complete if you see the following results: The BarbicanAPI , BarbicanWorker , and BarbicanKeystoneListener services are up and running. Keystone endpoints are updated, and the same crypto plugin of the source cloud is available. Note This procedure configures the Key Manager service to use the simple_crypto back end. Additional back ends, such as PKCS11 and DogTag, are currently not supported in Red Hat OpenStack Services on OpenShift (RHOSO). Procedure Add the kek secret: Patch the OpenStackControlPlane CR to deploy the Key Manager service: Verification Ensure that the Identity service (keystone) endpoints are defined and are pointing to the control plane FQDNs: Ensure that Barbican API service is registered in the Identity service: List the secrets: 4.3. Adopting the Networking service To adopt the Networking service (neutron), you patch an existing OpenStackControlPlane custom resource (CR) that has the Networking service disabled. The patch starts the service with the configuration parameters that are provided by the Red Hat OpenStack Platform (RHOSP) environment. The Networking service adoption is complete if you see the following results: The NeutronAPI service is running. The Identity service (keystone) endpoints are updated, and the same back end of the source cloud is available. Prerequisites Ensure that Single Node OpenShift or OpenShift Local is running in the Red Hat OpenShift Container Platform (RHOCP) cluster. Adopt the Identity service. For more information, see Adopting the Identity service . Migrate your OVN databases to ovsdb-server instances that run in the Red Hat OpenShift Container Platform (RHOCP) cluster. For more information, see Migrating OVN data . 
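The procedure below enables the service with an OpenStackControlPlane patch that follows the same pattern as the other control plane services in this chapter. The following is only an illustrative sketch of that patch; the load balancer IP, network attachments, database instance, and secret names are assumptions that must match your environment, and the patch is applied with the same oc patch openstackcontrolplane command pattern used for the other services.

spec:
  neutron:
    enabled: true
    apiOverride:
      route: {}
    template:
      databaseInstance: openstack
      secret: osp-secret
      networkAttachments:
      - internalapi
      override:
        service:
          internal:
            metadata:
              annotations:
                metallb.universe.tf/address-pool: internalapi
                metallb.universe.tf/allow-shared-ip: internalapi
                metallb.universe.tf/loadBalancerIPs: 172.17.0.80
            spec:
              type: LoadBalancer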
Procedure Patch the OpenStackControlPlane CR to deploy the Networking service: Note: If neutron-dhcp-agent was used in the OSP-17.1 deployment and should still be used after adoption, please enable also dhcp_agent_notification for neutron-api service. This can be done with patch: Verification Inspect the resulting Networking service pods: Ensure that the Neutron API service is registered in the Identity service: Create sample resources so that you can test whether the user can create networks, subnets, ports, or routers: 4.4. Adopting the Object Storage service If you are using Object Storage as a service, adopt the Object Storage service (swift) to the Red Hat OpenStack Services on OpenShift (RHOSO) environment. If you are using the Object Storage API of the Ceph Object Gateway (RGW), skip the following procedure. Prerequisites The Object Storage service storage back-end services are running in the Red Hat OpenStack Platform (RHOSP) deployment. The storage network is properly configured on the Red Hat OpenShift Container Platform (RHOCP) cluster. For more information, see Preparing Red Hat OpenShift Container Platform for Red Hat OpenStack Services on OpenShift in Deploying Red Hat OpenStack Services on OpenShift . Procedure Create the swift-conf secret that includes the Object Storage service hash path suffix and prefix: USD oc apply -f - <<EOF apiVersion: v1 kind: Secret metadata: name: swift-conf namespace: openstack type: Opaque data: swift.conf: USD(USDCONTROLLER1_SSH sudo cat /var/lib/config-data/puppet-generated/swift/etc/swift/swift.conf | base64 -w0) EOF Create the swift-ring-files ConfigMap that includes the Object Storage service ring files: USD oc apply -f - <<EOF apiVersion: v1 kind: ConfigMap metadata: name: swift-ring-files binaryData: swiftrings.tar.gz: USD(USDCONTROLLER1_SSH "cd /var/lib/config-data/puppet-generated/swift/etc/swift && tar cz *.builder *.ring.gz backups/ | base64 -w0") account.ring.gz: USD(USDCONTROLLER1_SSH "base64 -w0 /var/lib/config-data/puppet-generated/swift/etc/swift/account.ring.gz") container.ring.gz: USD(USDCONTROLLER1_SSH "base64 -w0 /var/lib/config-data/puppet-generated/swift/etc/swift/container.ring.gz") object.ring.gz: USD(USDCONTROLLER1_SSH "base64 -w0 /var/lib/config-data/puppet-generated/swift/etc/swift/object.ring.gz") EOF Patch the OpenStackControlPlane custom resource to deploy the Object Storage service: USD oc patch openstackcontrolplane openstack --type=merge --patch ' spec: swift: enabled: true template: memcachedInstance: memcached swiftRing: ringReplicas: 1 swiftStorage: replicas: 0 networkAttachments: - storage storageClass: local-storage 1 storageRequest: 10Gi swiftProxy: secret: osp-secret replicas: 1 passwordSelectors: service: SwiftPassword serviceUser: swift override: service: internal: metadata: annotations: metallb.universe.tf/address-pool: internalapi metallb.universe.tf/allow-shared-ip: internalapi metallb.universe.tf/loadBalancerIPs: 172.17.0.80 spec: type: LoadBalancer networkAttachments: 2 - storage ' 1 Must match the RHOSO deployment storage class. 2 Must match the network attachment for the Object Storage service configuration from the RHOSP deployment. Verification Inspect the resulting Object Storage service pods: Verify that the Object Storage proxy service is registered in the Identity service (keystone): Verify that you are able to upload and download objects: Note The Object Storage data is still stored on the existing RHOSP nodes. 
For more information about migrating the actual data from the RHOSP deployment to the RHOSO deployment, see Migrating the Object Storage service (swift) data from RHOSP to Red Hat OpenStack Services on OpenShift (RHOSO) nodes . 4.5. Adopting the Image service To adopt the Image Service (glance) you patch an existing OpenStackControlPlane custom resource (CR) that has the Image service disabled. The patch starts the service with the configuration parameters that are provided by the Red Hat OpenStack Platform (RHOSP) environment. The Image service adoption is complete if you see the following results: The GlanceAPI service up and running. The Identity service endpoints are updated, and the same back end of the source cloud is available. To complete the Image service adoption, ensure that your environment meets the following criteria: You have a running director environment (the source cloud). You have a Single Node OpenShift or OpenShift Local that is running in the Red Hat OpenShift Container Platform (RHOCP) cluster. Optional: You can reach an internal/external Ceph cluster by both crc and director. If you have image quotas in RHOSP 17.1, these quotas are transferred to Red Hat OpenStack Services on OpenShift (RHOSO) 18.0 because the image quota system in 18.0 is disabled by default. For more information about enabling image quotas in 18.0, see Configuring image quotas in Customizing persistent storage . If you enable image quotas in RHOSO 18.0, the new quotas replace the legacy quotas from RHOSP 17.1. 4.5.1. Adopting the Image service that is deployed with a Object Storage service back end Adopt the Image Service (glance) that you deployed with an Object Storage service (swift) back end in the Red Hat OpenStack Platform (RHOSP) environment. The control plane glanceAPI instance is deployed with the following configuration. You use this configuration in the patch manifest that deploys the Image service with the object storage back end: Prerequisites You have completed the adoption steps. Procedure Create a new file, for example, glance_swift.patch , and include the following content: Note The Object Storage service as a back end establishes a dependency with the Image service. Any deployed GlanceAPI instances do not work if the Image service is configured with the Object Storage service that is not available in the OpenStackControlPlane custom resource. After the Object Storage service, and in particular SwiftProxy , is adopted, you can proceed with the GlanceAPI adoption. For more information, see Adopting the Object Storage service . Verify that SwiftProxy is available: Patch the GlanceAPI service that is deployed in the control plane: 4.5.2. Adopting the Image service that is deployed with a Block Storage service back end Adopt the Image Service (glance) that you deployed with a Block Storage service (cinder) back end in the Red Hat OpenStack Platform (RHOSP) environment. The control plane glanceAPI instance is deployed with the following configuration. You use this configuration in the patch manifest that deploys the Image service with the block storage back end: Prerequisites You have completed the adoption steps. Procedure Create a new file, for example glance_cinder.patch , and include the following content: Note The Block Storage service as a back end establishes a dependency with the Image service. Any deployed GlanceAPI instances do not work if the Image service is configured with the Block Storage service that is not available in the OpenStackControlPlane custom resource. 
After the Block Storage service, and in particular CinderVolume , is adopted, you can proceed with the GlanceAPI adoption. For more information, see Adopting the Block Storage service . Verify that CinderVolume is available: Patch the GlanceAPI service that is deployed in the control plane: 4.5.3. Adopting the Image service that is deployed with an NFS back end Important This content in this section is available in this release as a Technology Preview , and therefore is not fully supported by Red Hat. It should only be used for testing, and should not be deployed in a production environment. For more information, see Technology Preview . Adopt the Image Service (glance) that you deployed with an NFS back end. To complete the following procedure, ensure that your environment meets the following criteria: The Storage network is propagated to the Red Hat OpenStack Platform (RHOSP) control plane. The Image service can reach the Storage network and connect to the nfs-server through the port 2049 . Prerequisites You have completed the adoption steps. In the source cloud, verify the NFS parameters that the overcloud uses to configure the Image service back end. Specifically, in yourdirector heat templates, find the following variables that override the default content that is provided by the glance-nfs.yaml file in the /usr/share/openstack-tripleo-heat-templates/environments/storage directory: Note In this example, the GlanceBackend variable shows that the Image service has no notion of an NFS back end. The variable is using the File driver and, in the background, the filesystem_store_datadir . The filesystem_store_datadir is mapped to the export value provided by the GlanceNfsShare variable instead of /var/lib/glance/images/ . If you do not export the GlanceNfsShare through a network that is propagated to the adopted Red Hat OpenStack Services on OpenShift (RHOSO) control plane, you must stop the nfs-server and remap the export to the storage network. Before doing so, ensure that the Image service is stopped in the source Controller nodes. In the control plane, the Image service is attached to the Storage network, then propagated through the associated NetworkAttachmentsDefinition custom resource (CR), and the resulting pods already have the right permissions to handle the Image service traffic through this network. In a deployed RHOSP control plane, you can verify that the network mapping matches with what has been deployed in the director-based environment by checking both the NodeNetworkConfigPolicy ( nncp ) and the NetworkAttachmentDefinition ( net-attach-def ). The following is an example of the output that you should check in the Red Hat OpenShift Container Platform (RHOCP) environment to make sure that there are no issues with the propagated networks: Procedure Adopt the Image service and create a new default GlanceAPI instance that is connected with the existing NFS share: Replace <ip_address> with the IP address that you use to reach the nfs-server . Replace <exported_path> with the exported path in the nfs-server . 
Patch the OpenStackControlPlane CR to deploy the Image service with an NFS back end: Verification When GlanceAPI is active, confirm that you can see a single API instance: Ensure that the description of the pod reports the following output: Check that the mountpoint that points to /var/lib/glance/images is mapped to the expected nfs server ip and nfs path that you defined in the new default GlanceAPI instance: Confirm that the UUID is created in the exported directory on the NFS node. For example: On the nfs-server node, the same uuid is in the exported /var/nfs : 4.5.4. Adopting the Image service that is deployed with a Red Hat Ceph Storage back end Adopt the Image Service (glance) that you deployed with a Red Hat Ceph Storage back end. Use the customServiceConfig parameter to inject the right configuration to the GlanceAPI instance. Prerequisites You have completed the adoption steps. Ensure that the Ceph-related secret ( ceph-conf-files ) is created in the openstack namespace and that the extraMounts property of the OpenStackControlPlane custom resource (CR) is configured properly. For more information, see Configuring a Ceph back end . Note If you backed up your Red Hat OpenStack Platform (RHOSP) services configuration file from the original environment, you can compare it with the confgiuration file that you adopted and ensure that the configuration is correct. For more information, see Pulling the configuration from a director deployment . This command produces the difference between both ini configuration files. Procedure Patch the OpenStackControlPlane CR to deploy the Image service with a Red Hat Ceph Storage back end: 4.5.5. Verifying the Image service adoption Verify that you adopted the Image Service (glance) to the Red Hat OpenStack Services on OpenShift (RHOSO) 18.0 deployment. Procedure Test the Image service from the Red Hat OpenStack Platform CLI. You can compare and ensure that the configuration is applied to the Image service pods: If no line appears, then the configuration is correct. Inspect the resulting Image service pods: If you use a Red Hat Ceph Storage back end, ensure that the Red Hat Ceph Storage secrets are mounted: Check that the service is active, and that the endpoints are updated in the RHOSP CLI: Check that the images that you previously listed in the source cloud are available in the adopted service: 4.6. Adopting the Placement service To adopt the Placement service, you patch an existing OpenStackControlPlane custom resource (CR) that has the Placement service disabled. The patch starts the service with the configuration parameters that are provided by the Red Hat OpenStack Platform (RHOSP) environment. Prerequisites You import your databases to MariaDB instances on the control plane. For more information, see Migrating databases to MariaDB instances . You adopt the Identity service (keystone). For more information, see Adopting the Identity service . Procedure Patch the OpenStackControlPlane CR to deploy the Placement service: Verification Check that the Placement service endpoints are defined and pointing to the control plane FQDNs, and that the Placement API responds: 4.7. Adopting the Compute service To adopt the Compute service (nova), you patch an existing OpenStackControlPlane custom resource (CR) where the Compute service is disabled. The patch starts the service with the configuration parameters that are provided by the Red Hat OpenStack Platform (RHOSP) environment. The following procedure describes a single-cell setup. 
Prerequisites You have completed the adoption steps. You have defined the following shell variables. Replace the following example values with the values that are correct for your environment: Procedure Patch the OpenStackControlPlane CR to deploy the Compute service: Note This procedure assumes that Compute service metadata is deployed on the top level and not on each cell level. If the RHOSP deployment has a per-cell metadata deployment, adjust the following patch as needed. You cannot run the metadata service in cell0 . USD oc patch openstackcontrolplane openstack -n openstack --type=merge --patch ' spec: nova: enabled: true apiOverride: route: {} template: secret: osp-secret apiServiceTemplate: override: service: internal: metadata: annotations: metallb.universe.tf/address-pool: internalapi metallb.universe.tf/allow-shared-ip: internalapi metallb.universe.tf/loadBalancerIPs: 172.17.0.80 spec: type: LoadBalancer customServiceConfig: | [workarounds] disable_compute_service_check_for_ffu=true metadataServiceTemplate: enabled: true # deploy single nova metadata on the top level override: service: metadata: annotations: metallb.universe.tf/address-pool: internalapi metallb.universe.tf/allow-shared-ip: internalapi metallb.universe.tf/loadBalancerIPs: 172.17.0.80 spec: type: LoadBalancer customServiceConfig: | [workarounds] disable_compute_service_check_for_ffu=true schedulerServiceTemplate: customServiceConfig: | [workarounds] disable_compute_service_check_for_ffu=true cellTemplates: cell0: conductorServiceTemplate: customServiceConfig: | [workarounds] disable_compute_service_check_for_ffu=true cell1: metadataServiceTemplate: enabled: false # enable here to run it in a cell instead override: service: metadata: annotations: metallb.universe.tf/address-pool: internalapi metallb.universe.tf/allow-shared-ip: internalapi metallb.universe.tf/loadBalancerIPs: 172.17.0.80 spec: type: LoadBalancer customServiceConfig: | [workarounds] disable_compute_service_check_for_ffu=true conductorServiceTemplate: customServiceConfig: | [workarounds] disable_compute_service_check_for_ffu=true ' If you are adopting the Compute service with the Bare Metal Provisioning service (ironic), append the following novaComputeTemplates in the cell1 section of the Compute service CR patch: cell1: novaComputeTemplates: standalone: customServiceConfig: | [DEFAULT] host = <hostname> [workarounds] disable_compute_service_check_for_ffu=true Replace <hostname> with the hostname of the node that is running the ironic Compute driver in the source cloud. Wait for the CRs for the Compute control plane services to be ready: Note The local Conductor services are started for each cell, while the superconductor runs in cell0 . Note that disable_compute_service_check_for_ffu is mandatory for all imported Compute services until the external data plane is imported, and until Compute services are fast-forward upgraded. For more information, see Adopting Compute services to the RHOSO data plane and Performing a fast-forward upgrade on Compute services . Verification Check that Compute service endpoints are defined and pointing to the control plane FQDNs, and that the Nova API responds: Compare the outputs with the topology-specific configuration in Retrieving topology-specific service configuration . Query the superconductor to check that cell1 exists, and compare it to pre-adoption values: The following changes are expected: The cell1 nova database and username become nova_cell1 . The default cell is renamed to cell1 . 
RabbitMQ transport URL no longer uses guest . Note At this point, the Compute service control plane services do not control the existing Compute service workloads. The control plane manages the data plane only after the data adoption process is completed. For more information, see Adopting Compute services to the RHOSO data plane . Important To import external Compute services to the RHOSO data plane, you must upgrade them first. For more information, see Adopting Compute services to the RHOSO data plane , and Performing a fast-forward upgrade on Compute services . 4.8. Adopting the Block Storage service To adopt a director-deployed Block Storage service (cinder), create the manifest based on the existing cinder.conf file, deploy the Block Storage service, and validate the new deployment. Prerequisites You have reviewed the Block Storage service limitations. For more information, see Limitations for adopting the Block Storage service . You have planned the placement of the Block Storage services. You have prepared the Red Hat OpenShift Container Platform (RHOCP) nodes where the volume and backup services run. For more information, see RHOCP preparation for Block Storage service adoption . The Block Storage service (cinder) is stopped. The service databases are imported into the control plane MariaDB. The Identity service (keystone) and Key Manager service (barbican) are adopted. The Storage network is correctly configured on the RHOCP cluster. The contents of cinder.conf file. Download the file so that you can access it locally: Procedure Create a new file, for example, cinder_api.patch , and apply the configuration: Replace <patch_name> with the name of your patch file. The following example shows a cinder_api.patch file: spec: extraMounts: - extraVol: - extraVolType: Ceph mounts: - mountPath: /etc/ceph name: ceph readOnly: true propagation: - CinderVolume - CinderBackup - Glance volumes: - name: ceph projected: sources: - secret: name: ceph-conf-files cinder: enabled: true apiOverride: route: {} template: databaseInstance: openstack databaseAccount: cinder secret: osp-secret cinderAPI: override: service: internal: metadata: annotations: metallb.universe.tf/address-pool: internalapi metallb.universe.tf/allow-shared-ip: internalapi metallb.universe.tf/loadBalancerIPs: 172.17.0.80 spec: type: LoadBalancer replicas: 1 customServiceConfig: | [DEFAULT] default_volume_type=tripleo cinderScheduler: replicas: 0 cinderBackup: networkAttachments: - storage replicas: 0 cinderVolumes: ceph: networkAttachments: - storage replicas: 0 Retrieve the list of the scheduler and backup services: Remove services for hosts that are in the down state: Replace <service_binary> with the name of the binary, for example, cinder-backup . Replace <service_host> with the host name, for example, cinder-backup-0 . Deploy the scheduler, backup and volume services. Create another file, for example, cinder_services.patch , and apply the configuration. Replace <patch_name> with the name of your patch file. 
The following example shows a cinder_services.patch file for an RBD deployment: spec: cinder: enabled: true template: cinderScheduler: replicas: 1 cinderBackup: networkAttachments: - storage replicas: 1 customServiceConfig: | [DEFAULT] backup_driver=cinder.backup.drivers.ceph.CephBackupDriver backup_ceph_conf=/etc/ceph/ceph.conf backup_ceph_user=openstack backup_ceph_pool=backups cinderVolumes: ceph: networkAttachments: - storage replicas: 1 customServiceConfig: | [tripleo_ceph] backend_host=hostgroup volume_backend_name=tripleo_ceph volume_driver=cinder.volume.drivers.rbd.RBDDriver rbd_ceph_conf=/etc/ceph/ceph.conf rbd_user=openstack rbd_pool=volumes rbd_flatten_volume_from_snapshot=False report_discard_supported=True Check if all the services are up and running. Apply the DB data migrations: Note You are not required to run the data migrations at this step, but you must run them before the upgrade. However, for adoption, you can run the migrations now to ensure that there are no issues before you run production workloads on the deployment. Verification Ensure that the openstack alias is defined: Confirm that Block Storage service endpoints are defined and pointing to the control plane FQDNs: Replace <endpoint> with the name of the endpoint that you want to confirm. Confirm that the Block Storage services are running: Note Cinder API services do not appear in the list. However, if you get a response from the openstack volume service list command, that means at least one of the cinder API services is running. Confirm that you have your volume types, volumes, snapshots, and backups: To confirm that the configuration is working, perform the following steps: Create a volume from an image to check that the connection to Image Service (glance) is working: Back up the attached volume: Replace <backup_name> with the name of your new backup location. Note You do not boot a Compute service (nova) instance by using the new volume from image or try to detach the volume because the Compute service and the Block Storage service are still not connected. 4.9. Adopting the Dashboard service To adopt the Dashboard service (horizon), you patch an existing OpenStackControlPlane custom resource (CR) that has the Dashboard service disabled. The patch starts the service with the configuration parameters that are provided by the Red Hat OpenStack Platform environment. Prerequisites You adopted Memcached. For more information, see Deploying back-end services . You adopted the Identity service (keystone). For more information, see Adopting the Identity service . Procedure Patch the OpenStackControlPlane CR to deploy the Dashboard service: Verification Verify that the Dashboard service instance is successfully deployed and ready: Confirm that the Dashboard service is reachable and returns a 200 status code: 4.10. Adopting the Shared File Systems service The Shared File Systems service (manila) in Red Hat OpenStack Services on OpenShift (RHOSO) provides a self-service API to create and manage file shares. File shares (or "shares"), are built for concurrent read/write access from multiple clients. This makes the Shared File Systems service essential in cloud environments that require a ReadWriteMany persistent storage. File shares in RHOSO require network access. Ensure that the networking in the Red Hat OpenStack Platform (RHOSP) 17.1 environment matches the network plans for your new cloud after adoption. This ensures that tenant workloads remain connected to storage during the adoption process. 
The Shared File Systems service control plane services are not in the data path. Shutting down the API, scheduler, and share manager services do not impact access to existing shared file systems. Typically, storage and storage device management are separate networks. Shared File Systems services only need access to the storage device management network. For example, if you used a Red Hat Ceph Storage cluster in the deployment, the "storage" network refers to the Red Hat Ceph Storage cluster's public network, and the Shared File Systems service's share manager service needs to be able to reach it. The Shared File Systems service supports the following storage networking scenarios: You can directly control the networking for your respective file shares. The RHOSO administrator configures the storage networking. 4.10.1. Guidelines for preparing the Shared File Systems service configuration To deploy Shared File Systems service (manila) on the control plane, you must copy the original configuration file from the Red Hat OpenStack Platform 17.1 deployment. You must review the content in the file to make sure you are adopting the correct configuration for Red Hat OpenStack Services on OpenShift (RHOSO) 18.0. Not all of the content needs to be brought into the new cloud environment. Review the following guidelines for preparing your Shared File Systems service configuration file for adoption: The Shared File Systems service operator sets up the following configurations and can be ignored: Database-related configuration ( [database] ) Service authentication ( auth_strategy , [keystone_authtoken] ) Message bus configuration ( transport_url , control_exchange ) The default paste config ( api_paste_config ) Inter-service communication configuration ( [neutron] , [nova] , [cinder] , [glance] [oslo_messaging_*] ) Ignore the osapi_share_listen configuration. In Red Hat OpenStack Services on OpenShift (RHOSO) 18.0, you rely on Red Hat OpenShift Container Platform (RHOCP) routes and ingress. Check for policy overrides. In RHOSO 18.0, the Shared File Systems service ships with a secure default Role-based access control (RBAC), and overrides might not be necessary. If a custom policy is necessary, you must provide it as a ConfigMap . The following example spec illustrates how you can set up a ConfigMap called manila-policy with the contents of a file called policy.yaml : spec: manila: enabled: true template: manilaAPI: customServiceConfig: | [oslo_policy] policy_file=/etc/manila/policy.yaml extraMounts: - extraVol: - extraVolType: Undefined mounts: - mountPath: /etc/manila/ name: policy readOnly: true propagation: - ManilaAPI volumes: - name: policy projected: sources: - configMap: name: manila-policy items: - key: policy path: policy.yaml The value of the host option under the [DEFAULT] section must be hostgroup . To run the Shared File Systems service API service, you must add the enabled_share_protocols option to the customServiceConfig section in manila: template: manilaAPI . If you have scheduler overrides, add them to the customServiceConfig section in manila: template: manilaScheduler . If you have multiple storage back-end drivers configured with RHOSP 17.1, you need to split them up when deploying RHOSO 18.0. Each storage back-end driver needs to use its own instance of the manila-share service. 
If a storage back-end driver needs a custom container image, find it in the Red Hat Ecosystem Catalog , and create or modify an OpenStackVersion custom resource (CR) to specify the custom image using the same custom name . The following example shows a manila spec from the OpenStackControlPlane CR that includes multiple storage back-end drivers, where only one is using a custom container image: spec: manila: enabled: true template: manilaAPI: customServiceConfig: | [DEFAULT] enabled_share_protocols = nfs replicas: 3 manilaScheduler: replicas: 3 manilaShares: netapp: customServiceConfig: | [DEFAULT] debug = true enabled_share_backends = netapp host = hostgroup [netapp] driver_handles_share_servers = False share_backend_name = netapp share_driver = manila.share.drivers.netapp.common.NetAppDriver netapp_storage_family = ontap_cluster netapp_transport_type = http replicas: 1 pure: customServiceConfig: | [DEFAULT] debug = true enabled_share_backends=pure-1 host = hostgroup [pure-1] driver_handles_share_servers = False share_backend_name = pure-1 share_driver = manila.share.drivers.purestorage.flashblade.FlashBladeShareDriver flashblade_mgmt_vip = 203.0.113.15 flashblade_data_vip = 203.0.10.14 replicas: 1 The following example shows the OpenStackVersion CR that defines the custom container image: apiVersion: core.openstack.org/v1beta1 kind: OpenStackVersion metadata: name: openstack spec: customContainerImages: cinderVolumeImages: pure: registry.connect.redhat.com/purestorage/openstack-manila-share-pure-rhosp-18-0 The name of the OpenStackVersion CR must match the name of your OpenStackControlPlane CR. If you are providing sensitive information, such as passwords, hostnames, and usernames, use RHOCP secrets, and the customServiceConfigSecrets key. You can use customConfigSecrets in any service. If you use third party storage that requires credentials, create a secret that is referenced in the manila CR/patch file by using the customServiceConfigSecrets key. For example: Create a file that includes the secrets, for example, netapp_secrets.conf : USD cat << __EOF__ > ~/netapp_secrets.conf [netapp] netapp_server_hostname = 203.0.113.10 netapp_login = fancy_netapp_user netapp_password = secret_netapp_password netapp_vserver = mydatavserver __EOF__ Replace <secret> with the name of the file that includes your secrets, for example, netapp_secrets.conf . Add the secret to any Shared File Systems service file in the customServiceConfigSecrets section. The following example adds the osp-secret-manila-netapp secret to the manilaShares service: spec: manila: enabled: true template: < . . . > manilaShares: netapp: customServiceConfig: | [DEFAULT] debug = true enabled_share_backends = netapp host = hostgroup [netapp] driver_handles_share_servers = False share_backend_name = netapp share_driver = manila.share.drivers.netapp.common.NetAppDriver netapp_storage_family = ontap_cluster netapp_transport_type = http customServiceConfigSecrets: - osp-secret-manila-netapp replicas: 1 < . . . > 4.10.2. Deploying the Shared File Systems service on the control plane Copy the Shared File Systems service (manila) configuration from the Red Hat OpenStack Platform (RHOSP) 17.1 deployment, and then deploy the Shared File Systems service on the control plane. Prerequisites The Shared File Systems service systemd services such as api , cron , and scheduler are stopped. For more information, see Stopping Red Hat OpenStack Platform services . 
If the deployment uses CephFS through NFS as a storage back end, the Pacemaker ordering and collocation constraints are adjusted. For more information, see Stopping Red Hat OpenStack Platform services . The Shared File Systems service Pacemaker service ( openstack-manila-share ) is stopped. For more information, see Stopping Red Hat OpenStack Platform services . The database migration is complete. For more information, see Migrating databases to MariaDB instances . The Red Hat OpenShift Container Platform (RHOCP) nodes where the manila-share service is to be deployed can reach the management network that the storage system is in. If the deployment uses CephFS through NFS as a storage back end, a new clustered Ceph NFS service is deployed on the Red Hat Ceph Storage cluster with the help of Ceph orchestrator. For more information, see Creating a Ceph NFS cluster . Services such as the Identity service (keystone) and memcached are available prior to adopting the Shared File Systems services. If you enabled tenant-driven networking by setting driver_handles_share_servers=True , the Networking service (neutron) is deployed. The CONTROLLER1_SSH environment variable is defined and points to the RHOSP Controller node. Replace the following example values with values that are correct for your environment: Procedure Copy the configuration file from RHOSP 17.1 for reference: Review the configuration file for configuration changes that were made since RHOSP 17.1. For more information on preparing this file for Red Hat OpenStack Services on OpenShift (RHOSO), see Guidelines for preparing the Shared File Systems service configuration . Create a patch file for the OpenStackControlPlane CR to deploy the Shared File Systems service. The following example manila.patch file uses native CephFS: USD cat << __EOF__ > ~/manila.patch spec: manila: enabled: true apiOverride: route: {} template: databaseInstance: openstack databaseAccount: manila secret: osp-secret manilaAPI: replicas: 3 1 customServiceConfig: | [DEFAULT] enabled_share_protocols = cephfs override: service: internal: metadata: annotations: metallb.universe.tf/address-pool: internalapi metallb.universe.tf/allow-shared-ip: internalapi metallb.universe.tf/loadBalancerIPs: 172.17.0.80 spec: type: LoadBalancer manilaScheduler: replicas: 3 2 manilaShares: cephfs: replicas: 1 3 customServiceConfig: | [DEFAULT] enabled_share_backends = tripleo_ceph host = hostgroup [cephfs] driver_handles_share_servers=False share_backend_name=cephfs 4 share_driver=manila.share.drivers.cephfs.driver.CephFSDriver cephfs_conf_path=/etc/ceph/ceph.conf cephfs_auth_id=openstack cephfs_cluster_name=ceph cephfs_volume_mode=0755 cephfs_protocol_helper_type=CEPHFS networkAttachments: 5 - storage extraMounts: 6 - name: v1 region: r1 extraVol: - propagation: - ManilaShare extraVolType: Ceph volumes: - name: ceph secret: secretName: ceph-conf-files mounts: - name: ceph mountPath: "/etc/ceph" readOnly: true __EOF__ 1 Set the replica count of the manilaAPI service to 3. 2 Set the replica count of the manilaScheduler service to 3. 3 Set the replica count of the manilaShares service to 1. 4 Ensure that the names of the back ends ( share_backend_name ) are the same as they were in RHOSP 17.1. 5 Ensure that the appropriate storage management network is specified in the networkAttachments section. For example, the manilaShares instance with the CephFS back-end driver is connected to the storage network. 6 If you need to add extra files to any of the services, you can use extraMounts . 
For example, when using Red Hat Ceph Storage, you can add the Shared File Systems service Ceph user's keyring file as well as the ceph.conf configuration file. Patch the OpenStackControlPlane CR: Replace <manila.patch> with the name of your patch file. Verification Inspect the resulting Shared File Systems service pods: Check that the Shared File Systems API service is registered in the Identity service (keystone): Test the health of the service: Check existing workloads: 4.10.3. Decommissioning the Red Hat OpenStack Platform standalone Ceph NFS service Important This content in this section is available in this release as a Technology Preview , and therefore is not fully supported by Red Hat. It should only be used for testing, and should not be deployed in a production environment. For more information, see Technology Preview . If your deployment uses CephFS through NFS, you must decommission the Red Hat OpenStack Platform(RHOSP) standalone NFS service. Since future software upgrades do not support the NFS service, ensure that the decommissioning period is short. Prerequisites You identified the new export locations for your existing shares by querying the Shared File Systems API. You unmounted and remounted the shared file systems on each client to stop using the NFS server. If you are consuming the Shared File Systems service shares with the Shared File Systems service CSI plugin for Red Hat OpenShift Container Platform (RHOCP), you migrated the shares by scaling down the application pods and scaling them back up. Note Clients that are creating new workloads cannot use share exports through the NFS service. The Shared File Systems service no longer communicates with the NFS service, and cannot apply or alter export rules on the NFS service. Procedure Remove the cephfs_ganesha_server_ip option from the manila-share service configuration: Note This restarts the manila-share process and removes the export locations that applied to the NFS service from all the shares. USD cat << __EOF__ > ~/manila.patch spec: manila: enabled: true apiOverride: route: {} template: manilaShares: cephfs: replicas: 1 customServiceConfig: | [DEFAULT] enabled_share_backends = cephfs host = hostgroup [cephfs] driver_handles_share_servers=False share_backend_name=cephfs share_driver=manila.share.drivers.cephfs.driver.CephFSDriver cephfs_conf_path=/etc/ceph/ceph.conf cephfs_auth_id=openstack cephfs_cluster_name=ceph cephfs_protocol_helper_type=NFS cephfs_nfs_cluster_id=cephfs networkAttachments: - storage __EOF__ Patch the OpenStackControlPlane custom resource: Replace <manila.patch> with the name of your patch file. Clean up the standalone ceph-nfs service from the RHOSP control plane nodes by disabling and deleting the Pacemaker resources associated with the service: Important You can defer this step until after RHOSO 18.0 is operational. During this time, you cannot decommission the Controller nodes. Replace <VIP> with the IP address assigned to the ceph-nfs service in your environment. 4.11. Adopting the Bare Metal Provisioning service Important This content in this section is available in this release as a Technology Preview , and therefore is not fully supported by Red Hat. It should only be used for testing, and should not be deployed in a production environment. For more information, see Technology Preview . Review information about your Bare Metal Provisioning service (ironic) configuration and then adopt the Bare Metal Provisioning service to the Red Hat OpenStack Services on OpenShift control plane. 
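Before you adopt the service, it is useful to pull the current ironic.conf from a Controller node so that you can review it against the configuration guidance that follows. This is a sketch; the exact path is an assumption based on the puppet-generated configuration locations that this guide uses for other services:

$CONTROLLER1_SSH cat /var/lib/config-data/puppet-generated/ironic/etc/ironic/ironic.conf > ironic.conf

Keep this copy at hand; several of the parameters discussed in the next section must be carried over from it.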
4.11.1. Bare Metal Provisioning service configurations Important This content in this section is available in this release as a Technology Preview , and therefore is not fully supported by Red Hat. It should only be used for testing, and should not be deployed in a production environment. For more information, see Technology Preview . You configure the Bare Metal Provisioning service (ironic) by using configuration snippets. For more information about configuring the control plane with the Bare Metal Provisioning service, see Customizing the Red Hat OpenStack Services on OpenShift deployment . Some Bare Metal Provisioning service configuration is overridden in director, for example, PXE Loader file names are often overridden at intermediate layers. You must pay attention to the settings you apply in your Red Hat OpenStack Services on OpenShift (RHOSO) deployment. The ironic-operator applies a reasonable working default configuration, but if you override them with your prior configuration, your experience might not be ideal or your new Bare Metal Provisioning service fails to operate. Similarly, additional configuration might be necessary, for example, if you enable and use additional hardware types in your ironic.conf file. The model of reasonable defaults includes commonly used hardware-types and driver interfaces. For example, the redfish-virtual-media boot interface and the ramdisk deploy interface are enabled by default. If you add new bare metal nodes after the adoption is complete, the driver interface selection occurs based on the order of precedence in the configuration if you do not explicitly set it on the node creation request or as an established default in the ironic.conf file. Some configuration parameters do not need to be set on an individual node level, for example, network UUID values, or they are centrally configured in the ironic.conf file, as the setting controls security behavior. It is critical that you maintain the following parameters that you configured and formatted as [section] and parameter name from the prior deployment to the new deployment. These parameters that govern the underlying behavior and values in the configuration would have used specific values if set. [neutron]cleaning_network [neutron]provisioning_network [neutron]rescuing_network [neutron]inspection_network [conductor]automated_clean [deploy]erase_devices_priority [deploy]erase_devices_metadata_priority [conductor]force_power_state_during_sync You can set the following parameters individually on a node. However, you might choose to use embedded configuration options to avoid the need to set the parameters individually when creating or managing bare metal nodes. Check your prior ironic.conf file for these parameters, and if set, apply a specific override configuration. [conductor]bootloader [conductor]rescue_ramdisk [conductor]rescue_kernel [conductor]deploy_kernel [conductor]deploy_ramdisk The instances of kernel_append_params , formerly pxe_append_params in the [pxe] and [redfish] configuration sections, are used to apply boot time options like "console" for the deployment ramdisk and as such often must be changed. Warning You cannot migrate hardware types that are set with the ironic.conf file enabled_hardware_types parameter, and hardware type driver interfaces starting with staging- into the adopted configuration. 4.11.2. 
Deploying the Bare Metal Provisioning service Important This content in this section is available in this release as a Technology Preview , and therefore is not fully supported by Red Hat. It should only be used for testing, and should not be deployed in a production environment. For more information, see Technology Preview . To deploy the Bare Metal Provisioning service (ironic), you patch an existing OpenStackControlPlane custom resource (CR) that has the Bare Metal Provisioning service disabled. The ironic-operator applies the configuration and starts the Bare Metal Provisioning services. After the services are running, the Bare Metal Provisioning service automatically begins polling the power state of the bare metal nodes that it manages. Note By default, newer versions of the Bare Metal Provisioning service contain a more restrictive access control model while also becoming multi-tenant aware. As a result, bare metal nodes might be missing from a openstack baremetal node list command after you adopt the Bare Metal Provisioning service. Your nodes are not deleted. You must set the owner field on each bare metal node due to the increased access restrictions in the role-based access control (RBAC) model. Because this involves access controls and the model of use can be site specific, you should identify which project owns the bare metal nodes. Prerequisites You have imported the service databases into the control plane MariaDB. The Identity service (keystone), Networking service (neutron), Image Service (glance), and Block Storage service (cinder) are operational. Note If you use the Bare Metal Provisioning service in a Bare Metal as a Service configuration, you have not yet adopted the Compute service (nova). For the Bare Metal Provisioning service conductor services, the services must be able to reach Baseboard Management Controllers of hardware that is configured to be managed by the Bare Metal Provisioning service. If this hardware is unreachable, the nodes might enter "maintenance" state and be unavailable until connectivity is restored later. You have downloaded the ironic.conf file locally: Note This configuration file must come from one of the Controller nodes and not a director undercloud node. The director undercloud node operates with different configuration that does not apply when you adopt the Overcloud Ironic deployment. If you are adopting the Ironic Inspector service, you need the value of the IronicInspectorSubnets director parameter. Use the same values to populate the dhcpRanges parameter in the RHOSO environment. You have defined the following shell variables. 
Replace the following example values with values that apply to your environment: Procedure Patch the OpenStackControlPlane custom resource (CR) to deploy the Bare Metal Provisioning service: USD oc patch openstackcontrolplane openstack -n openstack --type=merge --patch ' spec: ironic: enabled: true template: rpcTransport: oslo databaseInstance: openstack ironicAPI: replicas: 1 override: service: internal: metadata: annotations: metallb.universe.tf/address-pool: internalapi metallb.universe.tf/allow-shared-ip: internalapi metallb.universe.tf/loadBalancerIPs: 172.17.0.80 spec: type: LoadBalancer ironicConductors: - replicas: 1 networkAttachments: - baremetal provisionNetwork: baremetal storageRequest: 10G customServiceConfig: | [neutron] cleaning_network=<cleaning network uuid> provisioning_network=<provisioning network uuid> rescuing_network=<rescuing network uuid> inspection_network=<introspection network uuid> [conductor] automated_clean=true ironicInspector: replicas: 1 inspectionNetwork: baremetal networkAttachments: - baremetal dhcpRanges: - name: inspector-0 cidr: 172.20.1.0/24 start: 172.20.1.190 end: 172.20.1.199 gateway: 172.20.1.1 serviceUser: ironic-inspector databaseAccount: ironic-inspector passwordSelectors: database: IronicInspectorDatabasePassword service: IronicInspectorPassword ironicNeutronAgent: replicas: 1 rabbitMqClusterName: rabbitmq secret: osp-secret ' Wait for the Bare Metal Provisioning service control plane services CRs to become ready: Verify that the individual services are ready: Update the DNS Nameservers on the provisoning, cleaning, and rescue networks: Note For name resolution to work for Bare Metal Provisioning service operations, you must set the DNS nameserver to use the internal DNS servers in the RHOSO control plane: Verify that no Bare Metal Provisioning service nodes are missing from the node list: Important If the openstack baremetal node list command output reports an incorrect power status, wait a few minutes and re-run the command to see if the output syncs with the actual state of the hardware being managed. The time required for the Bare Metal Provisioning service to review and reconcile the power state of bare metal nodes depends on the number of operating conductors through the replicas parameter and which are present in the Bare Metal Provisioning service deployment being adopted. If any Bare Metal Provisioning service nodes are missing from the openstack baremetal node list command, temporarily disable the new RBAC policy to see the nodes again: USD oc patch openstackcontrolplane openstack -n openstack --type=merge --patch ' spec: ironic: enabled: true template: databaseInstance: openstack ironicAPI: replicas: 1 customServiceConfig: | [oslo_policy] enforce_scope=false enforce_new_defaults=false ' After you set the owner field on the bare metal nodes, you can re-enable RBAC by removing the customServiceConfig section or by setting the following values to true : After this configuration is applied, the operator restarts the Ironic API service and disables the new RBAC policy that is enabled by default. 
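Before you query the node list again, you can confirm that the Ironic API pods have finished restarting with the relaxed policy. A quick check, sketched here with a simple filter rather than an exact label selector:

oc get pods -n openstack | grep ironic

Wait until the ironic API pods report Running and Ready before you list the bare metal nodes.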
After the RBAC policy is disabled, you can view bare metal nodes without an owner field: Assign all bare metal nodes with no owner to a new project, for example, the admin project: Re-apply the default RBAC: USD oc patch openstackcontrolplane openstack -n openstack --type=merge --patch ' spec: ironic: enabled: true template: databaseInstance: openstack ironicAPI: replicas: 1 customServiceConfig: | [oslo_policy] enforce_scope=true enforce_new_defaults=true ' Verification Verify the list of endpoints: Verify the list of bare metal nodes: 4.12. Adopting the Orchestration service To adopt the Orchestration service (heat), you patch an existing OpenStackControlPlane custom resource (CR), where the Orchestration service is disabled. The patch starts the service with the configuration parameters that are provided by the Red Hat OpenStack Platform (RHOSP) environment. After you complete the adoption process, you have CRs for Heat , HeatAPI , HeatEngine , and HeatCFNAPI , and endpoints within the Identity service (keystone) to facilitate these services. Prerequisites The source director environment is running. The target Red Hat OpenShift Container Platform (RHOCP) environment is running. You adopted MariaDB and the Identity service. If your existing Orchestration service stacks contain resources from other services such as Networking service (neutron), Compute service (nova), Object Storage service (swift), and so on, adopt those sevices before adopting the Orchestration service. Procedure Retrieve the existing auth_encryption_key and service passwords. You use these passwords to patch the osp-secret . In the following example, the auth_encryption_key is used as HeatAuthEncryptionKey and the service password is used as HeatPassword : Log in to a Controller node and verify the auth_encryption_key value in use: Encode the password to Base64 format: Patch the osp-secret to update the HeatAuthEncryptionKey and HeatPassword parameters. These values must match the values in the director Orchestration service configuration: Patch the OpenStackControlPlane CR to deploy the Orchestration service: Verification Ensure that the statuses of all the CRs are Setup complete : Check that the Orchestration service is registered in the Identity service: Check that the Orchestration service engine services are running: Verify that you can see your Orchestration service stacks: 4.13. Adopting Telemetry services To adopt Telemetry services, you patch an existing OpenStackControlPlane custom resource (CR) that has Telemetry services disabled to start the service with the configuration parameters that are provided by the Red Hat OpenStack Platform (RHOSP) 17.1 environment. If you adopt Telemetry services, the observability solution that is used in the RHOSP 17.1 environment, Service Telemetry Framework, is removed from the cluster. The new solution is deployed in the Red Hat OpenStack Services on OpenShift (RHOSO) environment, allowing for metrics, and optionally logs, to be retrieved and stored in the new back ends. You cannot automatically migrate old data because different back ends are used. Metrics and logs are considered short-lived data and are not intended to be migrated to the RHOSO environment. For information about adopting legacy autoscaling stack templates to the RHOSO environment, see Adopting Autoscaling services . Prerequisites The director environment is running (the source cloud). The Single Node OpenShift or OpenShift Local is running in the Red Hat OpenShift Container Platform (RHOCP) cluster. 
adoption steps are completed. Procedure Patch the OpenStackControlPlane CR to deploy cluster-observability-operator : Wait for the installation to succeed: Patch the OpenStackControlPlane CR to deploy Ceilometer services: Enable the metrics storage back end: Verification Verify that the alertmanager and prometheus pods are available: Inspect the resulting Ceilometer pods: Inspect enabled pollsters: Optional: Override default pollsters according to the requirements of your environment: steps Optional: Patch the OpenStackControlPlane CR to include logging : 4.14. Adopting autoscaling services To adopt services that enable autoscaling, you patch an existing OpenStackControlPlane custom resource (CR) where the Alarming services (aodh) are disabled. The patch starts the service with the configuration parameters that are provided by the Red Hat OpenStack Platform environment. Prerequisites The source director environment is running. A Single Node OpenShift or OpenShift Local is running in the Red Hat OpenShift Container Platform (RHOCP) cluster. You have adopted the following services: MariaDB Identity service (keystone) Orchestration service (heat) Telemetry service Procedure Patch the OpenStackControlPlane CR to deploy the autoscaling services: Inspect the aodh pods: Check whether the aodh API service is registered in the Identity service: Optional: Create aodh alarms with the PrometheusAlarm alarm type: Note You must use the PrometheusAlarm alarm type instead of GnocchiAggregationByResourcesAlarm . Verify that the alarm is enabled: 4.15. Pulling the configuration from a director deployment Before you start the data plane adoption workflow, back up the configuration from the Red Hat OpenStack Platform (RHOSP) services and director. You can then use the files during the configuration of the adopted services to ensure that nothing is missed or misconfigured. Prerequisites The os-diff tool is installed and configured. For more information, see Comparing configuration files between deployments . Procedure Update your ssh parameters according to your environment in the os-diff.cfg . Os-diff uses the ssh parameters to connect to your director node, and then query and download the configuration files: Ensure that the ssh command you provide in ssh_cmd parameter is correct and includes key authentication. Enable the services that you want to include in the /etc/os-diff/config.yaml file, and disable the services that you want to exclude from the file. Ensure that you have the correct permissions to edit the file: The following example enables the default Identity service (keystone) to be included in the /etc/os-diff/config.yaml file: # service name and file location services: # Service name keystone: # Bool to enable/disable a service (not implemented yet) enable: true # Pod name, in both OCP and podman context. # It could be strict match or will only just grep the podman_name # and work with all the pods which matched with pod_name. # To enable/disable use strict_pod_name_match: true/false podman_name: keystone pod_name: keystone container_name: keystone-api # pod options # strict match for getting pod id in TripleO and podman context strict_pod_name_match: false # Path of the config files you want to analyze. 
# It could be whatever path you want: # /etc/<service_name> or /etc or /usr/share/<something> or even / # @TODO: need to implement loop over path to support multiple paths such as: # - /etc # - /usr/share path: - /etc/ - /etc/keystone - /etc/keystone/keystone.conf - /etc/keystone/logging.conf Repeat this step for each RHOSP service that you want to disable or enable. If you use non-containerized services, such as the ovs-external-ids , pull the configuration or the command output. For example: Note You must correctly configure an SSH configuration file or equivalent for non-standard services, such as OVS. The ovs_external_ids service does not run in a container, and the OVS data is stored on each host of your cloud, for example, controller_1/controller_2/ , and so on. 1 The list of hosts, for example, compute-1 , compute-2 . 2 The command that runs against the hosts. 3 Os-diff gets the output of the command and stores the output in a file that is specified by the key path. 4 Provides a mapping between, in this example, the data plane custom resource definition and the ovs-vsctl output. 5 The edpm_ovn_bridge_mappings variable must be a list of strings, for example, ["datacentre:br-ex"] . Compare the values: For example, to check the /etc/yum.conf on every host, you must put the following statement in the config.yaml file. The following example uses a file called yum_config : Pull the configuration: Note The following command pulls all the configuration files that are included in the /etc/os-diff/config.yaml file. You can configure os-diff to update this file automatically according to your running environment by using the --update or --update-only option. These options set the podman information into the config.yaml for all running containers. The podman information can be useful later, when all the Red Hat OpenStack Platform services are turned off. Note that when the config.yaml file is populated automatically you must provide the configuration paths manually for each service. The configuration is pulled and stored by default in the following directory: Verification Verify that you have a directory for each service configuration in your local path: 4.16. Rolling back the control plane adoption If you encountered a problem and are unable to complete the adoption of the Red Hat OpenStack Platform (RHOSP) control plane services, you can roll back the control plane adoption. Important Do not attempt the rollback if you altered the data plane nodes in any way. You can only roll back the control plane adoption if you altered the control plane. During the control plane adoption, services on the RHOSP control plane are stopped but not removed. The databases on the RHOSP control plane are not edited during the adoption procedure. The Red Hat OpenStack Services on OpenShift (RHOSO) control plane receives a copy of the original control plane databases. The rollback procedure assumes that the data plane has not yet been modified by the adoption procedure, and it is still connected to the RHOSP control plane. The rollback procedure consists of the following steps: Restoring the functionality of the RHOSP control plane. Removing the partially or fully deployed RHOSO control plane. 
Procedure To restore the source cloud to a working state, start the RHOSP control plane services that you previously stopped during the adoption procedure: If the Ceph NFS service is running on the deployment as a Shared File Systems service (manila) back end, you must restore the Pacemaker order and colocation constraints for the openstack-manila-share service: Verify that the source cloud is operational again, for example, you can run openstack CLI commands such as openstack server list , or check that you can access the Dashboard service (horizon). Remove the partially or fully deployed control plane so that you can attempt the adoption again later: Note After you restore the RHOSP control plane services, their internal state might have changed. Before you retry the adoption procedure, verify that all the control plane resources are removed and that there are no leftovers which could affect the following adoption procedure attempt. You must not use previously created copies of the database contents in another adoption attempt. You must make a new copy of the latest state of the original source database contents. For more information about making new copies of the database, see Migrating databases to the control plane .
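A minimal sketch of the removal and the follow-up check, assuming the control plane CR is named openstack in the openstack namespace as in the rest of this guide; depending on how far the adoption progressed, additional leftovers such as secrets or persistent volume claims might also need to be cleaned up manually:

oc delete openstackcontrolplane openstack -n openstack --wait=true
oc get openstackcontrolplane -n openstack
oc get pods -n openstack

The last two commands should eventually return no control plane resources and no service pods; only then retry the adoption with a fresh copy of the source databases.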
[ "oc apply -f - <<EOF apiVersion: v1 data: CredentialKeys0: USD(USDCONTROLLER1_SSH sudo cat /var/lib/config-data/puppet-generated/keystone/etc/keystone/credential-keys/0 | base64 -w 0) CredentialKeys1: USD(USDCONTROLLER1_SSH sudo cat /var/lib/config-data/puppet-generated/keystone/etc/keystone/credential-keys/1 | base64 -w 0) FernetKeys0: USD(USDCONTROLLER1_SSH sudo cat /var/lib/config-data/puppet-generated/keystone/etc/keystone/fernet-keys/0 | base64 -w 0) FernetKeys1: USD(USDCONTROLLER1_SSH sudo cat /var/lib/config-data/puppet-generated/keystone/etc/keystone/fernet-keys/1 | base64 -w 0) kind: Secret metadata: name: keystone namespace: openstack type: Opaque EOF", "oc patch openstackcontrolplane openstack --type=merge --patch ' spec: keystone: enabled: true apiOverride: route: {} template: override: service: internal: metadata: annotations: metallb.universe.tf/address-pool: internalapi metallb.universe.tf/allow-shared-ip: internalapi metallb.universe.tf/loadBalancerIPs: 172.17.0.80 spec: type: LoadBalancer databaseInstance: openstack secret: osp-secret '", "alias openstack=\"oc exec -t openstackclient -- openstack\"", "openstack endpoint list | grep keystone | awk '/admin/{ print USD2; }' | xargs USD{BASH_ALIASES[openstack]} endpoint delete || true for service in aodh heat heat-cfn barbican cinderv3 glance gnocchi manila manilav2 neutron nova placement swift ironic-inspector ironic; do openstack service list | awk \"/ USDservice /{ print \\USD2; }\" | xargs -r USD{BASH_ALIASES[openstack]} service delete || true done", "openstack endpoint list | grep keystone", "oc set data secret/osp-secret \"BarbicanSimpleCryptoKEK=USD(USDCONTROLLER1_SSH \"python3 -c \\\"import configparser; c = configparser.ConfigParser(); c.read('/var/lib/config-data/puppet-generated/barbican/etc/barbican/barbican.conf'); print(c['simple_crypto_plugin']['kek'])\\\"\")\"", "oc patch openstackcontrolplane openstack --type=merge --patch ' spec: barbican: enabled: true apiOverride: route: {} template: databaseInstance: openstack databaseAccount: barbican rabbitMqClusterName: rabbitmq secret: osp-secret simpleCryptoBackendSecret: osp-secret serviceAccount: barbican serviceUser: barbican passwordSelectors: service: BarbicanPassword simplecryptokek: BarbicanSimpleCryptoKEK barbicanAPI: replicas: 1 override: service: internal: metadata: annotations: metallb.universe.tf/address-pool: internalapi metallb.universe.tf/allow-shared-ip: internalapi metallb.universe.tf/loadBalancerIPs: 172.17.0.80 spec: type: LoadBalancer barbicanWorker: replicas: 1 barbicanKeystoneListener: replicas: 1 '", "openstack endpoint list | grep key-manager", "openstack service list | grep key-manager", "openstack endpoint list | grep key-manager", "openstack secret list", "oc patch openstackcontrolplane openstack --type=merge --patch ' spec: neutron: enabled: true apiOverride: route: {} template: override: service: internal: metadata: annotations: metallb.universe.tf/address-pool: internalapi metallb.universe.tf/allow-shared-ip: internalapi metallb.universe.tf/loadBalancerIPs: 172.17.0.80 spec: type: LoadBalancer databaseInstance: openstack databaseAccount: neutron secret: osp-secret networkAttachments: - internalapi '", "oc patch openstackcontrolplane openstack --type=merge --patch ' spec: neutron: template: customServiceConfig: | [DEFAULT] dhcp_agent_notification = True '", "NEUTRON_API_POD=`oc get pods -l service=neutron | tail -n 1 | cut -f 1 -d' '` exec -t USDNEUTRON_API_POD -c neutron-api -- cat /etc/neutron/neutron.conf", "openstack service list | 
grep network", "openstack endpoint list | grep network | 6a805bd6c9f54658ad2f24e5a0ae0ab6 | regionOne | neutron | network | True | public | http://neutron-public-openstack.apps-crc.testing | | b943243e596847a9a317c8ce1800fa98 | regionOne | neutron | network | True | internal | http://neutron-internal.openstack.svc:9696 |", "openstack network create net openstack subnet create --network net --subnet-range 10.0.0.0/24 subnet openstack router create router", "oc apply -f - <<EOF apiVersion: v1 kind: Secret metadata: name: swift-conf namespace: openstack type: Opaque data: swift.conf: USD(USDCONTROLLER1_SSH sudo cat /var/lib/config-data/puppet-generated/swift/etc/swift/swift.conf | base64 -w0) EOF", "oc apply -f - <<EOF apiVersion: v1 kind: ConfigMap metadata: name: swift-ring-files binaryData: swiftrings.tar.gz: USD(USDCONTROLLER1_SSH \"cd /var/lib/config-data/puppet-generated/swift/etc/swift && tar cz *.builder *.ring.gz backups/ | base64 -w0\") account.ring.gz: USD(USDCONTROLLER1_SSH \"base64 -w0 /var/lib/config-data/puppet-generated/swift/etc/swift/account.ring.gz\") container.ring.gz: USD(USDCONTROLLER1_SSH \"base64 -w0 /var/lib/config-data/puppet-generated/swift/etc/swift/container.ring.gz\") object.ring.gz: USD(USDCONTROLLER1_SSH \"base64 -w0 /var/lib/config-data/puppet-generated/swift/etc/swift/object.ring.gz\") EOF", "oc patch openstackcontrolplane openstack --type=merge --patch ' spec: swift: enabled: true template: memcachedInstance: memcached swiftRing: ringReplicas: 1 swiftStorage: replicas: 0 networkAttachments: - storage storageClass: local-storage 1 storageRequest: 10Gi swiftProxy: secret: osp-secret replicas: 1 passwordSelectors: service: SwiftPassword serviceUser: swift override: service: internal: metadata: annotations: metallb.universe.tf/address-pool: internalapi metallb.universe.tf/allow-shared-ip: internalapi metallb.universe.tf/loadBalancerIPs: 172.17.0.80 spec: type: LoadBalancer networkAttachments: 2 - storage '", "oc get pods -l component=swift-proxy", "openstack service list | grep swift | b5b9b1d3c79241aa867fa2d05f2bbd52 | swift | object-store |", "openstack endpoint list | grep swift | 32ee4bd555414ab48f2dc90a19e1bcd5 | regionOne | swift | object-store | True | public | https://swift-public-openstack.apps-crc.testing/v1/AUTH_%(tenant_id)s | | db4b8547d3ae4e7999154b203c6a5bed | regionOne | swift | object-store | True | internal | http://swift-internal.openstack.svc:8080/v1/AUTH_%(tenant_id)s |", "openstack container create test +---------------------------------------+-----------+------------------------------------+ | account | container | x-trans-id | +---------------------------------------+-----------+------------------------------------+ | AUTH_4d9be0a9193e4577820d187acdd2714a | test | txe5f9a10ce21e4cddad473-0065ce41b9 | +---------------------------------------+-----------+------------------------------------+ openstack object create test --name obj <(echo \"Hello World!\") +--------+-----------+----------------------------------+ | object | container | etag | +--------+-----------+----------------------------------+ | obj | test | d41d8cd98f00b204e9800998ecf8427e | +--------+-----------+----------------------------------+ openstack object save test obj --file - Hello World!", ".. 
spec glance: customServiceConfig: | [DEFAULT] enabled_backends = default_backend:swift [glance_store] default_backend = default_backend [default_backend] swift_store_create_container_on_put = True swift_store_auth_version = 3 swift_store_auth_address = {{ .KeystoneInternalURL }} swift_store_endpoint_type = internalURL swift_store_user = service:glance swift_store_key = {{ .ServicePassword }}", "spec: glance: enabled: true apiOverride: route: {} template: secret: osp-secret databaseInstance: openstack storage: storageRequest: 10G customServiceConfig: | [DEFAULT] enabled_backends = default_backend:swift [glance_store] default_backend = default_backend [default_backend] swift_store_create_container_on_put = True swift_store_auth_version = 3 swift_store_auth_address = {{ .KeystoneInternalURL }} swift_store_endpoint_type = internalURL swift_store_user = service:glance swift_store_key = {{ .ServicePassword }} glanceAPIs: default: replicas: 1 override: service: internal: metadata: annotations: metallb.universe.tf/address-pool: internalapi metallb.universe.tf/allow-shared-ip: internalapi metallb.universe.tf/loadBalancerIPs: 172.17.0.80 spec: type: LoadBalancer networkAttachments: - storage", "oc get pod -l component=swift-proxy | grep Running swift-proxy-75cb47f65-92rxq 3/3 Running 0", "oc patch openstackcontrolplane openstack --type=merge --patch-file=glance_swift.patch", ".. spec glance: customServiceConfig: | [DEFAULT] enabled_backends = default_backend:cinder [glance_store] default_backend = default_backend [default_backend] rootwrap_config = /etc/glance/rootwrap.conf description = Default cinder backend cinder_store_auth_address = {{ .KeystoneInternalURL }} cinder_store_user_name = {{ .ServiceUser }} cinder_store_password = {{ .ServicePassword }} cinder_store_project_name = service cinder_catalog_info = volumev3::internalURL cinder_use_multipath = true", "spec: glance: enabled: true apiOverride: route: {} template: secret: osp-secret databaseInstance: openstack storage: storageRequest: 10G customServiceConfig: | [DEFAULT] enabled_backends = default_backend:cinder [glance_store] default_backend = default_backend [default_backend] rootwrap_config = /etc/glance/rootwrap.conf description = Default cinder backend cinder_store_auth_address = {{ .KeystoneInternalURL }} cinder_store_user_name = {{ .ServiceUser }} cinder_store_password = {{ .ServicePassword }} cinder_store_project_name = service cinder_catalog_info = volumev3::internalURL cinder_use_multipath = true glanceAPIs: default: replicas: 1 override: service: internal: metadata: annotations: metallb.universe.tf/address-pool: internalapi metallb.universe.tf/allow-shared-ip: internalapi metallb.universe.tf/loadBalancerIPs: 172.17.0.80 spec: type: LoadBalancer networkAttachments: - storage", "oc get pod -l component=cinder-volume | grep Running cinder-volume-75cb47f65-92rxq 3/3 Running 0", "oc patch openstackcontrolplane openstack --type=merge --patch-file=glance_cinder.patch", "GlanceBackend: file GlanceNfsEnabled: true GlanceNfsShare: 192.168.24.1:/var/nfs", "oc get nncp NAME STATUS REASON enp6s0-crc-8cf2w-master-0 Available SuccessfullyConfigured oc get net-attach-def NAME ctlplane internalapi storage tenant oc get ipaddresspool -n metallb-system NAME AUTO ASSIGN AVOID BUGGY IPS ADDRESSES ctlplane true false [\"192.168.122.80-192.168.122.90\"] internalapi true false [\"172.17.0.80-172.17.0.90\"] storage true false [\"172.18.0.80-172.18.0.90\"] tenant true false [\"172.19.0.80-172.19.0.90\"]", "cat << EOF > glance_nfs_patch.yaml spec: 
extraMounts: - extraVol: - extraVolType: Nfs mounts: - mountPath: /var/lib/glance/images name: nfs propagation: - Glance volumes: - name: nfs nfs: path: <exported_path> server: <ip_address> name: r1 region: r1 glance: enabled: true template: databaseInstance: openstack customServiceConfig: | [DEFAULT] enabled_backends = default_backend:file [glance_store] default_backend = default_backend [default_backend] filesystem_store_datadir = /var/lib/glance/images/ storage: storageRequest: 10G glanceAPIs: default: replicas: 3 type: single override: service: internal: metadata: annotations: metallb.universe.tf/address-pool: internalapi metallb.universe.tf/allow-shared-ip: internalapi metallb.universe.tf/loadBalancerIPs: 172.17.0.80 spec: type: LoadBalancer networkAttachments: - storage EOF", "oc patch openstackcontrolplane openstack --type=merge --patch-file glance_nfs_patch.yaml", "oc get pods -l service=glance NAME READY STATUS RESTARTS glance-default-single-0 3/3 Running 0 ```", "Mounts: nfs: Type: NFS (an NFS mount that lasts the lifetime of a pod) Server: {{ server ip address }} Path: {{ nfs export path }} ReadOnly: false", "oc rsh -c glance-api glance-default-single-0 sh-5.1# mount {{ ip address }}:/var/nfs on /var/lib/glance/images type nfs4 (rw,relatime,vers=4.2,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=172.18.0.5,local_lock=none,addr=172.18.0.5)", "oc rsh openstackclient openstack image list sh-5.1USD curl -L -o /tmp/cirros-0.5.2-x86_64-disk.img http://download.cirros-cloud.net/0.5.2/cirros-0.5.2-x86_64-disk.img sh-5.1USD openstack image create --container-format bare --disk-format raw --file /tmp/cirros-0.5.2-x86_64-disk.img cirros sh-5.1USD openstack image list +--------------------------------------+--------+--------+ | ID | Name | Status | +--------------------------------------+--------+--------+ | 634482ca-4002-4a6d-b1d5-64502ad02630 | cirros | active | +--------------------------------------+--------+--------+", "ls /var/nfs/ 634482ca-4002-4a6d-b1d5-64502ad02630", "cat << EOF > glance_patch.yaml spec: glance: enabled: true template: databaseInstance: openstack customServiceConfig: | [DEFAULT] enabled_backends=default_backend:rbd [glance_store] default_backend=default_backend [default_backend] rbd_store_ceph_conf=/etc/ceph/ceph.conf rbd_store_user=openstack rbd_store_pool=images store_description=Ceph glance store backend. 
storage: storageRequest: 10G glanceAPIs: default: replicas: 0 override: service: internal: metadata: annotations: metallb.universe.tf/address-pool: internalapi metallb.universe.tf/allow-shared-ip: internalapi metallb.universe.tf/loadBalancerIPs: 172.17.0.80 spec: type: LoadBalancer networkAttachments: - storage EOF", "os-diff diff /tmp/collect_tripleo_configs/glance/etc/glance/glance-api.conf glance_patch.yaml --crd", "oc patch openstackcontrolplane openstack --type=merge --patch-file glance_patch.yaml", "os-diff diff /etc/glance/glance.conf.d/02-config.conf glance_patch.yaml --frompod -p glance-api", "GLANCE_POD=`oc get pod |grep glance-default | cut -f 1 -d' ' | head -n 1` exec -t USDGLANCE_POD -c glance-api -- cat /etc/glance/glance.conf.d/02-config.conf [DEFAULT] enabled_backends=default_backend:rbd [glance_store] default_backend=default_backend [default_backend] rbd_store_ceph_conf=/etc/ceph/ceph.conf rbd_store_user=openstack rbd_store_pool=images store_description=Ceph glance store backend.", "oc exec -t USDGLANCE_POD -c glance-api -- ls /etc/ceph ceph.client.openstack.keyring ceph.conf", "oc rsh openstackclient openstack service list | grep image | fc52dbffef36434d906eeb99adfc6186 | glance | image | openstack endpoint list | grep image | 569ed81064f84d4a91e0d2d807e4c1f1 | regionOne | glance | image | True | internal | http://glance-internal-openstack.apps-crc.testing | | 5843fae70cba4e73b29d4aff3e8b616c | regionOne | glance | image | True | public | http://glance-public-openstack.apps-crc.testing |", "openstack image list +--------------------------------------+--------+--------+ | ID | Name | Status | +--------------------------------------+--------+--------+ | c3158cad-d50b-452f-bec1-f250562f5c1f | cirros | active | +--------------------------------------+--------+--------+", "oc patch openstackcontrolplane openstack --type=merge --patch ' spec: placement: enabled: true apiOverride: route: {} template: databaseInstance: openstack databaseAccount: placement secret: osp-secret override: service: internal: metadata: annotations: metallb.universe.tf/address-pool: internalapi metallb.universe.tf/allow-shared-ip: internalapi metallb.universe.tf/loadBalancerIPs: 172.17.0.80 spec: type: LoadBalancer '", "alias openstack=\"oc exec -t openstackclient -- openstack\" openstack endpoint list | grep placement Without OpenStack CLI placement plugin installed: PLACEMENT_PUBLIC_URL=USD(openstack endpoint list -c 'Service Name' -c 'Service Type' -c URL | grep placement | grep public | awk '{ print USD6; }') exec -t openstackclient -- curl \"USDPLACEMENT_PUBLIC_URL\" With OpenStack CLI placement plugin installed: openstack resource class list", "alias openstack=\"oc exec -t openstackclient -- openstack\"", "oc patch openstackcontrolplane openstack -n openstack --type=merge --patch ' spec: nova: enabled: true apiOverride: route: {} template: secret: osp-secret apiServiceTemplate: override: service: internal: metadata: annotations: metallb.universe.tf/address-pool: internalapi metallb.universe.tf/allow-shared-ip: internalapi metallb.universe.tf/loadBalancerIPs: 172.17.0.80 spec: type: LoadBalancer customServiceConfig: | [workarounds] disable_compute_service_check_for_ffu=true metadataServiceTemplate: enabled: true # deploy single nova metadata on the top level override: service: metadata: annotations: metallb.universe.tf/address-pool: internalapi metallb.universe.tf/allow-shared-ip: internalapi metallb.universe.tf/loadBalancerIPs: 172.17.0.80 spec: type: LoadBalancer customServiceConfig: | 
[workarounds] disable_compute_service_check_for_ffu=true schedulerServiceTemplate: customServiceConfig: | [workarounds] disable_compute_service_check_for_ffu=true cellTemplates: cell0: conductorServiceTemplate: customServiceConfig: | [workarounds] disable_compute_service_check_for_ffu=true cell1: metadataServiceTemplate: enabled: false # enable here to run it in a cell instead override: service: metadata: annotations: metallb.universe.tf/address-pool: internalapi metallb.universe.tf/allow-shared-ip: internalapi metallb.universe.tf/loadBalancerIPs: 172.17.0.80 spec: type: LoadBalancer customServiceConfig: | [workarounds] disable_compute_service_check_for_ffu=true conductorServiceTemplate: customServiceConfig: | [workarounds] disable_compute_service_check_for_ffu=true '", "cell1: novaComputeTemplates: standalone: customServiceConfig: | [DEFAULT] host = <hostname> [workarounds] disable_compute_service_check_for_ffu=true", "oc wait --for condition=Ready --timeout=300s Nova/nova", "openstack endpoint list | grep nova openstack server list", ". ~/.source_cloud_exported_variables echo USDPULL_OPENSTACK_CONFIGURATION_NOVAMANAGE_CELL_MAPPINGS rsh nova-cell0-conductor-0 nova-manage cell_v2 list_cells | grep -F '| cell1 |'", "USDCONTROLLER1_SSH cat /var/lib/config-data/puppet-generated/cinder/etc/cinder/cinder.conf > cinder.conf", "oc patch openstackcontrolplane openstack --type=merge --patch-file=<patch_name>", "spec: extraMounts: - extraVol: - extraVolType: Ceph mounts: - mountPath: /etc/ceph name: ceph readOnly: true propagation: - CinderVolume - CinderBackup - Glance volumes: - name: ceph projected: sources: - secret: name: ceph-conf-files cinder: enabled: true apiOverride: route: {} template: databaseInstance: openstack databaseAccount: cinder secret: osp-secret cinderAPI: override: service: internal: metadata: annotations: metallb.universe.tf/address-pool: internalapi metallb.universe.tf/allow-shared-ip: internalapi metallb.universe.tf/loadBalancerIPs: 172.17.0.80 spec: type: LoadBalancer replicas: 1 customServiceConfig: | [DEFAULT] default_volume_type=tripleo cinderScheduler: replicas: 0 cinderBackup: networkAttachments: - storage replicas: 0 cinderVolumes: ceph: networkAttachments: - storage replicas: 0", "openstack volume service list +------------------+------------------------+------+---------+-------+----------------------------+ | Binary | Host | Zone | Status | State | Updated At | +------------------+------------------------+------+---------+-------+----------------------------+ | cinder-scheduler | standalone.localdomain | nova | enabled | down | 2024-11-04T17:47:14.000000 | | cinder-backup | standalone.localdomain | nova | enabled | down | 2024-11-04T17:47:14.000000 | | cinder-volume | hostgroup@tripleo_ceph | nova | enabled | down | 2024-11-04T17:47:14.000000 | +------------------+------------------------+------+---------+-------+----------------------------+", "oc exec -t cinder-api-0 -c cinder-api -- cinder-manage service remove <service_binary> <service_host>", "oc patch openstackcontrolplane openstack --type=merge --patch-file=<patch_name>", "spec: cinder: enabled: true template: cinderScheduler: replicas: 1 cinderBackup: networkAttachments: - storage replicas: 1 customServiceConfig: | [DEFAULT] backup_driver=cinder.backup.drivers.ceph.CephBackupDriver backup_ceph_conf=/etc/ceph/ceph.conf backup_ceph_user=openstack backup_ceph_pool=backups cinderVolumes: ceph: networkAttachments: - storage replicas: 1 customServiceConfig: | [tripleo_ceph] backend_host=hostgroup 
volume_backend_name=tripleo_ceph volume_driver=cinder.volume.drivers.rbd.RBDDriver rbd_ceph_conf=/etc/ceph/ceph.conf rbd_user=openstack rbd_pool=volumes rbd_flatten_volume_from_snapshot=False report_discard_supported=True", "openstack volume service list +------------------+------------------------+------+---------+-------+----------------------------+ | Binary | Host | Zone | Status | State | Updated At | +------------------+------------------------+------+---------+-------+----------------------------+ | cinder-volume | hostgroup@tripleo_ceph | nova | enabled | up | 2023-06-28T17:00:03.000000 | | cinder-scheduler | cinder-scheduler-0 | nova | enabled | up | 2023-06-28T17:00:02.000000 | | cinder-backup | cinder-backup-0 | nova | enabled | up | 2023-06-28T17:00:01.000000 | +------------------+------------------------+------+---------+-------+----------------------------+", "oc exec -it cinder-scheduler-0 -- cinder-manage db online_data_migrations", "alias openstack=\"oc exec -t openstackclient -- openstack\"", "openstack endpoint list --service <endpoint>", "openstack volume service list", "openstack volume type list openstack volume list openstack volume snapshot list openstack volume backup list", "openstack volume create --image cirros --bootable --size 1 disk_new", "openstack --os-volume-api-version 3.47 volume create --backup <backup_name>", "oc patch openstackcontrolplane openstack --type=merge --patch ' spec: horizon: enabled: true apiOverride: route: {} template: memcachedInstance: memcached secret: osp-secret '", "oc get horizon", "PUBLIC_URL=USD(oc get horizon horizon -o jsonpath='{.status.endpoint}') curl --silent --output /dev/stderr --head --write-out \"%{http_code}\" \"USDPUBLIC_URL/dashboard/auth/login/?next=/dashboard/\" -k | grep 200", "spec: manila: enabled: true template: manilaAPI: customServiceConfig: | [oslo_policy] policy_file=/etc/manila/policy.yaml extraMounts: - extraVol: - extraVolType: Undefined mounts: - mountPath: /etc/manila/ name: policy readOnly: true propagation: - ManilaAPI volumes: - name: policy projected: sources: - configMap: name: manila-policy items: - key: policy path: policy.yaml", "spec: manila: enabled: true template: manilaAPI: customServiceConfig: | [DEFAULT] enabled_share_protocols = nfs replicas: 3 manilaScheduler: replicas: 3 manilaShares: netapp: customServiceConfig: | [DEFAULT] debug = true enabled_share_backends = netapp host = hostgroup [netapp] driver_handles_share_servers = False share_backend_name = netapp share_driver = manila.share.drivers.netapp.common.NetAppDriver netapp_storage_family = ontap_cluster netapp_transport_type = http replicas: 1 pure: customServiceConfig: | [DEFAULT] debug = true enabled_share_backends=pure-1 host = hostgroup [pure-1] driver_handles_share_servers = False share_backend_name = pure-1 share_driver = manila.share.drivers.purestorage.flashblade.FlashBladeShareDriver flashblade_mgmt_vip = 203.0.113.15 flashblade_data_vip = 203.0.10.14 replicas: 1", "apiVersion: core.openstack.org/v1beta1 kind: OpenStackVersion metadata: name: openstack spec: customContainerImages: cinderVolumeImages: pure: registry.connect.redhat.com/purestorage/openstack-manila-share-pure-rhosp-18-0", "cat << __EOF__ > ~/netapp_secrets.conf [netapp] netapp_server_hostname = 203.0.113.10 netapp_login = fancy_netapp_user netapp_password = secret_netapp_password netapp_vserver = mydatavserver __EOF__", "oc create secret generic osp-secret-manila-netapp --from-file=~/<secret> -n openstack", "spec: manila: enabled: true template: < . . . 
> manilaShares: netapp: customServiceConfig: | [DEFAULT] debug = true enabled_share_backends = netapp host = hostgroup [netapp] driver_handles_share_servers = False share_backend_name = netapp share_driver = manila.share.drivers.netapp.common.NetAppDriver netapp_storage_family = ontap_cluster netapp_transport_type = http customServiceConfigSecrets: - osp-secret-manila-netapp replicas: 1 < . . . >", "CONTROLLER1_SSH=\"ssh -i <path to SSH key> root@<node IP>\"", "CONTROLLER1_SSH cat /var/lib/config-data/puppet-generated/manila/etc/manila/manila.conf | awk '!/^ *#/ && NF' > ~/manila.conf", "cat << __EOF__ > ~/manila.patch spec: manila: enabled: true apiOverride: route: {} template: databaseInstance: openstack databaseAccount: manila secret: osp-secret manilaAPI: replicas: 3 1 customServiceConfig: | [DEFAULT] enabled_share_protocols = cephfs override: service: internal: metadata: annotations: metallb.universe.tf/address-pool: internalapi metallb.universe.tf/allow-shared-ip: internalapi metallb.universe.tf/loadBalancerIPs: 172.17.0.80 spec: type: LoadBalancer manilaScheduler: replicas: 3 2 manilaShares: cephfs: replicas: 1 3 customServiceConfig: | [DEFAULT] enabled_share_backends = tripleo_ceph host = hostgroup [cephfs] driver_handles_share_servers=False share_backend_name=cephfs 4 share_driver=manila.share.drivers.cephfs.driver.CephFSDriver cephfs_conf_path=/etc/ceph/ceph.conf cephfs_auth_id=openstack cephfs_cluster_name=ceph cephfs_volume_mode=0755 cephfs_protocol_helper_type=CEPHFS networkAttachments: 5 - storage extraMounts: 6 - name: v1 region: r1 extraVol: - propagation: - ManilaShare extraVolType: Ceph volumes: - name: ceph secret: secretName: ceph-conf-files mounts: - name: ceph mountPath: \"/etc/ceph\" readOnly: true __EOF__", "oc patch openstackcontrolplane openstack --type=merge --patch-file=~/<manila.patch>", "oc get pods -l service=manila", "openstack service list | grep manila", "openstack endpoint list | grep manila | 1164c70045d34b959e889846f9959c0e | regionOne | manila | share | True | internal | http://manila-internal.openstack.svc:8786/v1/%(project_id)s | | 63e89296522d4b28a9af56586641590c | regionOne | manilav2 | sharev2 | True | public | https://manila-public-openstack.apps-crc.testing/v2 | | af36c57adcdf4d50b10f484b616764cc | regionOne | manila | share | True | public | https://manila-public-openstack.apps-crc.testing/v1/%(project_id)s | | d655b4390d7544a29ce4ea356cc2b547 | regionOne | manilav2 | sharev2 | True | internal | http://manila-internal.openstack.svc:8786/v2 |", "openstack share service list openstack share pool list --detail", "openstack share list openstack share snapshot list", "cat << __EOF__ > ~/manila.patch spec: manila: enabled: true apiOverride: route: {} template: manilaShares: cephfs: replicas: 1 customServiceConfig: | [DEFAULT] enabled_share_backends = cephfs host = hostgroup [cephfs] driver_handles_share_servers=False share_backend_name=cephfs share_driver=manila.share.drivers.cephfs.driver.CephFSDriver cephfs_conf_path=/etc/ceph/ceph.conf cephfs_auth_id=openstack cephfs_cluster_name=ceph cephfs_protocol_helper_type=NFS cephfs_nfs_cluster_id=cephfs networkAttachments: - storage __EOF__", "oc patch openstackcontrolplane openstack --type=merge --patch-file=~/<manila.patch>", "sudo pcs resource disable ceph-nfs sudo pcs resource disable ip-<VIP> sudo pcs resource unmanage ceph-nfs sudo pcs resource unmanage ip-<VIP>", "USDCONTROLLER1_SSH cat /var/lib/config-data/puppet-generated/ironic/etc/ironic/ironic.conf > ironic.conf", "alias openstack=\"oc exec -t 
openstackclient -- openstack\"", "oc patch openstackcontrolplane openstack -n openstack --type=merge --patch ' spec: ironic: enabled: true template: rpcTransport: oslo databaseInstance: openstack ironicAPI: replicas: 1 override: service: internal: metadata: annotations: metallb.universe.tf/address-pool: internalapi metallb.universe.tf/allow-shared-ip: internalapi metallb.universe.tf/loadBalancerIPs: 172.17.0.80 spec: type: LoadBalancer ironicConductors: - replicas: 1 networkAttachments: - baremetal provisionNetwork: baremetal storageRequest: 10G customServiceConfig: | [neutron] cleaning_network=<cleaning network uuid> provisioning_network=<provisioning network uuid> rescuing_network=<rescuing network uuid> inspection_network=<introspection network uuid> [conductor] automated_clean=true ironicInspector: replicas: 1 inspectionNetwork: baremetal networkAttachments: - baremetal dhcpRanges: - name: inspector-0 cidr: 172.20.1.0/24 start: 172.20.1.190 end: 172.20.1.199 gateway: 172.20.1.1 serviceUser: ironic-inspector databaseAccount: ironic-inspector passwordSelectors: database: IronicInspectorDatabasePassword service: IronicInspectorPassword ironicNeutronAgent: replicas: 1 rabbitMqClusterName: rabbitmq secret: osp-secret '", "oc wait --for condition=Ready --timeout=300s ironics.ironic.openstack.org ironic", "oc wait --for condition=Ready --timeout=300s ironicapis.ironic.openstack.org ironic-api oc wait --for condition=Ready --timeout=300s ironicconductors.ironic.openstack.org ironic-conductor oc wait --for condition=Ready --timeout=300s ironicinspectors.ironic.openstack.org ironic-inspector oc wait --for condition=Ready --timeout=300s ironicneutronagents.ironic.openstack.org ironic-ironic-neutron-agent", "openstack subnet set --dns-nameserver 192.168.122.80 provisioning-subnet", "openstack baremetal node list", "oc patch openstackcontrolplane openstack -n openstack --type=merge --patch ' spec: ironic: enabled: true template: databaseInstance: openstack ironicAPI: replicas: 1 customServiceConfig: | [oslo_policy] enforce_scope=false enforce_new_defaults=false '", "customServiceConfig: | [oslo_policy] enforce_scope=true enforce_new_defaults=true", "openstack baremetal node list -f uuid,provision_state,owner", "ADMIN_PROJECT_ID=USD(openstack project show -c id -f value --domain default admin) for node in USD(openstack baremetal node list -f json -c UUID -c Owner | jq -r '.[] | select(.Owner == null) | .UUID'); do openstack baremetal node set --owner USDADMIN_PROJECT_ID USDnode; done", "oc patch openstackcontrolplane openstack -n openstack --type=merge --patch ' spec: ironic: enabled: true template: databaseInstance: openstack ironicAPI: replicas: 1 customServiceConfig: | [oslo_policy] enforce_scope=true enforce_new_defaults=true '", "openstack endpoint list |grep ironic", "openstack baremetal node list", "[stack@rhosp17 ~]USD grep -E 'HeatPassword|HeatAuth|HeatStackDomainAdmin' ~/overcloud-deploy/overcloud/overcloud-passwords.yaml HeatAuthEncryptionKey: Q60Hj8PqbrDNu2dDCbyIQE2dibpQUPg2 HeatPassword: dU2N0Vr2bdelYH7eQonAwPfI3 HeatStackDomainAdminPassword: dU2N0Vr2bdelYH7eQonAwPfI3", "[stack@rhosp17 ~]USD ansible -i overcloud-deploy/overcloud/config-download/overcloud/tripleo-ansible-inventory.yaml overcloud-controller-0 -m shell -a \"grep auth_encryption_key /var/lib/config-data/puppet-generated/heat/etc/heat/heat.conf | grep -Ev '^#|^USD'\" -b overcloud-controller-0 | CHANGED | rc=0 >> auth_encryption_key=Q60Hj8PqbrDNu2dDCbyIQE2dibpQUPg2", "echo Q60Hj8PqbrDNu2dDCbyIQE2dibpQUPg2 | base64 
UTYwSGo4UHFickROdTJkRENieUlRRTJkaWJwUVVQZzIK", "oc patch secret osp-secret --type='json' -p='[{\"op\" : \"replace\" ,\"path\" : \"/data/HeatAuthEncryptionKey\" ,\"value\" : \"UTYwSGo4UHFickROdTJkRENieUlRRTJkaWJwUVVQZzIK\"}]' secret/osp-secret patched", "oc patch openstackcontrolplane openstack --type=merge --patch ' spec: heat: enabled: true apiOverride: route: {} template: databaseInstance: openstack databaseAccount: heat secret: osp-secret memcachedInstance: memcached passwordSelectors: authEncryptionKey: HeatAuthEncryptionKey service: HeatPassword stackDomainAdminPassword: HeatStackDomainAdminPassword '", "oc get Heat,HeatAPI,HeatEngine,HeatCFNAPI NAME STATUS MESSAGE heat.heat.openstack.org/heat True Setup complete NAME STATUS MESSAGE heatapi.heat.openstack.org/heat-api True Setup complete NAME STATUS MESSAGE heatengine.heat.openstack.org/heat-engine True Setup complete NAME STATUS MESSAGE heatcfnapi.heat.openstack.org/heat-cfnapi True Setup complete", "oc exec -it openstackclient -- openstack service list -c Name -c Type +------------+----------------+ | Name | Type | +------------+----------------+ | heat | orchestration | | glance | image | | heat-cfn | cloudformation | | ceilometer | Ceilometer | | keystone | identity | | placement | placement | | cinderv3 | volumev3 | | nova | compute | | neutron | network | +------------+----------------+", "oc exec -it openstackclient -- openstack endpoint list --service=heat -f yaml - Enabled: true ID: 1da7df5b25b94d1cae85e3ad736b25a5 Interface: public Region: regionOne Service Name: heat Service Type: orchestration URL: http://heat-api-public-openstack-operators.apps.okd.bne-shift.net/v1/%(tenant_id)s - Enabled: true ID: 414dd03d8e9d462988113ea0e3a330b0 Interface: internal Region: regionOne Service Name: heat Service Type: orchestration URL: http://heat-api-internal.openstack-operators.svc:8004/v1/%(tenant_id)s", "oc exec -it openstackclient -- openstack orchestration service list -f yaml - Binary: heat-engine Engine ID: b16ad899-815a-4b0c-9f2e-e6d9c74aa200 Host: heat-engine-6d47856868-p7pzz Hostname: heat-engine-6d47856868-p7pzz Status: up Topic: engine Updated At: '2023-10-11T21:48:01.000000' - Binary: heat-engine Engine ID: 887ed392-0799-4310-b95c-ac2d3e6f965f Host: heat-engine-6d47856868-p7pzz Hostname: heat-engine-6d47856868-p7pzz Status: up Topic: engine Updated At: '2023-10-11T21:48:00.000000' - Binary: heat-engine Engine ID: 26ed9668-b3f2-48aa-92e8-2862252485ea Host: heat-engine-6d47856868-p7pzz Hostname: heat-engine-6d47856868-p7pzz Status: up Topic: engine Updated At: '2023-10-11T21:48:00.000000' - Binary: heat-engine Engine ID: 1011943b-9fea-4f53-b543-d841297245fd Host: heat-engine-6d47856868-p7pzz Hostname: heat-engine-6d47856868-p7pzz Status: up Topic: engine Updated At: '2023-10-11T21:48:01.000000'", "openstack stack list -f yaml - Creation Time: '2023-10-11T22:03:20Z' ID: 20f95925-7443-49cb-9561-a1ab736749ba Project: 4eacd0d1cab04427bc315805c28e66c9 Stack Name: test-networks Stack Status: CREATE_COMPLETE Updated Time: null", "oc create -f - <<EOF apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: cluster-observability-operator namespace: openshift-operators spec: channel: stable installPlanApproval: Automatic name: cluster-observability-operator source: redhat-operators sourceNamespace: openshift-marketplace EOF", "oc wait --for jsonpath=\"{.status.phase}\"=Succeeded csv --namespace=openshift-operators -l operators.coreos.com/cluster-observability-operator.openshift-operators", "oc patch 
openstackcontrolplane openstack --type=merge --patch ' spec: telemetry: enabled: true template: ceilometer: passwordSelector: ceilometerService: CeilometerPassword enabled: true secret: osp-secret serviceUser: ceilometer '", "oc patch openstackcontrolplane openstack --type=merge --patch ' spec: telemetry: template: metricStorage: enabled: true monitoringStack: alertingEnabled: true scrapeInterval: 30s storage: strategy: persistent retention: 24h persistent: pvcStorageRequest: 20G '", "oc get pods -l alertmanager=metric-storage -n openstack NAME READY STATUS RESTARTS AGE alertmanager-metric-storage-0 2/2 Running 0 46s alertmanager-metric-storage-1 2/2 Running 0 46s oc get pods -l prometheus=metric-storage -n openstack NAME READY STATUS RESTARTS AGE prometheus-metric-storage-0 3/3 Running 0 46s", "CEILOMETETR_POD=`oc get pods -l service=ceilometer -n openstack | tail -n 1 | cut -f 1 -d' '` exec -t USDCEILOMETETR_POD -c ceilometer-central-agent -- cat /etc/ceilometer/ceilometer.conf", "oc get secret ceilometer-config-data -o jsonpath=\"{.data['polling\\.yaml\\.j2']}\" | base64 -d", "oc patch openstackcontrolplane controlplane --type=merge --patch ' spec: telemetry: template: ceilometer: defaultConfigOverwrite: polling.yaml.j2: | --- sources: - name: pollsters interval: 100 meters: - volume.* - image.size enabled: true secret: osp-secret '", "oc patch openstackcontrolplane openstack --type=merge --patch ' spec: telemetry: template: logging: enabled: false ipaddr: 172.17.0.80 port: 10514 cloNamespace: openshift-logging '", "oc patch openstackcontrolplane openstack --type=merge --patch ' spec: telemetry: enabled: true template: autoscaling: enabled: true aodh: passwordSelector: aodhService: AodhPassword databaseAccount: aodh databaseInstance: openstack secret: osp-secret serviceUser: aodh heatInstance: heat '", "AODH_POD=`oc get pods -l service=aodh -n openstack | tail -n 1 | cut -f 1 -d' '` oc exec -t USDAODH_POD -c aodh-api -- cat /etc/aodh/aodh.conf", "openstack endpoint list | grep aodh | d05d120153cd4f9b8310ac396b572926 | regionOne | aodh | alarming | True | internal | http://aodh-internal.openstack.svc:8042 | | d6daee0183494d7a9a5faee681c79046 | regionOne | aodh | alarming | True | public | http://aodh-public.openstack.svc:8042 |", "openstack alarm create --name high_cpu_alarm --type prometheus --query \"(rate(ceilometer_cpu{resource_name=~'cirros'})) * 100\" --alarm-action 'log://' --granularity 15 --evaluation-periods 3 --comparison-operator gt --threshold 7000000000", "openstack alarm list +--------------------------------------+------------+------------------+-------------------+----------+ | alarm_id | type | name | state | severity | enabled | +--------------------------------------+------------+------------------+-------------------+----------+ | 209dc2e9-f9d6-40e5-aecc-e767ce50e9c0 | prometheus | prometheus_alarm | ok | low | True | +--------------------------------------+------------+------------------+-------------------+----------+", "ssh_cmd=ssh -F ssh.config standalone container_engine=podman connection=ssh remote_config_path=/tmp/tripleo", "chown ospng:ospng /etc/os-diff/config.yaml", "service name and file location services: # Service name keystone: # Bool to enable/disable a service (not implemented yet) enable: true # Pod name, in both OCP and podman context. # It could be strict match or will only just grep the podman_name # and work with all the pods which matched with pod_name. 
# To enable/disable use strict_pod_name_match: true/false podman_name: keystone pod_name: keystone container_name: keystone-api # pod options # strict match for getting pod id in TripleO and podman context strict_pod_name_match: false # Path of the config files you want to analyze. # It could be whatever path you want: # /etc/<service_name> or /etc or /usr/share/<something> or even / # @TODO: need to implement loop over path to support multiple paths such as: # - /etc # - /usr/share path: - /etc/ - /etc/keystone - /etc/keystone/keystone.conf - /etc/keystone/logging.conf", "services: ovs_external_ids: hosts: 1 - standalone service_command: \"ovs-vsctl list Open_vSwitch . | grep external_ids | awk -F ': ' '{ print USD2; }'\" 2 cat_output: true 3 path: - ovs_external_ids.json config_mapping: 4 ovn-bridge-mappings: edpm_ovn_bridge_mappings 5 ovn-bridge: edpm_ovn_bridge ovn-encap-type: edpm_ovn_encap_type ovn-monitor-all: ovn_monitor_all ovn-remote-probe-interval: edpm_ovn_remote_probe_interval ovn-ofctrl-wait-before-clear: edpm_ovn_ofctrl_wait_before_clear", "os-diff diff ovs_external_ids.json edpm.crd --crd --service ovs_external_ids", "services: yum_config: hosts: - undercloud - controller_1 - compute_1 - compute_2 service_command: \"cat /etc/yum.conf\" cat_output: true path: - yum.conf", "will only update the /etc/os-diff/config.yaml os-diff pull --update-only", "will update the /etc/os-diff/config.yaml and pull configuration os-diff pull --update", "will update the /etc/os-diff/config.yaml and pull configuration os-diff pull", "/tmp/tripleo/", "▾ tmp/ ▾ tripleo/ ▾ glance/ ▾ keystone/", "ServicesToStart=(\"tripleo_horizon.service\" \"tripleo_keystone.service\" \"tripleo_barbican_api.service\" \"tripleo_barbican_worker.service\" \"tripleo_barbican_keystone_listener.service\" \"tripleo_cinder_api.service\" \"tripleo_cinder_api_cron.service\" \"tripleo_cinder_scheduler.service\" \"tripleo_cinder_volume.service\" \"tripleo_cinder_backup.service\" \"tripleo_glance_api.service\" \"tripleo_manila_api.service\" \"tripleo_manila_api_cron.service\" \"tripleo_manila_scheduler.service\" \"tripleo_neutron_api.service\" \"tripleo_placement_api.service\" \"tripleo_nova_api_cron.service\" \"tripleo_nova_api.service\" \"tripleo_nova_conductor.service\" \"tripleo_nova_metadata.service\" \"tripleo_nova_scheduler.service\" \"tripleo_nova_vnc_proxy.service\" \"tripleo_aodh_api.service\" \"tripleo_aodh_api_cron.service\" \"tripleo_aodh_evaluator.service\" \"tripleo_aodh_listener.service\" \"tripleo_aodh_notifier.service\" \"tripleo_ceilometer_agent_central.service\" \"tripleo_ceilometer_agent_compute.service\" \"tripleo_ceilometer_agent_ipmi.service\" \"tripleo_ceilometer_agent_notification.service\" \"tripleo_ovn_cluster_north_db_server.service\" \"tripleo_ovn_cluster_south_db_server.service\" \"tripleo_ovn_cluster_northd.service\") PacemakerResourcesToStart=(\"galera-bundle\" \"haproxy-bundle\" \"rabbitmq-bundle\" \"openstack-cinder-volume\" \"openstack-cinder-backup\" \"openstack-manila-share\") echo \"Starting systemd OpenStack services\" for service in USD{ServicesToStart[*]}; do for i in {1..3}; do SSH_CMD=CONTROLLERUSD{i}_SSH if [ ! -z \"USD{!SSH_CMD}\" ]; then if USD{!SSH_CMD} sudo systemctl is-enabled USDservice &> /dev/null; then echo \"Starting the USDservice in controller USDi\" USD{!SSH_CMD} sudo systemctl start USDservice fi fi done done echo \"Checking systemd OpenStack services\" for service in USD{ServicesToStart[*]}; do for i in {1..3}; do SSH_CMD=CONTROLLERUSD{i}_SSH if [ ! 
-z \"USD{!SSH_CMD}\" ]; then if USD{!SSH_CMD} sudo systemctl is-enabled USDservice &> /dev/null; then if ! USD{!SSH_CMD} systemctl show USDservice | grep ActiveState=active >/dev/null; then echo \"ERROR: Service USDservice is not running on controller USDi\" else echo \"OK: Service USDservice is running in controller USDi\" fi fi fi done done echo \"Starting pacemaker OpenStack services\" for i in {1..3}; do SSH_CMD=CONTROLLERUSD{i}_SSH if [ ! -z \"USD{!SSH_CMD}\" ]; then echo \"Using controller USDi to run pacemaker commands\" for resource in USD{PacemakerResourcesToStart[*]}; do if USD{!SSH_CMD} sudo pcs resource config USDresource &>/dev/null; then echo \"Starting USDresource\" USD{!SSH_CMD} sudo pcs resource enable USDresource else echo \"Service USDresource not present\" fi done break fi done echo \"Checking pacemaker OpenStack services\" for i in {1..3}; do SSH_CMD=CONTROLLERUSD{i}_SSH if [ ! -z \"USD{!SSH_CMD}\" ]; then echo \"Using controller USDi to run pacemaker commands\" for resource in USD{PacemakerResourcesToStop[*]}; do if USD{!SSH_CMD} sudo pcs resource config USDresource &>/dev/null; then if USD{!SSH_CMD} sudo pcs resource status USDresource | grep Started >/dev/null; then echo \"OK: Service USDresource is started\" else echo \"ERROR: Service USDresource is stopped\" fi fi done break fi done", "sudo pcs constraint order start ceph-nfs then openstack-manila-share kind=Optional id=order-ceph-nfs-openstack-manila-share-Optional sudo pcs constraint colocation add openstack-manila-share with ceph-nfs score=INFINITY id=colocation-openstack-manila-share-ceph-nfs-INFINITY", "oc delete --ignore-not-found=true --wait=false openstackcontrolplane/openstack oc patch openstackcontrolplane openstack --type=merge --patch ' metadata: finalizers: [] ' || true while oc get pod | grep rabbitmq-server-0; do sleep 2 done while oc get pod | grep openstack-galera-0; do sleep 2 done oc delete --ignore-not-found=true --wait=false pod mariadb-copy-data oc delete --ignore-not-found=true --wait=false pvc mariadb-data oc delete --ignore-not-found=true --wait=false pod ovn-copy-data oc delete --ignore-not-found=true secret osp-secret" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_services_on_openshift/18.0/html/adopting_a_red_hat_openstack_platform_17.1_deployment/adopting-openstack-control-plane-services_configuring-network
Chapter 4. Using an integrated DNS service
Chapter 4. Using an integrated DNS service The Red Hat OpenStack Platform (RHOSP) DNS service (designate) integrates with the Networking service (neutron) to provide automatic record set creation for ports and, through the Compute service (nova), for virtual machine instances. Cloud administrators use the DNS service to create a zone which they associate with a network. Using this network provided by their cloud administrator, cloud users can create a virtual machine instance, port, or floating IP, and the DNS service automatically creates the necessary DNS records. During DNS service deployment, the installation toolset, RHOSP director, loads the Networking service (neutron) extension, dns_domain_ports . This extension enables you to add the following DNS attributes to RHOSP ports, networks, and floating IPs: Table 4.1. DNS settings supported by the RHOSP Networking and DNS services Resource DNS name DNS domain (zone) Ports Yes Yes Networks No Yes Floating IPs Yes Yes Note For DNS domains that are specified on both a network and a floating IP, the domain on the port of the floating IP takes precedence over the domain set on the network. Important In Red Hat OpenStack Platform (RHOSP) 17.0 GA, a technology preview is available for integration between the RHOSP Networking service (neutron) ML2/OVN and the RHOSP DNS service (designate). As a result, the DNS service does not automatically add DNS entries for newly created VMs. The topics included in this section are: Section 4.1, "Setting up a project for DNS integration" Section 4.2, "Integrating virtual machine instances with DNS" Section 4.3, "Integrating ports with DNS" Section 4.4, "Integrating floating IPs with DNS" 4.1. Setting up a project for DNS integration Cloud administrators create the required zones, networks, and subnets that cloud users must specify when they create virtual machine instances, ports, or floating IPs. Because the RHOSP Networking service (neutron) is integrated with the DNS service (designate), when cloud users create these objects, they are automatically added to the DNS service. Important This feature is available in this release as a Technology Preview , and therefore is not fully supported by Red Hat. It should only be used for testing, and should not be deployed in a production environment. For more information about Technology Preview features, see Scope of Coverage Details . Prerequisites You must be a RHOSP user with the admin role. The network used for ports and VMs cannot have the attribute router:external set to True . When creating the network, the --external option must not be specified. The network must be one of the following types: FLAT, VLAN, GRE, VXLAN or GENEVE. For VLAN, GRE, VXLAN, or GENEVE networks, the segmentation ID must be outside the ranges configured in the Networking service ml2_conf.ini file. The ml2_conf.ini file resides on the Controller node host in /var/lib/config-data/puppet-generated/neutron/etc/neutron/plugins/ml2 . Use the following table to determine which section and option to consult for your network segmentation ID range: Table 4.2. ml2_conf.ini options used to set network segmentation IDs Type of network Section Option Geneve [ml2_type_geneve] vni_ranges GRE [ml2_type_gre] tunnel_id_ranges VLAN [ml2_type_vlan] network_vlan_ranges VXLAN [ml2_type_vxlan] vni_ranges Note If these prerequisites are not all met, the Networking service creates a DNS assignment in the internal resolvers using the default dns_domain value, openstacklocal. .
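Before you choose a --provider-segment value, you can check the ranges that are already reserved directly on a Controller node. The following is a minimal sketch, assuming a Geneve network; it uses the ml2_conf.ini path given above, and for other network types you substitute the section and option from Table 4.2.

# On a Controller node: list the segmentation ID ranges that the
# Networking service has already reserved, so that the segment you pass
# to "openstack network create --provider-segment" falls outside them.
# This sketch assumes a Geneve network; see Table 4.2 for other types.
sudo grep -A 10 '^\[ml2_type_geneve\]' \
    /var/lib/config-data/puppet-generated/neutron/etc/neutron/plugins/ml2/ml2_conf.ini \
    | grep vni_ranges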
Procedure As a cloud administrator, source your credentials file. Example Create the zone that you want users in a particular project to create DNS entries with. Example In this example, the cloud administrator creates a zone called example.com. and specifies that users in the project ID, f75ec24a-d361-ab86-54c0-dfe6093245a3 , have permission to add record sets to the zone: Note The DNS domain must always be a fully qualified domain name (FQDN), meaning it will always end with a period. Create the network that you want users in a particular project to create DNS entries with. Example In this example, the cloud administrator creates a network, example-network , that uses the earlier created zone, example.com. , and a segmentation ID, 2017 , that is outside of the range defined in ml2_conf.ini: On the network, create a subnet. Example In this example, the cloud administrator creates a subnet, example-subnet , on the network, example-network : Instruct the cloud users in the project to use the zone and network you have created when they add instances, ports, and floating IPs. Warning If the user creating the instance, port, or floating IP does not have permission to create record sets in the zone, or if the zone does not exist in the DNS service, the Networking service does the following: creates the port with the dns_assignment field populated using the dns_domain provided. does not create a record set in the DNS service. logs the error, "Error publishing port data in external DNS service.". Verification Confirm that the network you created exists. Example Sample output Additional resources zone in the Command Line Interface Reference network in the Command Line Interface Reference subnet in the Command Line Interface Reference 4.2. Integrating virtual machine instances with DNS Integration between the Networking service (neutron) and the DNS service (designate) enables you to automatically enable DNS whenever you create a virtual machine instance. Prerequisites Your cloud administrator has provided you with the required network to use, when creating your DNS-enabled instances. Procedure Source your credentials file. Example Using the network that your cloud administrator has provided, create an instance. Example In this example, the cloud user creates an instance named my_vm : Verification Confirm that a record exists in the DNS service for the instance you created. Example In this example, the DNS service is queried for the example.com. zone: Sample output Additional resources server create in the Command Line Interface Reference 4.3. Integrating ports with DNS Integration between the Networking service (neutron) and the DNS service (designate) enables you to automatically add a DNS record set whenever you create a port. Prerequisites Your cloud administrator has provided you with the required network to use, when creating your DNS-enabled ports. Procedure Source your credentials file. Example Using the zone and network that your cloud administrator has provided, create a port. Example In this example, the cloud user creates a port, my-port , with a DNS name of example-port in the network, example-network : Verification Confirm that a record exists in the DNS service for the port that you created. Example In this example, the DNS service is queried for the example.com. zone: Sample output Additional resources port create in the Command Line Interface Reference 4.4. 
Integrating floating IPs with DNS Integration between the Networking service (neutron) and the DNS service (designate) enables you to automatically add a DNS record set whenever you create a floating IP. Prerequisites Your cloud administrator has provided you with the required external network to use, when creating your DNS-enabled floating IPs. Procedure Source your credentials file. Example Using the zone and the external network that your cloud administrator has provided, create a floating IP. Example In this example, the cloud user creates a floating IP with a DNS name, example-fip , in the network, public : Verification Confirm that a record exists in the DNS service for the floating IP that you created. Example In this example, the DNS service is queried for the example.com. zone: Sample output Additional resources floating ip create in the Command Line Interface Reference
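In addition to querying the zone, you can confirm what the Networking service recorded on the resource itself. The following is a minimal sketch, assuming the my-port example from section 4.3; the dns_name, dns_domain, and dns_assignment fields are only populated when the DNS extensions described earlier are loaded.

# Show the DNS attributes that the Networking service attached to the
# port created in section 4.3. The dns_assignment field lists the
# hostname, FQDN, and IP address that were pushed to the DNS service.
openstack port show my-port -c dns_name -c dns_domain -c dns_assignment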
[ "source ~/overcloudrc", "openstack zone create --email [email protected] example.com. --sudo-project-id f75ec24a-d361-ab86-54c0-dfe6093245a3", "openstack network create --dns-domain example.com. --provider-segment 2017 --provider-network-type geneve example-network", "openstack subnet create --allocation-pool start=192.0.2.10,end=192.0.2.200 --network example-network --subnet-range 192.0.2.0/24 example-subnet", "openstack network show example-network", "+---------------------------+--------------------------------------+ | Field | Value | +---------------------------+--------------------------------------+ | admin_state_up | UP | | availability_zone_hints | | | availability_zones | | | created_at | 2022-09-07T19:03:32Z | | description | | | dns_domain | example.com. | | id | 9ae5b3d5-f12c-4a67-b0e5-655d53cd4f7c | | ipv4_address_scope | None | | ipv6_address_scope | None | | is_default | None | | is_vlan_transparent | None | | mtu | 1450 | | name | network-example | | port_security_enabled | True | | project_id | f75ec24a-d361-ab86-54c0-dfe6093245a3 | | provider:network_type | vxlan | | provider:physical_network | None | | provider:segmentation_id | 2017 | | qos_policy_id | None | | revision_number | 3 | | router:external | Internal | | segments | None | | shared | False | | status | ACTIVE | | subnets | 15546c9d-6faf-43aa-83e7-b1e705eed060 | | tags | | | updated_at | 2022-09-07T19:03:43Z | +---------------------------+--------------------------------------+", "source ~/overcloudrc", "openstack server create --image cirros-0.5.2-x86_64-disk --flavor m1.micro --nic net-id=example-network my_vm", "openstack recordset list --type A example.com.", "+---------------+---------------------+------+------------+--------+--------+ | id | name | type | records | status | action | +---------------+---------------------+------+------------+--------+--------+ | 7b8d1be6-1b23 | my_vm.example.com. | A | 192.0.2.44 | ACTIVE | NONE | | -478a-94d5-60 | | | | | | | b876dca2c8 | | | | | | +---------------+---------------------+------+------------+--------+--------+", "source ~/overcloudrc", "openstack port create --network example-network --dns-name example-port my-port", "openstack recordset list --type A example.com.", "+---------------+---------------------------+------+-------------+--------+--------+ | id | name | type | records | status | action | +---------------+---------------------------+------+-------------+--------+--------+ | 9ebbe94f-2442 | example-port.example.com. | A | 192.0.2.149 | ACTIVE | NONE | | -4bb8-9cfa-6d | | | | | | | ca1daba73f | | | | | | +---------------+---------------------------+------+-------------+--------+--------+", "source ~/overcloudrc", "openstack floating ip create --dns-name example-fip --dns-domain example.com. public", "openstack recordset list --type A example.com.", "+---------------+--------------------------+------+-------------+--------+--------+ | id | name | type | records | status | action | +---------------+--------------------------+------+-------------+--------+--------+ | e1eca823-169d | example-fip.example.com. | A | 192.0.2.106 | ACTIVE | NONE | | -4d0a-975e-91 | | | | | | | a9907ec0c1 | | | | | | +---------------+--------------------------+------+-------------+--------+--------+" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/using_designate_for_dns-as-a-service/use-integrate-dns-service_rhosp-dnsaas
A.4. Host Problems
A.4. Host Problems A.4.1. Certificate Not Found/Serial Number Not Found Errors The IdM information is stored in a separate LDAP directory from the certificate information, and these two LDAP databases are replicated separately. It is possible for a replication agreement to be broken for one directory while still working for the other, which can cause problems with managing clients. Specifically, if the replication agreement between the two CA databases is broken, then a server may not be able to find certificate information about a valid IdM client, causing certificate errors: For example, an IdM server and replica have a functioning replication agreement between their IdM databases, but the replication agreement between their CA databases is broken. If a host is created on the server, the host entry is replicated over to the replica - but the certificate for that host is not replicated. The replica is aware of the client, but any management operations for that client will fail because the replica does not have a copy of its certificate. A.4.2. Debugging Client Connection Problems Client connection problems are apparent immediately. This can mean that users cannot log into a machine or attempts to access user and group information fail (for example, getent passwd admin ). Authentication in IdM is managed with the SSSD daemon, which is described in the Red Hat Enterprise Linux Deployment Guide . If there are problems with client authentication, then check the SSSD information. First, check the SSSD logs in /var/log/sssd/ . There is a specific log file for the DNS domain, such as sssd_example.com.log . If there is not enough information in the logs at the default logging level, then increase the log level. To increase the log level: Open the sssd.conf file. In the [domain/ example.com ] section, set debug_level . Restart the sssd daemon. Check the /var/log/sssd/sssd_example.com.log file for the debug messages.
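To see the effect of the higher log level while you reproduce the failure, restart SSSD and watch the per-domain log from one shell while you repeat the failing lookup from another. A minimal sketch, assuming the example.com domain used above:

# Relevant part of /etc/sssd/sssd.conf after raising the log level:
#
#   [domain/example.com]
#   debug_level = 9
#
# Restart SSSD, then follow the per-domain log while reproducing the
# failing lookup (for example, getent passwd admin) from another shell.
service sssd restart
tail -f /var/log/sssd/sssd_example.com.log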
[ "Certificate operation cannot be completed: EXCEPTION (Certificate serial number 0x2d not found)", "vim /etc/sssd/sssd.conf", "debug_level = 9", "service sssd restart" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/identity_management_guide/hosts-troubleshooting
25.6. Configuring an FCoE Interface to Automatically Mount at Boot
25.6. Configuring an FCoE Interface to Automatically Mount at Boot Note The instructions in this section are available in /usr/share/doc/fcoe-utils- version /README as of Red Hat Enterprise Linux 6.1. Refer to that document for any possible changes throughout minor releases. You can mount newly discovered disks via udev rules, autofs , and other similar methods. Sometimes, however, a specific service might require the FCoE disk to be mounted at boot-time. In such cases, the FCoE disk should be mounted as soon as the fcoe service runs and before the initiation of any service that requires the FCoE disk. To configure an FCoE disk to automatically mount at boot, add the appropriate FCoE mounting code to the startup script for the fcoe service. The fcoe startup script is /lib/systemd/system/fcoe.service . The FCoE mounting code differs depending on your system configuration: whether you are using a simply formatted FCoE disk, LVM, or a multipathed device node. Example 25.2. FCoE Mounting Code The following is sample FCoE mounting code for mounting file systems specified with wildcards in /etc/fstab : The mount_fcoe_disks_from_fstab function should be invoked after the fcoe service script starts the fcoemon daemon. This will mount FCoE disks specified by the following paths in /etc/fstab : Entries with fc- and _netdev substrings enable the mount_fcoe_disks_from_fstab function to identify FCoE disk mount entries. For more information on /etc/fstab entries, refer to man 5 fstab . Note The fcoe service does not implement a timeout for FCoE disk discovery. As such, the FCoE mounting code should implement its own timeout period.
[ "mount_fcoe_disks_from_fstab() { local timeout=20 local done=1 local fcoe_disks=(USD(egrep 'by-path\\/fc-.*_netdev' /etc/fstab | cut -d ' ' -f1)) test -z USDfcoe_disks && return 0 echo -n \"Waiting for fcoe disks . \" while [ USDtimeout -gt 0 ]; do for disk in USD{fcoe_disks[*]}; do if ! test -b USDdisk; then done=0 break fi done test USDdone -eq 1 && break; sleep 1 echo -n \". \" done=1 let timeout-- done if test USDtimeout -eq 0; then echo \"timeout!\" else echo \"done!\" fi # mount any newly discovered disk mount -a 2>/dev/null }", "/dev/disk/by-path/fc-0xXX:0xXX /mnt/fcoe-disk1 ext3 defaults,_netdev 0 0 /dev/disk/by-path/fc-0xYY:0xYY /mnt/fcoe-disk2 ext3 defaults,_netdev 0 0" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/storage_administration_guide/fcoe-config-automount
Chapter 12. Expanding the cluster
Chapter 12. Expanding the cluster You can expand a cluster installed with the Assisted Installer by adding hosts using the user interface or the API. 12.1. Prerequisites You must have access to an Assisted Installer cluster. You must install the OpenShift CLI ( oc ). Ensure that all the required DNS records exist for the cluster that you are adding the worker node to. If you are adding a worker node to a cluster with multiple CPU architectures, you must ensure that the architecture is set to multi . If you are adding arm64 , IBM Power , or IBM zSystems compute nodes to an existing x86_64 cluster, use a platform that supports a mixed architecture. For details, see Installing a mixed architecture cluster Additional resources Installing with the Assisted Installer API Installing with the Assisted Installer UI Adding hosts with the Assisted Installer API Adding hosts with the Assisted Installer UI 12.2. Checking for multiple architectures When adding a node to a cluster with multiple architectures, ensure that the architecture setting is set to multi . Procedure Log in to the cluster using the CLI. Check the architecture setting: USD oc adm release info -o json | jq .metadata.metadata Ensure that the architecture setting is set to 'multi'. { "release.openshift.io/architecture": "multi" } 12.3. Adding hosts with the UI You can add hosts to clusters that were created using the Assisted Installer . Important Adding hosts to Assisted Installer clusters is only supported for clusters running OpenShift Container Platform version 4.11 and up. Procedure Log in to OpenShift Cluster Manager and click the cluster that you want to expand. Click Add hosts and download the discovery ISO for the new host, adding an SSH public key and configuring cluster-wide proxy settings as needed. Optional: Modify ignition files as needed. Boot the target host using the discovery ISO, and wait for the host to be discovered in the console. Select the host role. It can be either a worker or a control plane host. Start the installation. As the installation proceeds, the installation generates pending certificate signing requests (CSRs) for the host. When prompted, approve the pending CSRs to complete the installation. When the host is successfully installed, it is listed as a host in the cluster web console. Important New hosts will be encrypted using the same method as the original cluster. 12.4. Adding hosts with the API You can add hosts to clusters using the Assisted Installer REST API. Prerequisites Install the OpenShift Cluster Manager CLI ( ocm ). Log in to OpenShift Cluster Manager as a user with cluster creation privileges. Install jq . Ensure that all the required DNS records exist for the cluster that you want to expand. Procedure Authenticate against the Assisted Installer REST API and generate an API token for your session. The generated token is valid for 15 minutes only. Set the USDAPI_URL variable by running the following command: USD export API_URL=<api_url> 1 1 Replace <api_url> with the Assisted Installer API URL, for example, https://api.openshift.com Import the cluster by running the following commands: Set the USDCLUSTER_ID variable. 
Log in to the cluster and run the following command: USD export CLUSTER_ID=USD(oc get clusterversion -o jsonpath='{.items[].spec.clusterID}') Set the USDCLUSTER_REQUEST variable that is used to import the cluster: USD export CLUSTER_REQUEST=USD(jq --null-input --arg openshift_cluster_id "USDCLUSTER_ID" '{ "api_vip_dnsname": "<api_vip>", 1 "openshift_cluster_id": USDCLUSTER_ID, "name": "<openshift_cluster_name>" 2 }') 1 Replace <api_vip> with the hostname for the cluster's API server. This can be the DNS domain for the API server or the IP address of the single node which the host can reach. For example, api.compute-1.example.com . 2 Replace <openshift_cluster_name> with the plain text name for the cluster. The cluster name should match the cluster name that was set during the Day 1 cluster installation. Import the cluster and set the USDCLUSTER_ID variable. Run the following command: USD CLUSTER_ID=USD(curl "USDAPI_URL/api/assisted-install/v2/clusters/import" -H "Authorization: Bearer USD{API_TOKEN}" -H 'accept: application/json' -H 'Content-Type: application/json' \ -d "USDCLUSTER_REQUEST" | tee /dev/stderr | jq -r '.id') Generate the InfraEnv resource for the cluster and set the USDINFRA_ENV_ID variable by running the following commands: Download the pull secret file from Red Hat OpenShift Cluster Manager at console.redhat.com . Set the USDINFRA_ENV_REQUEST variable: export INFRA_ENV_REQUEST=USD(jq --null-input \ --slurpfile pull_secret <path_to_pull_secret_file> \ 1 --arg ssh_pub_key "USD(cat <path_to_ssh_pub_key>)" \ 2 --arg cluster_id "USDCLUSTER_ID" '{ "name": "<infraenv_name>", 3 "pull_secret": USDpull_secret[0] | tojson, "cluster_id": USDcluster_id, "ssh_authorized_key": USDssh_pub_key, "image_type": "<iso_image_type>" 4 }') 1 Replace <path_to_pull_secret_file> with the path to the local file containing the downloaded pull secret from Red Hat OpenShift Cluster Manager at console.redhat.com . 2 Replace <path_to_ssh_pub_key> with the path to the public SSH key required to access the host. If you do not set this value, you cannot access the host while in discovery mode. 3 Replace <infraenv_name> with the plain text name for the InfraEnv resource. 4 Replace <iso_image_type> with the ISO image type, either full-iso or minimal-iso . Post the USDINFRA_ENV_REQUEST to the /v2/infra-envs API and set the USDINFRA_ENV_ID variable: USD INFRA_ENV_ID=USD(curl "USDAPI_URL/api/assisted-install/v2/infra-envs" -H "Authorization: Bearer USD{API_TOKEN}" -H 'accept: application/json' -H 'Content-Type: application/json' -d "USDINFRA_ENV_REQUEST" | tee /dev/stderr | jq -r '.id') Get the URL of the discovery ISO for the cluster host by running the following command: USD curl -s "USDAPI_URL/api/assisted-install/v2/infra-envs/USDINFRA_ENV_ID" -H "Authorization: Bearer USD{API_TOKEN}" | jq -r '.download_url' Example output https://api.openshift.com/api/assisted-images/images/41b91e72-c33e-42ee-b80f-b5c5bbf6431a?arch=x86_64&image_token=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE2NTYwMjYzNzEsInN1YiI6IjQxYjkxZTcyLWMzM2UtNDJlZS1iODBmLWI1YzViYmY2NDMxYSJ9.1EX_VGaMNejMhrAvVRBS7PDPIQtbOOc8LtG8OukE1a4&type=minimal-iso&version=4.12 Download the ISO: USD curl -L -s '<iso_url>' --output rhcos-live-minimal.iso 1 1 Replace <iso_url> with the URL for the ISO from the step. Boot the new worker host from the downloaded rhcos-live-minimal.iso . Get the list of hosts in the cluster that are not installed. 
Keep running the following command until the new host shows up: USD curl -s "USDAPI_URL/api/assisted-install/v2/clusters/USDCLUSTER_ID" -H "Authorization: Bearer USD{API_TOKEN}" | jq -r '.hosts[] | select(.status != "installed").id' Example output 2294ba03-c264-4f11-ac08-2f1bb2f8c296 Set the USDHOST_ID variable for the new host, for example: USD HOST_ID=<host_id> 1 1 Replace <host_id> with the host ID from the step. Check that the host is ready to install by running the following command: Note Ensure that you copy the entire command including the complete jq expression. USD curl -s USDAPI_URL/api/assisted-install/v2/clusters/USDCLUSTER_ID -H "Authorization: Bearer USD{API_TOKEN}" | jq ' def host_name(USDhost): if (.suggested_hostname // "") == "" then if (.inventory // "") == "" then "Unknown hostname, please wait" else .inventory | fromjson | .hostname end else .suggested_hostname end; def is_notable(USDvalidation): ["failure", "pending", "error"] | any(. == USDvalidation.status); def notable_validations(USDvalidations_info): [ USDvalidations_info // "{}" | fromjson | to_entries[].value[] | select(is_notable(.)) ]; { "Hosts validations": { "Hosts": [ .hosts[] | select(.status != "installed") | { "id": .id, "name": host_name(.), "status": .status, "notable_validations": notable_validations(.validations_info) } ] }, "Cluster validations info": { "notable_validations": notable_validations(.validations_info) } } ' -r Example output { "Hosts validations": { "Hosts": [ { "id": "97ec378c-3568-460c-bc22-df54534ff08f", "name": "localhost.localdomain", "status": "insufficient", "notable_validations": [ { "id": "ntp-synced", "status": "failure", "message": "Host couldn't synchronize with any NTP server" }, { "id": "api-domain-name-resolved-correctly", "status": "error", "message": "Parse error for domain name resolutions result" }, { "id": "api-int-domain-name-resolved-correctly", "status": "error", "message": "Parse error for domain name resolutions result" }, { "id": "apps-domain-name-resolved-correctly", "status": "error", "message": "Parse error for domain name resolutions result" } ] } ] }, "Cluster validations info": { "notable_validations": [] } } When the command shows that the host is ready, start the installation using the /v2/infra-envs/{infra_env_id}/hosts/{host_id}/actions/install API by running the following command: USD curl -X POST -s "USDAPI_URL/api/assisted-install/v2/infra-envs/USDINFRA_ENV_ID/hosts/USDHOST_ID/actions/install" -H "Authorization: Bearer USD{API_TOKEN}" As the installation proceeds, the installation generates pending certificate signing requests (CSRs) for the host. Important You must approve the CSRs to complete the installation. Keep running the following API call to monitor the cluster installation: USD curl -s "USDAPI_URL/api/assisted-install/v2/clusters/USDCLUSTER_ID" -H "Authorization: Bearer USD{API_TOKEN}" | jq '{ "Cluster day-2 hosts": [ .hosts[] | select(.status != "installed") | {id, requested_hostname, status, status_info, progress, status_updated_at, updated_at, infra_env_id, cluster_id, created_at} ] }' Example output { "Cluster day-2 hosts": [ { "id": "a1c52dde-3432-4f59-b2ae-0a530c851480", "requested_hostname": "control-plane-1", "status": "added-to-existing-cluster", "status_info": "Host has rebooted and no further updates will be posted. 
Please check console for progress and to possibly approve pending CSRs", "progress": { "current_stage": "Done", "installation_percentage": 100, "stage_started_at": "2022-07-08T10:56:20.476Z", "stage_updated_at": "2022-07-08T10:56:20.476Z" }, "status_updated_at": "2022-07-08T10:56:20.476Z", "updated_at": "2022-07-08T10:57:15.306369Z", "infra_env_id": "b74ec0c3-d5b5-4717-a866-5b6854791bd3", "cluster_id": "8f721322-419d-4eed-aa5b-61b50ea586ae", "created_at": "2022-07-06T22:54:57.161614Z" } ] } Optional: Run the following command to see all the events for the cluster: USD curl -s "USDAPI_URL/api/assisted-install/v2/events?cluster_id=USDCLUSTER_ID" -H "Authorization: Bearer USD{API_TOKEN}" | jq -c '.[] | {severity, message, event_time, host_id}' Example output {"severity":"info","message":"Host compute-0: updated status from insufficient to known (Host is ready to be installed)","event_time":"2022-07-08T11:21:46.346Z","host_id":"9d7b3b44-1125-4ad0-9b14-76550087b445"} {"severity":"info","message":"Host compute-0: updated status from known to installing (Installation is in progress)","event_time":"2022-07-08T11:28:28.647Z","host_id":"9d7b3b44-1125-4ad0-9b14-76550087b445"} {"severity":"info","message":"Host compute-0: updated status from installing to installing-in-progress (Starting installation)","event_time":"2022-07-08T11:28:52.068Z","host_id":"9d7b3b44-1125-4ad0-9b14-76550087b445"} {"severity":"info","message":"Uploaded logs for host compute-0 cluster 8f721322-419d-4eed-aa5b-61b50ea586ae","event_time":"2022-07-08T11:29:47.802Z","host_id":"9d7b3b44-1125-4ad0-9b14-76550087b445"} {"severity":"info","message":"Host compute-0: updated status from installing-in-progress to added-to-existing-cluster (Host has rebooted and no further updates will be posted. Please check console for progress and to possibly approve pending CSRs)","event_time":"2022-07-08T11:29:48.259Z","host_id":"9d7b3b44-1125-4ad0-9b14-76550087b445"} {"severity":"info","message":"Host: compute-0, reached installation stage Rebooting","event_time":"2022-07-08T11:29:48.261Z","host_id":"9d7b3b44-1125-4ad0-9b14-76550087b445"} Log in to the cluster and approve the pending CSRs to complete the installation. Verification Check that the new host was successfully added to the cluster with a status of Ready : USD oc get nodes Example output NAME STATUS ROLES AGE VERSION control-plane-1.example.com Ready master,worker 56m v1.25.0 compute-1.example.com Ready worker 11m v1.25.0 12.5. Installing a mixed-architecture cluster From OpenShift Container Platform version 4.12.0 and later, a cluster with an x86_64 control plane can support mixed-architecture worker nodes of two different CPU architectures. Mixed-architecture clusters combine the strengths of each architecture and support a variety of workloads. From version 4.12.0, you can add arm64 worker nodes to an existing OpenShift cluster with an x86_64 control plane. From version 4.14.0, you can add IBM Power or IBM zSystems worker nodes to an existing x86_64 control plane. The main steps of the installation are as follows: Create and register a multi-architecture cluster. Create an x86_64 infrastructure environment, download the ISO for x86_64 , and add the control plane. The control plane must have the x86_64 architecture. Create an arm64 , IBM Power or IBM zSystems infrastructure environment, download the ISO for arm64 , IBM Power or IBM zSystems , and add the worker nodes. These steps are detailed in the procedure below. 
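For the API-based flow in section 12.4, the CSR approvals and the readiness check can be looped until the new worker reports Ready. The following is a minimal sketch rather than part of the documented procedure; it reuses the oc commands shown in this chapter, and the node name is a placeholder.

#!/usr/bin/env bash
# Approve pending CSRs until the day-2 worker added in section 12.4
# reports Ready. NODE is a placeholder for the new host's name.
NODE=compute-1.example.com

until oc get node "${NODE}" 2>/dev/null | grep -q ' Ready'; do
  # Approve any CSRs that have no status yet (same go-template filter
  # that is used later in this chapter).
  oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' \
    | xargs --no-run-if-empty oc adm certificate approve
  sleep 30
done
oc get node "${NODE}"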
Supported platforms The table below lists the platforms that support a mixed-architecture cluster for each OpenShift Container Platform version. Use the appropriate platforms for the version you are installing. OpenShift Container Platform version Supported platforms Day 1 control plane architecture Day 2 node architecture 4.12.0 Microsoft Azure (TP) x86_64 arm64 4.13.0 Microsoft Azure Amazon Web Services Bare Metal (TP) x86_64 x86_64 x86_64 arm64 arm64 arm64 4.14.0 Microsoft Azure Amazon Web Services Bare Metal Google Cloud Platform IBM(R) Power(R) IBM Z(R) x86_64 x86_64 x86_64 x86_64 x86_64 x86_64 arm64 arm64 arm64 arm64 ppc64le s390x Important Technology Preview (TP) features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Main steps Start the procedure for installing OpenShift Container Platform using the API. For details, see Installing with the Assisted Installer API in the Additional Resources section. When you reach the "Registering a new cluster" step of the installation, register the cluster as a multi-architecture cluster: USD curl -s -X POST https://api.openshift.com/api/assisted-install/v2/clusters \ -H "Authorization: Bearer USD{API_TOKEN}" \ -H "Content-Type: application/json" \ -d "USD(jq --null-input \ --slurpfile pull_secret ~/Downloads/pull-secret.txt ' { "name": "testcluster", "openshift_version": "<version-number>-multi", 1 "cpu_architecture" : "multi" 2 "high_availability_mode": "full" 3 "base_dns_domain": "example.com", "pull_secret": USDpull_secret[0] | tojson } ')" | jq '.id' Note 1 Use the multi- option for the OpenShift version number; for example, "4.12-multi" . 2 Set the CPU architecture` to "multi" . 3 Use the full value to indicate Multi-Node OpenShift. When you reach the "Registering a new infrastructure environment" step of the installation, set cpu_architecture to x86_64 : USD curl https://api.openshift.com/api/assisted-install/v2/infra-envs \ -H "Authorization: Bearer USD{API_TOKEN}" \ -H "Content-Type: application/json" \ -d "USD(jq --null-input \ --slurpfile pull_secret ~/Downloads/pull-secret.txt \ --arg cluster_id USD{CLUSTER_ID} ' { "name": "testcluster-infra-env", "image_type":"full-iso", "cluster_id": USDcluster_id, "cpu_architecture" : "x86_64" "pull_secret": USDpull_secret[0] | tojson } ')" | jq '.id' When you reach the "Adding hosts" step of the installation, set host_role to master : Note For more information, see Assigning Roles to Hosts in Additional Resources . USD curl https://api.openshift.com/api/assisted-install/v2/infra-envs/USD{INFRA_ENV_ID}/hosts/<host_id> \ -X PATCH \ -H "Authorization: Bearer USD{API_TOKEN}" \ -H "Content-Type: application/json" \ -d ' { "host_role":"master" } ' | jq Download the discovery image for the x86_64 architecture. Boot the x86_64 architecture hosts using the generated discovery image. Start the installation and wait for the cluster to be fully installed. Repeat the "Registering a new infrastructure environment" step of the installation. This time, set cpu_architecture to one of the following: ppc64le (for IBM Power), s390x (for IBM Z), or arm64 . 
For example: USD curl https://api.openshift.com/api/assisted-install/v2/infra-envs \ -H "Authorization: Bearer USD{API_TOKEN}" \ -H "Content-Type: application/json" \ -d "USD(jq --null-input \ --slurpfile pull_secret ~/Downloads/pull-secret.txt \ --arg cluster_id USD{CLUSTER_ID} ' { "name": "testcluster-infra-env-arm64", "image_type": "full-iso", "cluster_id": USDcluster_id, "cpu_architecture": "arm64", "pull_secret": USDpull_secret[0] | tojson } ')" | jq '.id' Repeat the "Adding hosts" step of the installation. This time, set host_role to worker : Note For more details, see Assigning Roles to Hosts in Additional Resources . USD curl https://api.openshift.com/api/assisted-install/v2/infra-envs/USD{INFRA_ENV_ID}/hosts/<host_id> \ -X PATCH \ -H "Authorization: Bearer USD{API_TOKEN}" \ -H "Content-Type: application/json" \ -d ' { "host_role":"worker" } ' | jq Download the discovery image for the arm64 , ppc64le or s390x architecture. Boot the arm64 , ppc64le or s390x architecture hosts using the generated discovery image. Start the installation and wait for the cluster to be fully installed. Verification View the arm64 , ppc64le or s390x worker nodes in the cluster by running the following command: USD oc get nodes -o wide 12.6. Installing a primary control plane node on a healthy cluster This procedure describes how to install a primary control plane node on a healthy OpenShift Container Platform cluster. If the cluster is unhealthy, additional operations are required before it can be managed. See Additional Resources for more information. Prerequisites You are using OpenShift Container Platform 4.11 or newer with the correct etcd-operator version. You have installed a healthy cluster with a minimum of three nodes. You have assigned role: master to a single node. Procedure Review and approve CSRs Review the CertificateSigningRequests (CSRs): USD oc get csr | grep Pending Example output csr-5sd59 8m19s kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper <none> Pending csr-xzqts 10s kubernetes.io/kubelet-serving system:node:worker-6 <none> Pending Approve all pending CSRs: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Important You must approve the CSRs to complete the installation. Confirm the primary node is in Ready status: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 4h42m v1.24.0+3882f8f worker-1 Ready worker 4h29m v1.24.0+3882f8f master-2 Ready master 4h43m v1.24.0+3882f8f master-3 Ready master 4h27m v1.24.0+3882f8f worker-4 Ready worker 4h30m v1.24.0+3882f8f master-5 Ready master 105s v1.24.0+3882f8f Note The etcd-operator requires a Machine Custom Resource (CR) referencing the new node when the cluster runs with a functional Machine API. 
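Before creating these resources in the next step, you can quickly check whether a Machine CR and a BareMetalHost CR already reference the new node. This is only an illustrative sketch; it uses the same openshift-machine-api namespace and resource short names (machine, bmh) that appear elsewhere in this procedure:
USD oc get machine -n openshift-machine-api
USD oc get bmh -n openshift-machine-api
USD oc get nodes
A fully linked day-2 control plane node eventually appears in all three listings, with its Machine CR in the Running phase.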
Link the Machine CR with BareMetalHost and Node : Create the BareMetalHost CR with a unique .metadata.name value": apiVersion: metal3.io/v1alpha1 kind: BareMetalHost metadata: name: custom-master3 namespace: openshift-machine-api annotations: spec: automatedCleaningMode: metadata bootMACAddress: 00:00:00:00:00:02 bootMode: UEFI customDeploy: method: install_coreos externallyProvisioned: true online: true userData: name: master-user-data-managed namespace: openshift-machine-api USD oc create -f <filename> Apply the BareMetalHost CR: USD oc apply -f <filename> Create the Machine CR using the unique .machine.name value: apiVersion: machine.openshift.io/v1beta1 kind: Machine metadata: annotations: machine.openshift.io/instance-state: externally provisioned metal3.io/BareMetalHost: openshift-machine-api/custom-master3 finalizers: - machine.machine.openshift.io generation: 3 labels: machine.openshift.io/cluster-api-cluster: test-day2-1-6qv96 machine.openshift.io/cluster-api-machine-role: master machine.openshift.io/cluster-api-machine-type: master name: custom-master3 namespace: openshift-machine-api spec: metadata: {} providerSpec: value: apiVersion: baremetal.cluster.k8s.io/v1alpha1 customDeploy: method: install_coreos hostSelector: {} image: checksum: "" url: "" kind: BareMetalMachineProviderSpec metadata: creationTimestamp: null userData: name: master-user-data-managed USD oc create -f <filename> Apply the Machine CR: USD oc apply -f <filename> Link BareMetalHost , Machine , and Node using the link-machine-and-node.sh script: #!/bin/bash # Credit goes to https://bugzilla.redhat.com/show_bug.cgi?id=1801238. # This script will link Machine object and Node object. This is needed # in order to have IP address of the Node present in the status of the Machine. set -x set -e machine="USD1" node="USD2" if [ -z "USDmachine" -o -z "USDnode" ]; then echo "Usage: USD0 MACHINE NODE" exit 1 fi uid=USD(echo USDnode | cut -f1 -d':') node_name=USD(echo USDnode | cut -f2 -d':') oc proxy & proxy_pid=USD! function kill_proxy { kill USDproxy_pid } trap kill_proxy EXIT SIGINT HOST_PROXY_API_PATH="http://localhost:8001/apis/metal3.io/v1alpha1/namespaces/openshift-machine-api/baremetalhosts" function wait_for_json() { local name local url local curl_opts local timeout local start_time local curr_time local time_diff name="USD1" url="USD2" timeout="USD3" shift 3 curl_opts="USD@" echo -n "Waiting for USDname to respond" start_time=USD(date +%s) until curl -g -X GET "USDurl" "USD{curl_opts[@]}" 2> /dev/null | jq '.' 2> /dev/null > /dev/null; do echo -n "." curr_time=USD(date +%s) time_diff=USD((USDcurr_time - USDstart_time)) if [[ USDtime_diff -gt USDtimeout ]]; then echo "\nTimed out waiting for USDname" return 1 fi sleep 5 done echo " Success!" return 0 } wait_for_json oc_proxy "USD{HOST_PROXY_API_PATH}" 10 -H "Accept: application/json" -H "Content-Type: application/json" addresses=USD(oc get node -n openshift-machine-api USD{node_name} -o json | jq -c '.status.addresses') machine_data=USD(oc get machine -n openshift-machine-api -o json USD{machine}) host=USD(echo "USDmachine_data" | jq '.metadata.annotations["metal3.io/BareMetalHost"]' | cut -f2 -d/ | sed 's/"//g') if [ -z "USDhost" ]; then echo "Machine USDmachine is not linked to a host yet." 1>&2 exit 1 fi # The address structure on the host doesn't match the node, so extract # the values we want into separate variables so we can build the patch # we need. hostname=USD(echo "USD{addresses}" | jq '.[] | select(. 
| .type == "Hostname") | .address' | sed 's/"//g') ipaddr=USD(echo "USD{addresses}" | jq '.[] | select(. | .type == "InternalIP") | .address' | sed 's/"//g') host_patch=' { "status": { "hardware": { "hostname": "'USD{hostname}'", "nics": [ { "ip": "'USD{ipaddr}'", "mac": "00:00:00:00:00:00", "model": "unknown", "speedGbps": 10, "vlanId": 0, "pxe": true, "name": "eth1" } ], "systemVendor": { "manufacturer": "Red Hat", "productName": "product name", "serialNumber": "" }, "firmware": { "bios": { "date": "04/01/2014", "vendor": "SeaBIOS", "version": "1.11.0-2.el7" } }, "ramMebibytes": 0, "storage": [], "cpu": { "arch": "x86_64", "model": "Intel(R) Xeon(R) CPU E5-2630 v4 @ 2.20GHz", "clockMegahertz": 2199.998, "count": 4, "flags": [] } } } } ' echo "PATCHING HOST" echo "USD{host_patch}" | jq . curl -s \ -X PATCH \ USD{HOST_PROXY_API_PATH}/USD{host}/status \ -H "Content-type: application/merge-patch+json" \ -d "USD{host_patch}" oc get baremetalhost -n openshift-machine-api -o yaml "USD{host}" USD bash link-machine-and-node.sh custom-master3 worker-5 Confirm etcd members: USD oc rsh -n openshift-etcd etcd-worker-2 etcdctl member list -w table Example output +--------+---------+--------+--------------+--------------+---------+ | ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS | LEARNER | +--------+---------+--------+--------------+--------------+---------+ |2c18942f| started |worker-3|192.168.111.26|192.168.111.26| false | |61e2a860| started |worker-2|192.168.111.25|192.168.111.25| false | |ead4f280| started |worker-5|192.168.111.28|192.168.111.28| false | +--------+---------+--------+--------------+--------------+---------+ Confirm the etcd-operator configuration applies to all nodes: USD oc get clusteroperator etcd Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE etcd 4.11.5 True False False 5h54m Confirm etcd-operator health: USD oc rsh -n openshift-etcd etcd-worker-0 etcdctl endpoint health Example output 192.168.111.26 is healthy: committed proposal: took = 11.297561ms 192.168.111.25 is healthy: committed proposal: took = 13.892416ms 192.168.111.28 is healthy: committed proposal: took = 11.870755ms Confirm node health: USD oc get Nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 6h20m v1.24.0+3882f8f worker-1 Ready worker 6h7m v1.24.0+3882f8f master-2 Ready master 6h20m v1.24.0+3882f8f master-3 Ready master 6h4m v1.24.0+3882f8f worker-4 Ready worker 6h7m v1.24.0+3882f8f master-5 Ready master 99m v1.24.0+3882f8f Confirm the ClusterOperators health: USD oc get ClusterOperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MSG authentication 4.11.5 True False False 5h57m baremetal 4.11.5 True False False 6h19m cloud-controller-manager 4.11.5 True False False 6h20m cloud-credential 4.11.5 True False False 6h23m cluster-autoscaler 4.11.5 True False False 6h18m config-operator 4.11.5 True False False 6h19m console 4.11.5 True False False 6h4m csi-snapshot-controller 4.11.5 True False False 6h19m dns 4.11.5 True False False 6h18m etcd 4.11.5 True False False 6h17m image-registry 4.11.5 True False False 6h7m ingress 4.11.5 True False False 6h6m insights 4.11.5 True False False 6h12m kube-apiserver 4.11.5 True False False 6h16m kube-controller-manager 4.11.5 True False False 6h16m kube-scheduler 4.11.5 True False False 6h16m kube-storage-version-migrator 4.11.5 True False False 6h19m machine-api 4.11.5 True False False 6h15m machine-approver 4.11.5 True False False 6h19m machine-config 4.11.5 True False False 6h18m marketplace 4.11.5 
True False False 6h18m monitoring 4.11.5 True False False 6h4m network 4.11.5 True False False 6h20m node-tuning 4.11.5 True False False 6h18m openshift-apiserver 4.11.5 True False False 6h8m openshift-controller-manager 4.11.5 True False False 6h7m openshift-samples 4.11.5 True False False 6h12m operator-lifecycle-manager 4.11.5 True False False 6h18m operator-lifecycle-manager-catalog 4.11.5 True False False 6h19m operator-lifecycle-manager-pkgsvr 4.11.5 True False False 6h12m service-ca 4.11.5 True False False 6h19m storage 4.11.5 True False False 6h19m Confirm the ClusterVersion : USD oc get ClusterVersion Example output NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version 4.11.5 True False 5h57m Cluster version is 4.11.5 Remove the old control plane node: Delete the BareMetalHost CR: USD oc delete bmh -n openshift-machine-api custom-master3 Confirm the Machine is unhealthy: USD oc get machine -A Example output NAMESPACE NAME PHASE AGE openshift-machine-api custom-master3 Running 14h openshift-machine-api test-day2-1-6qv96-master-0 Failed 20h openshift-machine-api test-day2-1-6qv96-master-1 Running 20h openshift-machine-api test-day2-1-6qv96-master-2 Running 20h openshift-machine-api test-day2-1-6qv96-worker-0-8w7vr Running 19h openshift-machine-api test-day2-1-6qv96-worker-0-rxddj Running 19h Delete the Machine CR: USD oc delete machine -n openshift-machine-api test-day2-1-6qv96-master-0 machine.machine.openshift.io "test-day2-1-6qv96-master-0" deleted Confirm removal of the Node CR: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION worker-1 Ready worker 19h v1.24.0+3882f8f master-2 Ready master 20h v1.24.0+3882f8f master-3 Ready master 19h v1.24.0+3882f8f worker-4 Ready worker 19h v1.24.0+3882f8f master-5 Ready master 15h v1.24.0+3882f8f Check etcd-operator logs to confirm status of the etcd cluster: USD oc logs -n openshift-etcd-operator etcd-operator-8668df65d-lvpjf Example output E0927 07:53:10.597523 1 base_controller.go:272] ClusterMemberRemovalController reconciliation failed: cannot remove member: 192.168.111.23 because it is reported as healthy but it doesn't have a machine nor a node resource Remove the physical machine to allow etcd-operator to reconcile the cluster members: USD oc rsh -n openshift-etcd etcd-worker-2 etcdctl member list -w table; etcdctl endpoint health Example output +--------+---------+--------+--------------+--------------+---------+ | ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS | LEARNER | +--------+---------+--------+--------------+--------------+---------+ |2c18942f| started |worker-3|192.168.111.26|192.168.111.26| false | |61e2a860| started |worker-2|192.168.111.25|192.168.111.25| false | |ead4f280| started |worker-5|192.168.111.28|192.168.111.28| false | +--------+---------+--------+--------------+--------------+---------+ 192.168.111.26 is healthy: committed proposal: took = 10.458132ms 192.168.111.25 is healthy: committed proposal: took = 11.047349ms 192.168.111.28 is healthy: committed proposal: took = 11.414402ms Additional resources Installing a primary control plane node on an unhealthy cluster 12.7. Installing a primary control plane node on an unhealthy cluster This procedure describes how to install a primary control plane node on an unhealthy OpenShift Container Platform cluster. Prerequisites You are using OpenShift Container Platform 4.11 or newer with the correct etcd-operator version. You have installed a healthy cluster with a minimum of two nodes. You have created the Day 2 control plane. 
You have assigned role: master to a single node. Procedure Confirm initial state of the cluster: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION worker-1 Ready worker 20h v1.24.0+3882f8f master-2 NotReady master 20h v1.24.0+3882f8f master-3 Ready master 20h v1.24.0+3882f8f worker-4 Ready worker 20h v1.24.0+3882f8f master-5 Ready master 15h v1.24.0+3882f8f Confirm the etcd-operator detects the cluster as unhealthy: USD oc logs -n openshift-etcd-operator etcd-operator-8668df65d-lvpjf Example output E0927 08:24:23.983733 1 base_controller.go:272] DefragController reconciliation failed: cluster is unhealthy: 2 of 3 members are available, worker-2 is unhealthy Confirm the etcdctl members: USD oc rsh -n openshift-etcd etcd-worker-3 etcdctl member list -w table Example output +--------+---------+--------+--------------+--------------+---------+ | ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS | LEARNER | +--------+---------+--------+--------------+--------------+---------+ |2c18942f| started |worker-3|192.168.111.26|192.168.111.26| false | |61e2a860| started |worker-2|192.168.111.25|192.168.111.25| false | |ead4f280| started |worker-5|192.168.111.28|192.168.111.28| false | +--------+---------+--------+--------------+--------------+---------+ Confirm that etcdctl reports an unhealthy member of the cluster: USD etcdctl endpoint health Example output {"level":"warn","ts":"2022-09-27T08:25:35.953Z","logger":"client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000680380/192.168.111.25","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing dial tcp 192.168.111.25: connect: no route to host\""} 192.168.111.28 is healthy: committed proposal: took = 12.465641ms 192.168.111.26 is healthy: committed proposal: took = 12.297059ms 192.168.111.25 is unhealthy: failed to commit proposal: context deadline exceeded Error: unhealthy cluster Remove the unhealthy control plane by deleting the Machine Custom Resource: USD oc delete machine -n openshift-machine-api test-day2-1-6qv96-master-2 Note The Machine and Node Custom Resources (CRs) will not be deleted if the unhealthy cluster cannot run successfully. 
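One way to see this deletion being held back (shown only as an illustrative sketch; the machine name is taken from the example above, and the fields queried are standard Machine API fields) is to inspect the deletion timestamp and lifecycle hooks on the Machine CR, which remain set while the etcd-operator deletion hook is still in place:
USD oc get machine test-day2-1-6qv96-master-2 -n openshift-machine-api -o jsonpath='{.metadata.deletionTimestamp}{"\n"}'
USD oc get machine test-day2-1-6qv96-master-2 -n openshift-machine-api -o jsonpath='{.spec.lifecycleHooks}{"\n"}'
The next steps confirm the same condition from the etcd-operator logs and then remove the unhealthy etcd member manually.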
Confirm that etcd-operator has not removed the unhealthy machine: USD oc logs -n openshift-etcd-operator etcd-operator-8668df65d-lvpjf -f Example output I0927 08:58:41.249222 1 machinedeletionhooks.go:135] skip removing the deletion hook from machine test-day2-1-6qv96-master-2 since its member is still present with any of: [{InternalIP } {InternalIP 192.168.111.26}] Remove the unhealthy etcdctl member manually: USD oc rsh -n openshift-etcd etcd-worker-3\ etcdctl member list -w table Example output +--------+---------+--------+--------------+--------------+---------+ | ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS | LEARNER | +--------+---------+--------+--------------+--------------+---------+ |2c18942f| started |worker-3|192.168.111.26|192.168.111.26| false | |61e2a860| started |worker-2|192.168.111.25|192.168.111.25| false | |ead4f280| started |worker-5|192.168.111.28|192.168.111.28| false | +--------+---------+--------+--------------+--------------+---------+ Confirm that etcdctl reports an unhealthy member of the cluster: USD etcdctl endpoint health Example output {"level":"warn","ts":"2022-09-27T10:31:07.227Z","logger":"client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc0000d6e00/192.168.111.25","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \"transport: Error while dialing dial tcp 192.168.111.25: connect: no route to host\""} 192.168.111.28 is healthy: committed proposal: took = 13.038278ms 192.168.111.26 is healthy: committed proposal: took = 12.950355ms 192.168.111.25 is unhealthy: failed to commit proposal: context deadline exceeded Error: unhealthy cluster Remove the unhealthy cluster by deleting the etcdctl member Custom Resource: USD etcdctl member remove 61e2a86084aafa62 Example output Member 61e2a86084aafa62 removed from cluster 6881c977b97990d7 Confirm members of etcdctl by running the following command: USD etcdctl member list -w table Example output +----------+---------+--------+--------------+--------------+-------+ | ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS |LEARNER| +----------+---------+--------+--------------+--------------+-------+ | 2c18942f | started |worker-3|192.168.111.26|192.168.111.26| false | | ead4f280 | started |worker-5|192.168.111.28|192.168.111.28| false | +----------+---------+--------+--------------+--------------+-------+ Review and approve Certificate Signing Requests Review the Certificate Signing Requests (CSRs): USD oc get csr | grep Pending Example output csr-5sd59 8m19s kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper <none> Pending csr-xzqts 10s kubernetes.io/kubelet-serving system:node:worker-6 <none> Pending Approve all pending CSRs: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note You must approve the CSRs to complete the installation. Confirm ready status of the control plane node: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION worker-1 Ready worker 22h v1.24.0+3882f8f master-3 Ready master 22h v1.24.0+3882f8f worker-4 Ready worker 22h v1.24.0+3882f8f master-5 Ready master 17h v1.24.0+3882f8f master-6 Ready master 2m52s v1.24.0+3882f8f Validate the Machine , Node and BareMetalHost Custom Resources. 
The etcd-operator requires Machine CRs to be present if the cluster is running with the functional Machine API. Machine CRs are displayed during the Running phase when present. Create Machine Custom Resource linked with BareMetalHost and Node . Make sure there is a Machine CR referencing the newly added node. Important Boot-it-yourself will not create BareMetalHost and Machine CRs, so you must create them. Failure to create the BareMetalHost and Machine CRs will generate errors when running etcd-operator . Add BareMetalHost Custom Resource: USD oc create bmh -n openshift-machine-api custom-master3 Add Machine Custom Resource: USD oc create machine -n openshift-machine-api custom-master3 Link BareMetalHost , Machine , and Node by running the link-machine-and-node.sh script: #!/bin/bash # Credit goes to https://bugzilla.redhat.com/show_bug.cgi?id=1801238. # This script will link Machine object and Node object. This is needed # in order to have IP address of the Node present in the status of the Machine. set -x set -e machine="USD1" node="USD2" if [ -z "USDmachine" -o -z "USDnode" ]; then echo "Usage: USD0 MACHINE NODE" exit 1 fi uid=USD(echo USDnode | cut -f1 -d':') node_name=USD(echo USDnode | cut -f2 -d':') oc proxy & proxy_pid=USD! function kill_proxy { kill USDproxy_pid } trap kill_proxy EXIT SIGINT HOST_PROXY_API_PATH="http://localhost:8001/apis/metal3.io/v1alpha1/namespaces/openshift-machine-api/baremetalhosts" function wait_for_json() { local name local url local curl_opts local timeout local start_time local curr_time local time_diff name="USD1" url="USD2" timeout="USD3" shift 3 curl_opts="USD@" echo -n "Waiting for USDname to respond" start_time=USD(date +%s) until curl -g -X GET "USDurl" "USD{curl_opts[@]}" 2> /dev/null | jq '.' 2> /dev/null > /dev/null; do echo -n "." curr_time=USD(date +%s) time_diff=USD((USDcurr_time - USDstart_time)) if [[ USDtime_diff -gt USDtimeout ]]; then echo "\nTimed out waiting for USDname" return 1 fi sleep 5 done echo " Success!" return 0 } wait_for_json oc_proxy "USD{HOST_PROXY_API_PATH}" 10 -H "Accept: application/json" -H "Content-Type: application/json" addresses=USD(oc get node -n openshift-machine-api USD{node_name} -o json | jq -c '.status.addresses') machine_data=USD(oc get machine -n openshift-machine-api -o json USD{machine}) host=USD(echo "USDmachine_data" | jq '.metadata.annotations["metal3.io/BareMetalHost"]' | cut -f2 -d/ | sed 's/"//g') if [ -z "USDhost" ]; then echo "Machine USDmachine is not linked to a host yet." 1>&2 exit 1 fi # The address structure on the host doesn't match the node, so extract # the values we want into separate variables so we can build the patch # we need. hostname=USD(echo "USD{addresses}" | jq '.[] | select(. | .type == "Hostname") | .address' | sed 's/"//g') ipaddr=USD(echo "USD{addresses}" | jq '.[] | select(. | .type == "InternalIP") | .address' | sed 's/"//g') host_patch=' { "status": { "hardware": { "hostname": "'USD{hostname}'", "nics": [ { "ip": "'USD{ipaddr}'", "mac": "00:00:00:00:00:00", "model": "unknown", "speedGbps": 10, "vlanId": 0, "pxe": true, "name": "eth1" } ], "systemVendor": { "manufacturer": "Red Hat", "productName": "product name", "serialNumber": "" }, "firmware": { "bios": { "date": "04/01/2014", "vendor": "SeaBIOS", "version": "1.11.0-2.el7" } }, "ramMebibytes": 0, "storage": [], "cpu": { "arch": "x86_64", "model": "Intel(R) Xeon(R) CPU E5-2630 v4 @ 2.20GHz", "clockMegahertz": 2199.998, "count": 4, "flags": [] } } } } ' echo "PATCHING HOST" echo "USD{host_patch}" | jq . 
curl -s \ -X PATCH \ USD{HOST_PROXY_API_PATH}/USD{host}/status \ -H "Content-type: application/merge-patch+json" \ -d "USD{host_patch}" oc get baremetalhost -n openshift-machine-api -o yaml "USD{host}" USD bash link-machine-and-node.sh custom-master3 worker-3 Confirm members of etcdctl by running the following command: USD oc rsh -n openshift-etcd etcd-worker-3 etcdctl member list -w table Example output +---------+-------+--------+--------------+--------------+-------+ | ID | STATUS| NAME | PEER ADDRS | CLIENT ADDRS |LEARNER| +---------+-------+--------+--------------+--------------+-------+ | 2c18942f|started|worker-3|192.168.111.26|192.168.111.26| false | | ead4f280|started|worker-5|192.168.111.28|192.168.111.28| false | | 79153c5a|started|worker-6|192.168.111.29|192.168.111.29| false | +---------+-------+--------+--------------+--------------+-------+ Confirm the etcd operator has configured all nodes: USD oc get clusteroperator etcd Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE etcd 4.11.5 True False False 22h Confirm health of etcdctl : USD oc rsh -n openshift-etcd etcd-worker-3 etcdctl endpoint health Example output 192.168.111.26 is healthy: committed proposal: took = 9.105375ms 192.168.111.28 is healthy: committed proposal: took = 9.15205ms 192.168.111.29 is healthy: committed proposal: took = 10.277577ms Confirm the health of the nodes: USD oc get Nodes Example output NAME STATUS ROLES AGE VERSION worker-1 Ready worker 22h v1.24.0+3882f8f master-3 Ready master 22h v1.24.0+3882f8f worker-4 Ready worker 22h v1.24.0+3882f8f master-5 Ready master 18h v1.24.0+3882f8f master-6 Ready master 40m v1.24.0+3882f8f Confirm the health of the ClusterOperators : USD oc get ClusterOperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.11.5 True False False 150m baremetal 4.11.5 True False False 22h cloud-controller-manager 4.11.5 True False False 22h cloud-credential 4.11.5 True False False 22h cluster-autoscaler 4.11.5 True False False 22h config-operator 4.11.5 True False False 22h console 4.11.5 True False False 145m csi-snapshot-controller 4.11.5 True False False 22h dns 4.11.5 True False False 22h etcd 4.11.5 True False False 22h image-registry 4.11.5 True False False 22h ingress 4.11.5 True False False 22h insights 4.11.5 True False False 22h kube-apiserver 4.11.5 True False False 22h kube-controller-manager 4.11.5 True False False 22h kube-scheduler 4.11.5 True False False 22h kube-storage-version-migrator 4.11.5 True False False 148m machine-api 4.11.5 True False False 22h machine-approver 4.11.5 True False False 22h machine-config 4.11.5 True False False 110m marketplace 4.11.5 True False False 22h monitoring 4.11.5 True False False 22h network 4.11.5 True False False 22h node-tuning 4.11.5 True False False 22h openshift-apiserver 4.11.5 True False False 163m openshift-controller-manager 4.11.5 True False False 22h openshift-samples 4.11.5 True False False 22h operator-lifecycle-manager 4.11.5 True False False 22h operator-lifecycle-manager-catalog 4.11.5 True False False 22h operator-lifecycle-manager-pkgsvr 4.11.5 True False False 22h service-ca 4.11.5 True False False 22h storage 4.11.5 True False False 22h Confirm the ClusterVersion : USD oc get ClusterVersion Example output NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version 4.11.5 True False 22h Cluster version is 4.11.5 12.8. Additional resources Installing a primary control plane node on a healthy cluster Authenticating with the REST API
[ "oc adm release info -o json | jq .metadata.metadata", "{ \"release.openshift.io/architecture\": \"multi\" }", "export API_URL=<api_url> 1", "export CLUSTER_ID=USD(oc get clusterversion -o jsonpath='{.items[].spec.clusterID}')", "export CLUSTER_REQUEST=USD(jq --null-input --arg openshift_cluster_id \"USDCLUSTER_ID\" '{ \"api_vip_dnsname\": \"<api_vip>\", 1 \"openshift_cluster_id\": USDCLUSTER_ID, \"name\": \"<openshift_cluster_name>\" 2 }')", "CLUSTER_ID=USD(curl \"USDAPI_URL/api/assisted-install/v2/clusters/import\" -H \"Authorization: Bearer USD{API_TOKEN}\" -H 'accept: application/json' -H 'Content-Type: application/json' -d \"USDCLUSTER_REQUEST\" | tee /dev/stderr | jq -r '.id')", "export INFRA_ENV_REQUEST=USD(jq --null-input --slurpfile pull_secret <path_to_pull_secret_file> \\ 1 --arg ssh_pub_key \"USD(cat <path_to_ssh_pub_key>)\" \\ 2 --arg cluster_id \"USDCLUSTER_ID\" '{ \"name\": \"<infraenv_name>\", 3 \"pull_secret\": USDpull_secret[0] | tojson, \"cluster_id\": USDcluster_id, \"ssh_authorized_key\": USDssh_pub_key, \"image_type\": \"<iso_image_type>\" 4 }')", "INFRA_ENV_ID=USD(curl \"USDAPI_URL/api/assisted-install/v2/infra-envs\" -H \"Authorization: Bearer USD{API_TOKEN}\" -H 'accept: application/json' -H 'Content-Type: application/json' -d \"USDINFRA_ENV_REQUEST\" | tee /dev/stderr | jq -r '.id')", "curl -s \"USDAPI_URL/api/assisted-install/v2/infra-envs/USDINFRA_ENV_ID\" -H \"Authorization: Bearer USD{API_TOKEN}\" | jq -r '.download_url'", "https://api.openshift.com/api/assisted-images/images/41b91e72-c33e-42ee-b80f-b5c5bbf6431a?arch=x86_64&image_token=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE2NTYwMjYzNzEsInN1YiI6IjQxYjkxZTcyLWMzM2UtNDJlZS1iODBmLWI1YzViYmY2NDMxYSJ9.1EX_VGaMNejMhrAvVRBS7PDPIQtbOOc8LtG8OukE1a4&type=minimal-iso&version=4.12", "curl -L -s '<iso_url>' --output rhcos-live-minimal.iso 1", "curl -s \"USDAPI_URL/api/assisted-install/v2/clusters/USDCLUSTER_ID\" -H \"Authorization: Bearer USD{API_TOKEN}\" | jq -r '.hosts[] | select(.status != \"installed\").id'", "2294ba03-c264-4f11-ac08-2f1bb2f8c296", "HOST_ID=<host_id> 1", "curl -s USDAPI_URL/api/assisted-install/v2/clusters/USDCLUSTER_ID -H \"Authorization: Bearer USD{API_TOKEN}\" | jq ' def host_name(USDhost): if (.suggested_hostname // \"\") == \"\" then if (.inventory // \"\") == \"\" then \"Unknown hostname, please wait\" else .inventory | fromjson | .hostname end else .suggested_hostname end; def is_notable(USDvalidation): [\"failure\", \"pending\", \"error\"] | any(. 
== USDvalidation.status); def notable_validations(USDvalidations_info): [ USDvalidations_info // \"{}\" | fromjson | to_entries[].value[] | select(is_notable(.)) ]; { \"Hosts validations\": { \"Hosts\": [ .hosts[] | select(.status != \"installed\") | { \"id\": .id, \"name\": host_name(.), \"status\": .status, \"notable_validations\": notable_validations(.validations_info) } ] }, \"Cluster validations info\": { \"notable_validations\": notable_validations(.validations_info) } } ' -r", "{ \"Hosts validations\": { \"Hosts\": [ { \"id\": \"97ec378c-3568-460c-bc22-df54534ff08f\", \"name\": \"localhost.localdomain\", \"status\": \"insufficient\", \"notable_validations\": [ { \"id\": \"ntp-synced\", \"status\": \"failure\", \"message\": \"Host couldn't synchronize with any NTP server\" }, { \"id\": \"api-domain-name-resolved-correctly\", \"status\": \"error\", \"message\": \"Parse error for domain name resolutions result\" }, { \"id\": \"api-int-domain-name-resolved-correctly\", \"status\": \"error\", \"message\": \"Parse error for domain name resolutions result\" }, { \"id\": \"apps-domain-name-resolved-correctly\", \"status\": \"error\", \"message\": \"Parse error for domain name resolutions result\" } ] } ] }, \"Cluster validations info\": { \"notable_validations\": [] } }", "curl -X POST -s \"USDAPI_URL/api/assisted-install/v2/infra-envs/USDINFRA_ENV_ID/hosts/USDHOST_ID/actions/install\" -H \"Authorization: Bearer USD{API_TOKEN}\"", "curl -s \"USDAPI_URL/api/assisted-install/v2/clusters/USDCLUSTER_ID\" -H \"Authorization: Bearer USD{API_TOKEN}\" | jq '{ \"Cluster day-2 hosts\": [ .hosts[] | select(.status != \"installed\") | {id, requested_hostname, status, status_info, progress, status_updated_at, updated_at, infra_env_id, cluster_id, created_at} ] }'", "{ \"Cluster day-2 hosts\": [ { \"id\": \"a1c52dde-3432-4f59-b2ae-0a530c851480\", \"requested_hostname\": \"control-plane-1\", \"status\": \"added-to-existing-cluster\", \"status_info\": \"Host has rebooted and no further updates will be posted. 
Please check console for progress and to possibly approve pending CSRs\", \"progress\": { \"current_stage\": \"Done\", \"installation_percentage\": 100, \"stage_started_at\": \"2022-07-08T10:56:20.476Z\", \"stage_updated_at\": \"2022-07-08T10:56:20.476Z\" }, \"status_updated_at\": \"2022-07-08T10:56:20.476Z\", \"updated_at\": \"2022-07-08T10:57:15.306369Z\", \"infra_env_id\": \"b74ec0c3-d5b5-4717-a866-5b6854791bd3\", \"cluster_id\": \"8f721322-419d-4eed-aa5b-61b50ea586ae\", \"created_at\": \"2022-07-06T22:54:57.161614Z\" } ] }", "curl -s \"USDAPI_URL/api/assisted-install/v2/events?cluster_id=USDCLUSTER_ID\" -H \"Authorization: Bearer USD{API_TOKEN}\" | jq -c '.[] | {severity, message, event_time, host_id}'", "{\"severity\":\"info\",\"message\":\"Host compute-0: updated status from insufficient to known (Host is ready to be installed)\",\"event_time\":\"2022-07-08T11:21:46.346Z\",\"host_id\":\"9d7b3b44-1125-4ad0-9b14-76550087b445\"} {\"severity\":\"info\",\"message\":\"Host compute-0: updated status from known to installing (Installation is in progress)\",\"event_time\":\"2022-07-08T11:28:28.647Z\",\"host_id\":\"9d7b3b44-1125-4ad0-9b14-76550087b445\"} {\"severity\":\"info\",\"message\":\"Host compute-0: updated status from installing to installing-in-progress (Starting installation)\",\"event_time\":\"2022-07-08T11:28:52.068Z\",\"host_id\":\"9d7b3b44-1125-4ad0-9b14-76550087b445\"} {\"severity\":\"info\",\"message\":\"Uploaded logs for host compute-0 cluster 8f721322-419d-4eed-aa5b-61b50ea586ae\",\"event_time\":\"2022-07-08T11:29:47.802Z\",\"host_id\":\"9d7b3b44-1125-4ad0-9b14-76550087b445\"} {\"severity\":\"info\",\"message\":\"Host compute-0: updated status from installing-in-progress to added-to-existing-cluster (Host has rebooted and no further updates will be posted. 
Please check console for progress and to possibly approve pending CSRs)\",\"event_time\":\"2022-07-08T11:29:48.259Z\",\"host_id\":\"9d7b3b44-1125-4ad0-9b14-76550087b445\"} {\"severity\":\"info\",\"message\":\"Host: compute-0, reached installation stage Rebooting\",\"event_time\":\"2022-07-08T11:29:48.261Z\",\"host_id\":\"9d7b3b44-1125-4ad0-9b14-76550087b445\"}", "oc get nodes", "NAME STATUS ROLES AGE VERSION control-plane-1.example.com Ready master,worker 56m v1.25.0 compute-1.example.com Ready worker 11m v1.25.0", "curl -s -X POST https://api.openshift.com/api/assisted-install/v2/clusters -H \"Authorization: Bearer USD{API_TOKEN}\" -H \"Content-Type: application/json\" -d \"USD(jq --null-input --slurpfile pull_secret ~/Downloads/pull-secret.txt ' { \"name\": \"testcluster\", \"openshift_version\": \"<version-number>-multi\", 1 \"cpu_architecture\" : \"multi\" 2 \"high_availability_mode\": \"full\" 3 \"base_dns_domain\": \"example.com\", \"pull_secret\": USDpull_secret[0] | tojson } ')\" | jq '.id'", "curl https://api.openshift.com/api/assisted-install/v2/infra-envs -H \"Authorization: Bearer USD{API_TOKEN}\" -H \"Content-Type: application/json\" -d \"USD(jq --null-input --slurpfile pull_secret ~/Downloads/pull-secret.txt --arg cluster_id USD{CLUSTER_ID} ' { \"name\": \"testcluster-infra-env\", \"image_type\":\"full-iso\", \"cluster_id\": USDcluster_id, \"cpu_architecture\" : \"x86_64\" \"pull_secret\": USDpull_secret[0] | tojson } ')\" | jq '.id'", "curl https://api.openshift.com/api/assisted-install/v2/infra-envs/USD{INFRA_ENV_ID}/hosts/<host_id> -X PATCH -H \"Authorization: Bearer USD{API_TOKEN}\" -H \"Content-Type: application/json\" -d ' { \"host_role\":\"master\" } ' | jq", "curl -s -X POST https://api.openshift.com/api/assisted-install/v2/clusters -H \"Authorization: Bearer USD{API_TOKEN}\" -H \"Content-Type: application/json\" -d \"USD(jq --null-input --slurpfile pull_secret ~/Downloads/pull-secret.txt ' { \"name\": \"testcluster\", \"openshift_version\": \"4.12\", \"cpu_architecture\" : \"arm64\" \"high_availability_mode\": \"full\" \"base_dns_domain\": \"example.com\", \"pull_secret\": USDpull_secret[0] | tojson } ')\" | jq '.id'", "curl https://api.openshift.com/api/assisted-install/v2/infra-envs/USD{INFRA_ENV_ID}/hosts/<host_id> -X PATCH -H \"Authorization: Bearer USD{API_TOKEN}\" -H \"Content-Type: application/json\" -d ' { \"host_role\":\"worker\" } ' | jq", "oc get nodes -o wide", "oc get csr | grep Pending", "csr-5sd59 8m19s kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper <none> Pending csr-xzqts 10s kubernetes.io/kubelet-serving system:node:worker-6 <none> Pending", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 4h42m v1.24.0+3882f8f worker-1 Ready worker 4h29m v1.24.0+3882f8f master-2 Ready master 4h43m v1.24.0+3882f8f master-3 Ready master 4h27m v1.24.0+3882f8f worker-4 Ready worker 4h30m v1.24.0+3882f8f master-5 Ready master 105s v1.24.0+3882f8f", "apiVersion: metal3.io/v1alpha1 kind: BareMetalHost metadata: name: custom-master3 namespace: openshift-machine-api annotations: spec: automatedCleaningMode: metadata bootMACAddress: 00:00:00:00:00:02 bootMode: UEFI customDeploy: method: install_coreos externallyProvisioned: true online: true userData: name: master-user-data-managed namespace: openshift-machine-api", "oc create 
-f <filename>", "oc apply -f <filename>", "apiVersion: machine.openshift.io/v1beta1 kind: Machine metadata: annotations: machine.openshift.io/instance-state: externally provisioned metal3.io/BareMetalHost: openshift-machine-api/custom-master3 finalizers: - machine.machine.openshift.io generation: 3 labels: machine.openshift.io/cluster-api-cluster: test-day2-1-6qv96 machine.openshift.io/cluster-api-machine-role: master machine.openshift.io/cluster-api-machine-type: master name: custom-master3 namespace: openshift-machine-api spec: metadata: {} providerSpec: value: apiVersion: baremetal.cluster.k8s.io/v1alpha1 customDeploy: method: install_coreos hostSelector: {} image: checksum: \"\" url: \"\" kind: BareMetalMachineProviderSpec metadata: creationTimestamp: null userData: name: master-user-data-managed", "oc create -f <filename>", "oc apply -f <filename>", "#!/bin/bash Credit goes to https://bugzilla.redhat.com/show_bug.cgi?id=1801238. This script will link Machine object and Node object. This is needed in order to have IP address of the Node present in the status of the Machine. set -x set -e machine=\"USD1\" node=\"USD2\" if [ -z \"USDmachine\" -o -z \"USDnode\" ]; then echo \"Usage: USD0 MACHINE NODE\" exit 1 fi uid=USD(echo USDnode | cut -f1 -d':') node_name=USD(echo USDnode | cut -f2 -d':') proxy & proxy_pid=USD! function kill_proxy { kill USDproxy_pid } trap kill_proxy EXIT SIGINT HOST_PROXY_API_PATH=\"http://localhost:8001/apis/metal3.io/v1alpha1/namespaces/openshift-machine-api/baremetalhosts\" function wait_for_json() { local name local url local curl_opts local timeout local start_time local curr_time local time_diff name=\"USD1\" url=\"USD2\" timeout=\"USD3\" shift 3 curl_opts=\"USD@\" echo -n \"Waiting for USDname to respond\" start_time=USD(date +%s) until curl -g -X GET \"USDurl\" \"USD{curl_opts[@]}\" 2> /dev/null | jq '.' 2> /dev/null > /dev/null; do echo -n \".\" curr_time=USD(date +%s) time_diff=USD((USDcurr_time - USDstart_time)) if [[ USDtime_diff -gt USDtimeout ]]; then echo \"\\nTimed out waiting for USDname\" return 1 fi sleep 5 done echo \" Success!\" return 0 } wait_for_json oc_proxy \"USD{HOST_PROXY_API_PATH}\" 10 -H \"Accept: application/json\" -H \"Content-Type: application/json\" addresses=USD(oc get node -n openshift-machine-api USD{node_name} -o json | jq -c '.status.addresses') machine_data=USD(oc get machine -n openshift-machine-api -o json USD{machine}) host=USD(echo \"USDmachine_data\" | jq '.metadata.annotations[\"metal3.io/BareMetalHost\"]' | cut -f2 -d/ | sed 's/\"//g') if [ -z \"USDhost\" ]; then echo \"Machine USDmachine is not linked to a host yet.\" 1>&2 exit 1 fi The address structure on the host doesn't match the node, so extract the values we want into separate variables so we can build the patch we need. hostname=USD(echo \"USD{addresses}\" | jq '.[] | select(. | .type == \"Hostname\") | .address' | sed 's/\"//g') ipaddr=USD(echo \"USD{addresses}\" | jq '.[] | select(. 
| .type == \"InternalIP\") | .address' | sed 's/\"//g') host_patch=' { \"status\": { \"hardware\": { \"hostname\": \"'USD{hostname}'\", \"nics\": [ { \"ip\": \"'USD{ipaddr}'\", \"mac\": \"00:00:00:00:00:00\", \"model\": \"unknown\", \"speedGbps\": 10, \"vlanId\": 0, \"pxe\": true, \"name\": \"eth1\" } ], \"systemVendor\": { \"manufacturer\": \"Red Hat\", \"productName\": \"product name\", \"serialNumber\": \"\" }, \"firmware\": { \"bios\": { \"date\": \"04/01/2014\", \"vendor\": \"SeaBIOS\", \"version\": \"1.11.0-2.el7\" } }, \"ramMebibytes\": 0, \"storage\": [], \"cpu\": { \"arch\": \"x86_64\", \"model\": \"Intel(R) Xeon(R) CPU E5-2630 v4 @ 2.20GHz\", \"clockMegahertz\": 2199.998, \"count\": 4, \"flags\": [] } } } } ' echo \"PATCHING HOST\" echo \"USD{host_patch}\" | jq . curl -s -X PATCH USD{HOST_PROXY_API_PATH}/USD{host}/status -H \"Content-type: application/merge-patch+json\" -d \"USD{host_patch}\" get baremetalhost -n openshift-machine-api -o yaml \"USD{host}\"", "bash link-machine-and-node.sh custom-master3 worker-5", "oc rsh -n openshift-etcd etcd-worker-2 etcdctl member list -w table", "+--------+---------+--------+--------------+--------------+---------+ | ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS | LEARNER | +--------+---------+--------+--------------+--------------+---------+ |2c18942f| started |worker-3|192.168.111.26|192.168.111.26| false | |61e2a860| started |worker-2|192.168.111.25|192.168.111.25| false | |ead4f280| started |worker-5|192.168.111.28|192.168.111.28| false | +--------+---------+--------+--------------+--------------+---------+", "oc get clusteroperator etcd", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE etcd 4.11.5 True False False 5h54m", "oc rsh -n openshift-etcd etcd-worker-0 etcdctl endpoint health", "192.168.111.26 is healthy: committed proposal: took = 11.297561ms 192.168.111.25 is healthy: committed proposal: took = 13.892416ms 192.168.111.28 is healthy: committed proposal: took = 11.870755ms", "oc get Nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 6h20m v1.24.0+3882f8f worker-1 Ready worker 6h7m v1.24.0+3882f8f master-2 Ready master 6h20m v1.24.0+3882f8f master-3 Ready master 6h4m v1.24.0+3882f8f worker-4 Ready worker 6h7m v1.24.0+3882f8f master-5 Ready master 99m v1.24.0+3882f8f", "oc get ClusterOperators", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MSG authentication 4.11.5 True False False 5h57m baremetal 4.11.5 True False False 6h19m cloud-controller-manager 4.11.5 True False False 6h20m cloud-credential 4.11.5 True False False 6h23m cluster-autoscaler 4.11.5 True False False 6h18m config-operator 4.11.5 True False False 6h19m console 4.11.5 True False False 6h4m csi-snapshot-controller 4.11.5 True False False 6h19m dns 4.11.5 True False False 6h18m etcd 4.11.5 True False False 6h17m image-registry 4.11.5 True False False 6h7m ingress 4.11.5 True False False 6h6m insights 4.11.5 True False False 6h12m kube-apiserver 4.11.5 True False False 6h16m kube-controller-manager 4.11.5 True False False 6h16m kube-scheduler 4.11.5 True False False 6h16m kube-storage-version-migrator 4.11.5 True False False 6h19m machine-api 4.11.5 True False False 6h15m machine-approver 4.11.5 True False False 6h19m machine-config 4.11.5 True False False 6h18m marketplace 4.11.5 True False False 6h18m monitoring 4.11.5 True False False 6h4m network 4.11.5 True False False 6h20m node-tuning 4.11.5 True False False 6h18m openshift-apiserver 4.11.5 True False False 6h8m openshift-controller-manager 4.11.5 True False False 6h7m 
openshift-samples 4.11.5 True False False 6h12m operator-lifecycle-manager 4.11.5 True False False 6h18m operator-lifecycle-manager-catalog 4.11.5 True False False 6h19m operator-lifecycle-manager-pkgsvr 4.11.5 True False False 6h12m service-ca 4.11.5 True False False 6h19m storage 4.11.5 True False False 6h19m", "oc get ClusterVersion", "NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version 4.11.5 True False 5h57m Cluster version is 4.11.5", "oc delete bmh -n openshift-machine-api custom-master3", "oc get machine -A", "NAMESPACE NAME PHASE AGE openshift-machine-api custom-master3 Running 14h openshift-machine-api test-day2-1-6qv96-master-0 Failed 20h openshift-machine-api test-day2-1-6qv96-master-1 Running 20h openshift-machine-api test-day2-1-6qv96-master-2 Running 20h openshift-machine-api test-day2-1-6qv96-worker-0-8w7vr Running 19h openshift-machine-api test-day2-1-6qv96-worker-0-rxddj Running 19h", "oc delete machine -n openshift-machine-api test-day2-1-6qv96-master-0 machine.machine.openshift.io \"test-day2-1-6qv96-master-0\" deleted", "oc get nodes", "NAME STATUS ROLES AGE VERSION worker-1 Ready worker 19h v1.24.0+3882f8f master-2 Ready master 20h v1.24.0+3882f8f master-3 Ready master 19h v1.24.0+3882f8f worker-4 Ready worker 19h v1.24.0+3882f8f master-5 Ready master 15h v1.24.0+3882f8f", "oc logs -n openshift-etcd-operator etcd-operator-8668df65d-lvpjf", "E0927 07:53:10.597523 1 base_controller.go:272] ClusterMemberRemovalController reconciliation failed: cannot remove member: 192.168.111.23 because it is reported as healthy but it doesn't have a machine nor a node resource", "oc rsh -n openshift-etcd etcd-worker-2 etcdctl member list -w table; etcdctl endpoint health", "+--------+---------+--------+--------------+--------------+---------+ | ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS | LEARNER | +--------+---------+--------+--------------+--------------+---------+ |2c18942f| started |worker-3|192.168.111.26|192.168.111.26| false | |61e2a860| started |worker-2|192.168.111.25|192.168.111.25| false | |ead4f280| started |worker-5|192.168.111.28|192.168.111.28| false | +--------+---------+--------+--------------+--------------+---------+ 192.168.111.26 is healthy: committed proposal: took = 10.458132ms 192.168.111.25 is healthy: committed proposal: took = 11.047349ms 192.168.111.28 is healthy: committed proposal: took = 11.414402ms", "oc get nodes", "NAME STATUS ROLES AGE VERSION worker-1 Ready worker 20h v1.24.0+3882f8f master-2 NotReady master 20h v1.24.0+3882f8f master-3 Ready master 20h v1.24.0+3882f8f worker-4 Ready worker 20h v1.24.0+3882f8f master-5 Ready master 15h v1.24.0+3882f8f", "oc logs -n openshift-etcd-operator etcd-operator-8668df65d-lvpjf", "E0927 08:24:23.983733 1 base_controller.go:272] DefragController reconciliation failed: cluster is unhealthy: 2 of 3 members are available, worker-2 is unhealthy", "oc rsh -n openshift-etcd etcd-worker-3 etcdctl member list -w table", "+--------+---------+--------+--------------+--------------+---------+ | ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS | LEARNER | +--------+---------+--------+--------------+--------------+---------+ |2c18942f| started |worker-3|192.168.111.26|192.168.111.26| false | |61e2a860| started |worker-2|192.168.111.25|192.168.111.25| false | |ead4f280| started |worker-5|192.168.111.28|192.168.111.28| false | +--------+---------+--------+--------------+--------------+---------+", "etcdctl endpoint health", 
"{\"level\":\"warn\",\"ts\":\"2022-09-27T08:25:35.953Z\",\"logger\":\"client\",\"caller\":\"v3/retry_interceptor.go:62\",\"msg\":\"retrying of unary invoker failed\",\"target\":\"etcd-endpoints://0xc000680380/192.168.111.25\",\"attempt\":0,\"error\":\"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \\\"transport: Error while dialing dial tcp 192.168.111.25: connect: no route to host\\\"\"} 192.168.111.28 is healthy: committed proposal: took = 12.465641ms 192.168.111.26 is healthy: committed proposal: took = 12.297059ms 192.168.111.25 is unhealthy: failed to commit proposal: context deadline exceeded Error: unhealthy cluster", "oc delete machine -n openshift-machine-api test-day2-1-6qv96-master-2", "oc logs -n openshift-etcd-operator etcd-operator-8668df65d-lvpjf -f", "I0927 08:58:41.249222 1 machinedeletionhooks.go:135] skip removing the deletion hook from machine test-day2-1-6qv96-master-2 since its member is still present with any of: [{InternalIP } {InternalIP 192.168.111.26}]", "oc rsh -n openshift-etcd etcd-worker-3 etcdctl member list -w table", "+--------+---------+--------+--------------+--------------+---------+ | ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS | LEARNER | +--------+---------+--------+--------------+--------------+---------+ |2c18942f| started |worker-3|192.168.111.26|192.168.111.26| false | |61e2a860| started |worker-2|192.168.111.25|192.168.111.25| false | |ead4f280| started |worker-5|192.168.111.28|192.168.111.28| false | +--------+---------+--------+--------------+--------------+---------+", "etcdctl endpoint health", "{\"level\":\"warn\",\"ts\":\"2022-09-27T10:31:07.227Z\",\"logger\":\"client\",\"caller\":\"v3/retry_interceptor.go:62\",\"msg\":\"retrying of unary invoker failed\",\"target\":\"etcd-endpoints://0xc0000d6e00/192.168.111.25\",\"attempt\":0,\"error\":\"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \\\"transport: Error while dialing dial tcp 192.168.111.25: connect: no route to host\\\"\"} 192.168.111.28 is healthy: committed proposal: took = 13.038278ms 192.168.111.26 is healthy: committed proposal: took = 12.950355ms 192.168.111.25 is unhealthy: failed to commit proposal: context deadline exceeded Error: unhealthy cluster", "etcdctl member remove 61e2a86084aafa62", "Member 61e2a86084aafa62 removed from cluster 6881c977b97990d7", "etcdctl member list -w table", "+----------+---------+--------+--------------+--------------+-------+ | ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS |LEARNER| +----------+---------+--------+--------------+--------------+-------+ | 2c18942f | started |worker-3|192.168.111.26|192.168.111.26| false | | ead4f280 | started |worker-5|192.168.111.28|192.168.111.28| false | +----------+---------+--------+--------------+--------------+-------+", "oc get csr | grep Pending", "csr-5sd59 8m19s kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper <none> Pending csr-xzqts 10s kubernetes.io/kubelet-serving system:node:worker-6 <none> Pending", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve", "oc get nodes", "NAME STATUS ROLES AGE VERSION worker-1 Ready worker 22h v1.24.0+3882f8f master-3 Ready master 22h v1.24.0+3882f8f worker-4 Ready worker 22h v1.24.0+3882f8f master-5 Ready master 17h v1.24.0+3882f8f master-6 Ready master 2m52s 
v1.24.0+3882f8f", "oc create bmh -n openshift-machine-api custom-master3", "oc create machine -n openshift-machine-api custom-master3", "#!/bin/bash Credit goes to https://bugzilla.redhat.com/show_bug.cgi?id=1801238. This script will link Machine object and Node object. This is needed in order to have IP address of the Node present in the status of the Machine. set -x set -e machine=\"USD1\" node=\"USD2\" if [ -z \"USDmachine\" -o -z \"USDnode\" ]; then echo \"Usage: USD0 MACHINE NODE\" exit 1 fi uid=USD(echo USDnode | cut -f1 -d':') node_name=USD(echo USDnode | cut -f2 -d':') proxy & proxy_pid=USD! function kill_proxy { kill USDproxy_pid } trap kill_proxy EXIT SIGINT HOST_PROXY_API_PATH=\"http://localhost:8001/apis/metal3.io/v1alpha1/namespaces/openshift-machine-api/baremetalhosts\" function wait_for_json() { local name local url local curl_opts local timeout local start_time local curr_time local time_diff name=\"USD1\" url=\"USD2\" timeout=\"USD3\" shift 3 curl_opts=\"USD@\" echo -n \"Waiting for USDname to respond\" start_time=USD(date +%s) until curl -g -X GET \"USDurl\" \"USD{curl_opts[@]}\" 2> /dev/null | jq '.' 2> /dev/null > /dev/null; do echo -n \".\" curr_time=USD(date +%s) time_diff=USD((USDcurr_time - USDstart_time)) if [[ USDtime_diff -gt USDtimeout ]]; then echo \"\\nTimed out waiting for USDname\" return 1 fi sleep 5 done echo \" Success!\" return 0 } wait_for_json oc_proxy \"USD{HOST_PROXY_API_PATH}\" 10 -H \"Accept: application/json\" -H \"Content-Type: application/json\" addresses=USD(oc get node -n openshift-machine-api USD{node_name} -o json | jq -c '.status.addresses') machine_data=USD(oc get machine -n openshift-machine-api -o json USD{machine}) host=USD(echo \"USDmachine_data\" | jq '.metadata.annotations[\"metal3.io/BareMetalHost\"]' | cut -f2 -d/ | sed 's/\"//g') if [ -z \"USDhost\" ]; then echo \"Machine USDmachine is not linked to a host yet.\" 1>&2 exit 1 fi The address structure on the host doesn't match the node, so extract the values we want into separate variables so we can build the patch we need. hostname=USD(echo \"USD{addresses}\" | jq '.[] | select(. | .type == \"Hostname\") | .address' | sed 's/\"//g') ipaddr=USD(echo \"USD{addresses}\" | jq '.[] | select(. | .type == \"InternalIP\") | .address' | sed 's/\"//g') host_patch=' { \"status\": { \"hardware\": { \"hostname\": \"'USD{hostname}'\", \"nics\": [ { \"ip\": \"'USD{ipaddr}'\", \"mac\": \"00:00:00:00:00:00\", \"model\": \"unknown\", \"speedGbps\": 10, \"vlanId\": 0, \"pxe\": true, \"name\": \"eth1\" } ], \"systemVendor\": { \"manufacturer\": \"Red Hat\", \"productName\": \"product name\", \"serialNumber\": \"\" }, \"firmware\": { \"bios\": { \"date\": \"04/01/2014\", \"vendor\": \"SeaBIOS\", \"version\": \"1.11.0-2.el7\" } }, \"ramMebibytes\": 0, \"storage\": [], \"cpu\": { \"arch\": \"x86_64\", \"model\": \"Intel(R) Xeon(R) CPU E5-2630 v4 @ 2.20GHz\", \"clockMegahertz\": 2199.998, \"count\": 4, \"flags\": [] } } } } ' echo \"PATCHING HOST\" echo \"USD{host_patch}\" | jq . 
curl -s -X PATCH USD{HOST_PROXY_API_PATH}/USD{host}/status -H \"Content-type: application/merge-patch+json\" -d \"USD{host_patch}\" get baremetalhost -n openshift-machine-api -o yaml \"USD{host}\"", "bash link-machine-and-node.sh custom-master3 worker-3", "oc rsh -n openshift-etcd etcd-worker-3 etcdctl member list -w table", "+---------+-------+--------+--------------+--------------+-------+ | ID | STATUS| NAME | PEER ADDRS | CLIENT ADDRS |LEARNER| +---------+-------+--------+--------------+--------------+-------+ | 2c18942f|started|worker-3|192.168.111.26|192.168.111.26| false | | ead4f280|started|worker-5|192.168.111.28|192.168.111.28| false | | 79153c5a|started|worker-6|192.168.111.29|192.168.111.29| false | +---------+-------+--------+--------------+--------------+-------+", "oc get clusteroperator etcd", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE etcd 4.11.5 True False False 22h", "oc rsh -n openshift-etcd etcd-worker-3 etcdctl endpoint health", "192.168.111.26 is healthy: committed proposal: took = 9.105375ms 192.168.111.28 is healthy: committed proposal: took = 9.15205ms 192.168.111.29 is healthy: committed proposal: took = 10.277577ms", "oc get Nodes", "NAME STATUS ROLES AGE VERSION worker-1 Ready worker 22h v1.24.0+3882f8f master-3 Ready master 22h v1.24.0+3882f8f worker-4 Ready worker 22h v1.24.0+3882f8f master-5 Ready master 18h v1.24.0+3882f8f master-6 Ready master 40m v1.24.0+3882f8f", "oc get ClusterOperators", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.11.5 True False False 150m baremetal 4.11.5 True False False 22h cloud-controller-manager 4.11.5 True False False 22h cloud-credential 4.11.5 True False False 22h cluster-autoscaler 4.11.5 True False False 22h config-operator 4.11.5 True False False 22h console 4.11.5 True False False 145m csi-snapshot-controller 4.11.5 True False False 22h dns 4.11.5 True False False 22h etcd 4.11.5 True False False 22h image-registry 4.11.5 True False False 22h ingress 4.11.5 True False False 22h insights 4.11.5 True False False 22h kube-apiserver 4.11.5 True False False 22h kube-controller-manager 4.11.5 True False False 22h kube-scheduler 4.11.5 True False False 22h kube-storage-version-migrator 4.11.5 True False False 148m machine-api 4.11.5 True False False 22h machine-approver 4.11.5 True False False 22h machine-config 4.11.5 True False False 110m marketplace 4.11.5 True False False 22h monitoring 4.11.5 True False False 22h network 4.11.5 True False False 22h node-tuning 4.11.5 True False False 22h openshift-apiserver 4.11.5 True False False 163m openshift-controller-manager 4.11.5 True False False 22h openshift-samples 4.11.5 True False False 22h operator-lifecycle-manager 4.11.5 True False False 22h operator-lifecycle-manager-catalog 4.11.5 True False False 22h operator-lifecycle-manager-pkgsvr 4.11.5 True False False 22h service-ca 4.11.5 True False False 22h storage 4.11.5 True False False 22h", "oc get ClusterVersion", "NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version 4.11.5 True False 22h Cluster version is 4.11.5" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/assisted_installer_for_openshift_container_platform/expanding-the-cluster
11.7. Expanding Volumes
11.7. Expanding Volumes Warning Do not perform this process if geo-replication is configured. There is a race condition tracked by Bug 1683893 that means data can be lost when converting a volume if geo-replication is enabled. Volumes can be expanded while the trusted storage pool is online and available. For example, you can add a brick to a distributed volume, which increases distribution and adds capacity to the Red Hat Gluster Storage volume. Similarly, you can add a group of bricks to a replicated or distributed replicated volume, which increases the capacity of the Red Hat Gluster Storage volume. When expanding replicated or distributed replicated volumes, the number of bricks being added must be a multiple of the replica count. This also applies to arbitrated volumes. For example, to expand a distributed replicated volume with a replica count of 3, you need to add bricks in multiples of 3 (such as 6, 9, or 12). You can also convert a replica 2 volume into an arbitrated replica 3 volume by following the instructions in Section 5.7.5, "Converting to an arbitrated volume". Important Converting an existing distribute volume to a replicate or distribute-replicate volume is not supported. Expanding a Volume From any server in the trusted storage pool, use the following command to probe the server on which you want to add a new brick: For example: Add the bricks using the following command: For example: Check the volume information using the following command: The command output displays information similar to the following: Rebalance the volume to ensure that files are distributed to the new brick. Use the rebalance command as described in Section 11.11, "Rebalancing Volumes". The add-brick command should be followed by a rebalance operation to ensure better utilization of the added bricks. 11.7.1. Expanding a Tiered Volume Warning Tiering is considered deprecated as of Red Hat Gluster Storage 3.5. Red Hat no longer recommends its use, and does not support tiering in new deployments or in existing deployments that upgrade to Red Hat Gluster Storage 3.5.3. You can add a group of bricks to a cold tier volume and to the hot tier volume to increase the capacity of the Red Hat Gluster Storage volume. 11.7.1.1. Expanding a Cold Tier Volume Expanding a cold tier volume is the same as expanding a non-tiered volume. If you are reusing a brick, ensure that you perform the steps listed in Section 5.3.3, "Reusing a Brick from a Deleted Volume". Detach the tier by performing the steps listed in Section 16.7, "Detaching a Tier from a Volume (Deprecated)". From any server in the trusted storage pool, use the following command to probe the server on which you want to add a new brick: For example: Add the bricks using the following command: For example: Rebalance the volume to ensure that files are distributed to the new brick. Use the rebalance command as described in Section 11.11, "Rebalancing Volumes". The add-brick command should be followed by a rebalance operation to ensure better utilization of the added bricks. Reattach the tier to the volume with both the old and new (expanded) bricks: # gluster volume tier VOLNAME attach [replica COUNT] NEW-BRICK... Important When you reattach a tier, an internal process called fix-layout commences to prepare the hot tier for use. This process takes time and there will be a delay in starting the tiering activities. If you are reusing a brick, be sure to completely wipe the existing data before attaching it to the tiered volume.
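The probe, add-brick, and rebalance sequence used in the procedures above can be consolidated into a single workflow. The following is a minimal sketch for a replica 3 distributed replicated volume; the host names (server7, server8, and server9), brick paths, and the volume name rep3-volume are hypothetical placeholders rather than values used elsewhere in this guide, so substitute the names from your own environment:
# gluster peer probe server7
# gluster peer probe server8
# gluster peer probe server9
# gluster volume add-brick rep3-volume server7:/rhgs/brick7 server8:/rhgs/brick8 server9:/rhgs/brick9
# gluster volume rebalance rep3-volume start
# gluster volume rebalance rep3-volume status
The add-brick command supplies three bricks because the replica count is 3, and the rebalance status command lets you confirm that the rebalance completes before you rely on the new capacity.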
11.7.1.2. Expanding a Hot Tier Volume You can expand a hot tier volume by attaching additional bricks when you reattach the tier. Detach the tier by performing the steps listed in Section 16.7, "Detaching a Tier from a Volume (Deprecated)". Reattach the tier to the volume with both the old and new (expanded) bricks: # gluster volume tier VOLNAME attach [replica COUNT] NEW-BRICK... For example, Important When you reattach a tier, an internal process called fix-layout commences to prepare the hot tier for use. This process takes time and there will be a delay in starting the tiering activities. If you are reusing a brick, be sure to completely wipe the existing data before attaching it to the tiered volume. 11.7.2. Expanding a Dispersed or Distributed-dispersed Volume You can expand a dispersed or distributed-dispersed volume by adding new bricks. The number of additional bricks must be a multiple of the basic configuration of the volume. For example, if you have a volume with a (4+2 = 6) configuration, you must add 6 (4+2) bricks or a multiple of 6 bricks (such as 12, 18, or 24). Note If you add bricks to a dispersed volume, it is converted to a distributed-dispersed volume, and the existing dispersed volume is treated as a dispersed subvolume. From any server in the trusted storage pool, use the following command to probe the server on which you want to add new bricks: For example: Add the bricks using the following command: For example: (Optional) View the volume information after adding the bricks: For example: Rebalance the volume to ensure that the files are distributed to the new bricks. Use the rebalance command as described in Section 11.11, "Rebalancing Volumes". The add-brick command should be followed by a rebalance operation to ensure better utilization of the added bricks.
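As a consolidated illustration of the dispersed case, the following is a minimal sketch of expanding a (4+2) dispersed volume into a 2 x (4+2) distributed-dispersed volume; the host names, brick paths, and the volume name ec-volume are hypothetical placeholders and must be replaced with the values from your own deployment:
# gluster peer probe server4
# gluster peer probe server5
# gluster peer probe server6
# gluster volume add-brick ec-volume server4:/rhgs/brick7 server4:/rhgs/brick8 server5:/rhgs/brick9 server5:/rhgs/brick10 server6:/rhgs/brick11 server6:/rhgs/brick12
# gluster volume info ec-volume
# gluster volume rebalance ec-volume start
Exactly six bricks are added because the basic configuration of the volume is 4+2; after the add-brick operation, the gluster volume info output should report the brick layout as 2 x (4 + 2) = 12.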
11.7.3. Expanding Underlying Logical Volume You can expand the size of a logical volume using the lvextend command. Red Hat recommends following this process when you want to increase the storage capacity of replicated, arbitrated-replicated, or dispersed volumes, but not when you want to expand distributed-replicated, arbitrated-distributed-replicated, or distributed-dispersed volumes. Warning It is recommended to involve the Red Hat Support team while performing this operation. When extending a logical volume online, ensure that the associated brick process is killed manually; certain operations might be consuming data or reading from or writing to a file on the associated brick, and proceeding with the extension before killing the brick process can have an adverse effect on performance. Identify the brick process ID and kill it using the following command: Stop all volumes using the brick with the following command: Check whether the new disk is visible using the lsblk command: Create a new physical volume using the following command: Use the following command to verify that the physical volume is created: Extend the existing volume group: Use the following commands to check the size of the volume group and verify that it reflects the new addition: Ensure that the volume group has enough space to extend the logical volume: Retrieve the file system name using the following command: Extend the logical volume using the following command: In the case of a thin pool, extend the pool using the following command: In the above commands, n is the additional size in GB to be extended. Execute the lvdisplay command to fetch the pool name. Use the following command to check whether the logical volume is extended: Execute the following command to expand the file system to accommodate the extended logical volume: Remount the file system using the following command: Start all the volumes with the force option:
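A minimal end-to-end sketch of this procedure follows. The device name /dev/sdc, volume group rhgs_vg, thin pool rhgs_pool, logical volume rhgs_lv, brick mount point /rhgs/brick1, and the 50 GB increment are hypothetical placeholders; take the real names from the gluster volume status, vgs, and lvs output on your own system:
# gluster volume status
# kill -9 brick-process-id
# gluster volume stop VOLNAME
# lsblk
# pvcreate /dev/sdc
# pvs
# vgextend rhgs_vg /dev/sdc
# vgdisplay rhgs_vg
# df -h
# lvextend -L+50G rhgs_vg/rhgs_pool
# lvextend -L+50G /dev/mapper/rhgs_vg-rhgs_lv
# lvdisplay rhgs_vg
# xfs_growfs /dev/rhgs_vg/rhgs_lv
# mount -o remount /dev/rhgs_vg/rhgs_lv /rhgs/brick1
# gluster volume start VOLNAME force
In this sketch the thin pool is extended before the thin logical volume that backs the brick, and xfs_growfs is run only after lvdisplay confirms that the logical volume has grown.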
[ "gluster peer probe HOSTNAME", "gluster peer probe server5 Probe successful gluster peer probe server6 Probe successful", "gluster volume add-brick VOLNAME NEW_BRICK", "gluster volume add-brick test-volume server5:/rhgs/brick5/ server6:/rhgs/brick6/ Add Brick successful", "gluster volume info", "Volume Name: test-volume Type: Distribute-Replicate Status: Started Number of Bricks: 6 Bricks: Brick1: server1:/rhgs/brick1 Brick2: server2:/rhgs/brick2 Brick3: server3:/rhgs/brick3 Brick4: server4:/rhgs/brick4 Brick5: server5:/rhgs/brick5 Brick6: server6:/rhgs/brick6", "gluster peer probe HOSTNAME", "gluster peer probe server5 Probe successful gluster peer probe server6 Probe successful", "gluster volume add-brick VOLNAME NEW_BRICK", "gluster volume add-brick test-volume server5:/rhgs/brick5/ server6:/rhgs/brick6/", "gluster volume tier test-volume attach replica 3 server1:/rhgs/tier5 server2:/rhgs/tier6 server1:/rhgs/tier7 server2:/rhgs/tier8", "gluster peer probe HOSTNAME", "gluster peer probe server4 Probe successful gluster peer probe server5 Probe successful gluster peer probe server6 Probe successful", "gluster volume add-brick VOLNAME NEW_BRICK", "gluster volume add-brick test-volume server4:/rhgs/brick7 server4:/rhgs/brick8 server5:/rhgs/brick9 server5:/rhgs/brick10 server6:/rhgs/brick11 server6:/rhgs/brick12", "gluster volume info VOLNAME", "gluster volume info test-volume Volume Name: test-volume Type: Distributed-Disperse Volume ID: 2be607f2-f961-4c4b-aa26-51dcb48b97df Status: Started Snapshot Count: 0 Number of Bricks: 2 x (4 + 2) = 12 Transport-type: tcp Bricks: Brick1: server1:/rhgs/brick1 Brick2: server1:/rhgs/brick2 Brick3: server2:/rhgs/brick3 Brick4: server2:/rhgs/brick4 Brick5: server3:/rhgs/brick5 Brick6: server3:/rhgs/brick6 Brick7: server4:/rhgs/brick7 Brick8: server4:/rhgs/brick8 Brick9: server5:/rhgs/brick9 Brick10: server5:/rhgs/brick10 Brick11: server6:/rhgs/brick11 Brick12: server6:/rhgs/brick12 Options Reconfigured: transport.address-family: inet performance.readdir-ahead: on nfs.disable: on", "gluster volume status kill -9 brick-process-id", "gluster volume stop VOLNAME", "lsblk", "pvcreate /dev/ PHYSICAL_VOLUME_NAME", "pvs", "vgextend VOLUME_GROUP_NAME /dev/ PHYSICAL_VOLUME_NAME", "vgscan", "vgdisplay VOLUME_GROUP_NAME", "df -h", "lvextend -L+ n G /dev/mapper/ LOGICAL_VOLUME_NAME - VOLUME_GROUP_NAME", "lvextend -L+ n G VOLUME_GROUP_NAME/POOL_NAME", "lvdisplay VOLUME_GROUP_NAME", "xfs_growfs /dev/ VOLUME_GROUP_NAME / LOGICAL_VOLUME_NAME", "mount -o remount /dev/ VOLUME_GROUP_NAME / LOGICAL_VOLUME_NAME /bricks/ path_to_brick", "gluster volume start VOLNAME force" ]
https://docs.redhat.com/en/documentation/red_hat_gluster_storage/3.5/html/administration_guide/expanding_volumes
25.2. Creating an iSCSI Initiator
25.2. Creating an iSCSI Initiator In Red Hat Enterprise Linux 7, the iSCSI service is lazily started by default: the service starts after you run the iscsiadm command. Procedure 25.7. Creating an iSCSI Initiator Install iscsi-initiator-utils: If the ACL was given a custom name in Section 25.1.6, "Configuring ACLs", modify the /etc/iscsi/initiatorname.iscsi file accordingly. For example: Discover the target: Log in to the target with the target IQN you discovered in step 3: This procedure can be followed for any number of initiators connected to the same LUN as long as their specific initiator names are added to the ACL as described in Section 25.1.6, "Configuring ACLs". Find the iSCSI disk name and create a file system on this iSCSI disk: Replace disk_name with the iSCSI disk name displayed in /var/log/messages. Mount the file system: Replace /mount/point with the mount point of the partition. Edit the /etc/fstab file to mount the file system automatically when the system boots: Replace disk_name with the iSCSI disk name. Log off from the target:
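Before creating the file system, it can help to confirm the session and identify which block device the new LUN received. The following is a brief sketch using standard utilities; the device name /dev/sdb and the mount point /mnt/iscsi are hypothetical examples and must be replaced with the names reported on your system:
# iscsiadm -m session -P 3 | grep -A 2 "Attached scsi disk"
# lsblk
# mkfs.ext4 /dev/sdb
# mkdir /mnt/iscsi
# mount /dev/sdb /mnt/iscsi
Because iSCSI device names can change between boots, a more robust /etc/fstab entry references the file system UUID reported by blkid together with the _netdev option, for example: UUID=uuid-from-blkid /mnt/iscsi ext4 _netdev 0 0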
[ "yum install iscsi-initiator-utils -y", "cat /etc/iscsi/initiatorname.iscsi InitiatorName=iqn.2006-04.com.example.node1 # vi /etc/iscsi/initiatorname.iscsi", "iscsiadm -m discovery -t st -p target-ip-address 10.64.24.179:3260,1 iqn.2006-04.com.example:3260", "iscsiadm -m node -T iqn.2006-04.com.example:3260 -l Logging in to [iface: default, target: iqn.2006-04.com.example:3260, portal: 10.64.24.179,3260] (multiple) Login to [iface: default, target: iqn.2006-04.com.example:3260, portal: 10.64.24.179,3260] successful.", "grep \"Attached SCSI\" /var/log/messages # mkfs.ext4 /dev/ disk_name", "mkdir /mount/point # mount /dev/ disk_name /mount/point", "vim /etc/fstab /dev/ disk_name /mount/point ext4 _netdev 0 0", "iscsiadm -m node -T iqn.2006-04.com.example:3260 -u" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/storage_administration_guide/osm-create-iscsi-initiator
6.6. Removing Lost Physical Volumes from a Volume Group
6.6. Removing Lost Physical Volumes from a Volume Group If you lose a physical volume, you can activate the remaining physical volumes in the volume group with the --partial argument of the vgchange command. You can remove all the logical volumes that used that physical volume from the volume group with the --removemissing argument of the vgreduce command. It is recommended that you run the vgreduce command with the --test argument to verify what you will be destroying. Like most LVM operations, the vgreduce command is reversible in a sense: if you immediately use the vgcfgrestore command, you can restore the volume group metadata to its previous state. For example, if you used the --removemissing argument of the vgreduce command without the --test argument and find that you have removed logical volumes you wanted to keep, you can still replace the physical volume and use another vgcfgrestore command to return the volume group to its previous state.
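A minimal sketch of this recovery sequence is shown below; the volume group name myvg is a hypothetical placeholder:
# vgchange --partial -a y myvg
# vgreduce --removemissing --test myvg
# vgreduce --removemissing myvg
# vgcfgrestore myvg
The --test run reports which logical volumes would be removed without changing anything, and vgcfgrestore is needed only if, after the real vgreduce, you decide to roll the volume group metadata back to its previous state.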
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/cluster_logical_volume_manager/lost_PV_remove_from_VG
7.10. bacula
7.10. bacula 7.10.1. RHBA-2012:1469 - bacula bug fix update Updated bacula packages that fix multiple bugs are now available for Red Hat Enterprise Linux 6. The bacula packages provide a tool set that allows you to manage the backup, recovery, and verification of computer data across a network of different computers. Bug Fixes BZ# 728693 Prior to this update, the logwatch tool did not check the "/var/log/bacula*" files. As a consequence, the logwatch report was incomplete. This update adds all log files to the logwatch configuration file. Now, the logwatch report is complete. BZ# 728697 Prior to this update, the bacula tool itself created the "/var/spool/bacula/log" file. As a consequence, this log file used an incorrect SELinux context. This update modifies the underlying code to create the /var/spool/bacula/log file in the bacula package. Now, this log file has the correct SELinux context. BZ# 729008 Prior to this update, the bacula packages were built without the CFLAGS variable "$RPM_OPT_FLAGS". As a consequence, the debug information was not generated. This update modifies the underlying code to build the packages with CFLAGS="$RPM_OPT_FLAGS". Now, the debug information is generated as expected. BZ# 756803 Prior to this update, the perl script that generates the my.conf file contained a misprint. As a consequence, the port variable was not set correctly. This update corrects the misprint. Now, the port variable is set as expected. BZ# 802158 Prior to this update, values for the "show pool" command were obtained from the "res->res_client" item. As a consequence, the output displayed incorrect job and file retention values. This update uses the "res->res_pool" item to obtain the correct values. BZ# 862240 Prior to this update, the bacula-storage-common utility wrongly removed alternatives for the bcopy function during the update. As a consequence, the link to bcopy.{mysql,sqlite,postgresql} disappeared after updating. This update modifies the underlying code to remove these links directly in storage-{mysql,sqlite,postgresql} and not in bacula-storage-common. All users of bacula are advised to upgrade to these updated packages, which fix these bugs.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.4_technical_notes/bacula